A quadratic potential in a light cone QCD inspired model

The general equation from previous work is specialized to a quadratic potential $V(r)=-a+\frac12 f r^2$ acting in the space of spherically symmetric S wave functions. The fine and hyperfine interaction then creates a position-dependent mass $\widetilde m(r)$ in the effective kinetic energy of the associated Schr\"odinger equation. The results are compared with the available experimental and theoretical spectral data on the $\pi$ and the $\rho$. Solving the eigenvalue problem within the usual oscillator approach induces a certain amount of arbitrariness. Despite this, the agreement with experimental data is within the experimental error and better than other calculations, including Godfrey and Isgur \cite{GodIsg85} and Baldicchi and Prosperi \cite{BalPro02}. The shortcoming can be removed easily in more elaborate work.

Because it is shorter, $\sigma_1\cdot\sigma_2$ is kept explicit in the equations as an abbreviation for Eq. (2). With a quadratic potential with spring constant $f$, the Hamiltonian (1) becomes a non-local Schrödinger equation. Shaping notation, the Hamiltonian is written as in Eq. (5); the non-locality of the Hamiltonian resides in the position-dependent mass $\widetilde m(r)$. To solve this Hamiltonian, one must go on a computer. The Hamiltonian in Eq. (5) looks like a conventional instant form Hamiltonian as obtained by quantizing the system at equal usual time. But it must be emphasized that it continues to be a genuine front form or light cone Hamiltonian [5], derived from the latter by a series of exact unitary transformations [1,3].

The model Hamiltonian and its parameters

In this first round, I try to avoid going on the computer as far as possible, for the following reason. According to renormalization theory, the renormalization group invariants (parameters) must be determined from experiment. This is a strongly non-linear problem. In order to get a first and rough estimate, the Hamiltonian is simplified here until it has a form which is amenable to analytical solution. Therefore, all intractable terms in the above are replaced by mean values and related to the experimentally accessible mean square radius $\langle r^2\rangle$ [6]. In effect, this substitution is the only true assumption in the present model. I thus consider the model Hamiltonian of Eq. (8), with the abbreviations defined there. Its eigenvalues are of oscillator form, with $\xi_0=\frac32$ and $\eta_n=2n$. The invariant mass squares are then related to experiment.

For equal masses $m_1=m_2=m$, the model has the 3 parameters $m$, $f$ and $a$. One thus needs 3 empirical data to determine them; I choose the triple given in Eq. (14). The spectrum is labeled self-explanatorily by the flavor composition, $M_n=M_{d\bar u,sn}$ for singlets and $M_n=M_{d\bar u,tn}$ for triplets. The triple chosen in Eq. (14) exposes a certain asymmetry: the excited $\rho$ is chosen since its experimental limit of error is very much smaller than the one for the corresponding $\pi$ state. Only the ground state mass of the $\pi$ is known very accurately, i.e. $m_{\pi^+}=139.57018\pm0.00035$ MeV; in the present work only the first 4 digits are used.

For equal masses, the above abbreviations simplify accordingly. The experiment defines 2 certainly positive differences, Eq. (16), and a third one can be constructed by observation, Eq. (17). Keeping in mind that $\omega^2=2f/m$, one can remove trivial kinematic factors and define 3 experimental quantities $B$, $C$ and $D$. Substituting $f=mC^2$ and $ma=B^2+m^2$ gives a quadratic equation for $m$. Having $m$, the $f$ and $a$ are calculated from Eq. (17).
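Where the eigenvalue problem is referred to a computer, the following minimal sketch shows the kind of check involved: it diagonalizes the S-wave radial Schrödinger equation with $V(r)=-a+\frac12 fr^2$ on a grid and compares with the analytic oscillator levels $E_n=-a+\omega(\xi_0+\eta_n)$, $\omega=\sqrt{2f/m}$. The parameter values are hypothetical, natural units are used, and the mass is taken constant, i.e. the position-dependent $\widetilde m(r)$ of Eq. (5) is not modeled here.

```python
# Minimal sketch (not from the paper): diagonalize the S-wave radial equation
# for V(r) = -a + f r^2 / 2 and compare with E_n = -a + omega*(3/2 + 2n).
# Hypothetical parameters, natural units (hbar = 1), constant reduced mass.
import numpy as np
from scipy.linalg import eigh_tridiagonal

m, f, a = 0.5, 0.05, 1.0            # quark mass, spring constant, shift (hypothetical)
mu = m / 2.0                        # reduced mass for equal quark masses
omega = np.sqrt(2.0 * f / m)        # oscillator frequency, omega^2 = 2f/m

N, rmax = 4000, 40.0                # grid for u(r) with u(0) = u(rmax) = 0
r = np.linspace(rmax / N, rmax, N)
dr = r[1] - r[0]

V = -a + 0.5 * f * r**2
diag = 1.0 / (mu * dr**2) + V       # second-difference kinetic term + potential
off = np.full(N - 1, -0.5 / (mu * dr**2))

E = eigh_tridiagonal(diag, off, select="i", select_range=(0, 3))[0]
for n, E_n in enumerate(E):
    print(f"n={n}: numeric {E_n:+.5f}   analytic {-a + omega * (1.5 + 2 * n):+.5f}")
```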
The position-dependent mass changes the relation between the mean square radius and its experimental value. Therefore, I introduce a fudge factor $f^*$. Since all mesons have about the same size [6], by order of magnitude, this number is kept universal. The fudge factor is introduced here to account, in some global fashion, for the tremendous simplification introduced by replacing Eq. (5) with Eq. (8). Some large-scale variations are compiled in Table 3. The mass spectra, including the ground states, vary very little with the fudge factor. Any variations would show up fastest for the high excitations; for this reason, the masses for $n=4$ are included in the table. I do not understand this insensitivity from a mathematical or numerical point of view. The major effect of $f^*$ is the ease with which one can change the quark mass. A value of $f^*\sim40$ leads to the 20 MeV for the quark mass quoted in [10]. Here $f^*=4$ is chosen. The other components of the spectrum, both for the $\rho$ and the $\pi$, are then obtained for free. They are compiled in Table 4, below.

In principle, one could determine the heavier quark masses analytically from the hyperfine splittings. The results so obtained are, however, not very reasonable, see Table 1: the experimental numbers are insufficiently accurate. Therefore, I determine them numerically from the singlets $M_{us,s0}$, $M_{uc,s0}$ and $M_{ub,s0}$ and compile them in Table 2.

Results and Discussion

Unflavored light mesons. The results for the $\pi$-$\rho$ system are compiled in Table 4. The experimental points are taken from Hagiwara et al. [7]. It is no surprise that theory and experiment coincide for the $\pi^+$, the $\rho^+$ and the $\rho^+(1450)$, because these data have been used to determine the parameters. The remarkable thing is that one can perform such a fit at all. The model reproduces the huge mass of the excited pion within the error limit. This solves the long-standing puzzle of why a physical system can have a first excited state with a ten times larger mass. The remaining three calculated masses of the $\pi$-$\rho$ sector agree with experiment almost within the error bars. The model underestimates the second $\pi$ excitation by only 26 MeV. The second excitation of the $\rho$ is overestimated by a comparatively large 224 MeV, but the experiment for the $\rho^+(3\,^3S_1)$ needs confirmation. The third excitation of the $\rho^+(4\,^3S_1)$ is overestimated by 224 MeV.

The table also includes a comparison with other theoretical calculations, among them the results from a recent oscillator model [8]. That model is even simpler than the present one: it works with a hyperfine splitting only, but suppresses the mechanism of a position-dependent mass. Despite this, the results of [8] coincide practically with the present ones. I have also included the results from the pioneering work of Godfrey and Isgur [9], as a prototype of a phenomenological model, and from a recent advanced calculation by Baldicchi and Prosperi [10]. Neither of these models has much in common with the present one. They fail to reproduce the pion, this mystery particle of QCD.

Strange mesons. The S wave $K^+$ and $K^{*+}$ spectra are given in Table 5. The mass of the singlet ground state is used to determine the mass parameter $m_s$. Except for the ground states, the experiments carry many ambiguities about the quantum number assignment for the $K$ and $K^*$ mesons.
The model prediction for the triplet ground state underestimates the experimental value by 20 MeV. Both the first and the second excited state of the $K$ ($2\,^1S_0$ and $3\,^1S_0$) are not confirmed. Another unconfirmed resonance, with mass $1.629\pm0.027$ GeV, lying between the $2\,^1S_0$ and the $3\,^1S_0$, was assigned to be a singlet $K$. Apparently there is no position for it in the $K$ spectrum if it is an S wave state. However, according to its mass, it might well be the first excited state of the $K^*$. Taking the numbers in the table, the discrepancies are 88 and 69 MeV for the singlet and triplet $n=2$ states, respectively. The second excited state of the $K$ ($3\,^1S_0$) differs by only 48 MeV, but the datum needs confirmation.

Heavy mesons. The S wave $u\bar c$, $u\bar b$, $s\bar c$, $s\bar b$ and $c\bar b$ meson spectra are given in Table 6. No excitations were observed for these mesons. The $u\bar c$ singlet is used to determine the mass parameter $m_c$; its ground state mass ($D^0$) therefore coincides with experiment. The model overestimates the mass of the triplet $D^{*0}$ ($1\,^3S_1$) by about 50 MeV. The $u\bar b$ singlet is used to determine the mass parameter $m_b$; its ground state mass ($B^+$) therefore coincides with experiment. The model overestimates the mass of the triplet $B^{*+}$ ($1\,^3S_1$) by only 16 MeV. No data in the $s\bar c$ mesons are used to determine model parameters; model and experiment differ by 27 and 40 MeV for singlet and triplet, respectively. For the $s\bar b$ mesons, model and experiment differ by 27 and 4 MeV for singlet and triplet, respectively. For the $c\bar b$ mesons, model and experiment agree for the singlet; the triplet data are unknown. The model predictions for the complete spectrum are compiled in Table 7, for easy reference. The flavor-diagonal mesons like $s\bar s$, $c\bar c$ or $b\bar b$ may not be calculated in the model, see [5].

Conclusions

The agreement between the present simple model with an oscillator potential and the experiment is generally good. This is good news, since harmonic interactions are easy to work with in many-body problems; the present approach will be useful for considering baryons and nuclei. With the 4 mass parameters of the up/down, strange, charm and bottom quarks, the model has only 2 additional parameters for the harmonic oscillator potential. In principle, the fudge factor must be counted as a parameter, but as seen above, the choice of the up/down mass and the fudge factor is strongly coupled. The 6 canonical parameters of the model generate a reasonably good agreement with the 21 data points available. Note that renormalized gauge field theory also has 4+1+1 parameters: the 4 flavor quark masses, the strong coupling constant $\alpha_s$, and the renormalization scale $\lambda$. Of course, they can be mapped into each other [1,2,3]. Once one has determined the parameters in such a first guess, one should relax the model assumption, Eq. (7), and
A Novel Method to Predict Processor Performance by Modeling Different Architecture Parameters

Predicting processor throughput and performance is one of the essential aspects of computer architecture. It is crucial to model processor performance behavior for future architectures based on the existing data set. Modeling processor performance for a given workload enables architects to enhance processor features to meet specific performance targets for a given benchmark. An estimation method that predicts performance from a single micro-architecture parameter is limited, given the need to model multiple parameters simultaneously. In this paper, we propose a novel performance prediction method for the SPEC CPU 2006 and HDxPRT 2014 benchmarks based on a combination of measured and estimated performance data. The performance projection model predicts processor performance while altering multiple microarchitecture parameters simultaneously, such as memory speed, number of cores and core frequency. We also present a detailed timing analysis for each processor sub-component. The model is verified to project performance with less than a 5% error margin between the projected and measured baseline.

Introduction

Assessing processor performance is critical for the effectiveness of the entire system, combining both hardware and software. The task of performance estimation is challenging, given that performance depends on different software and hardware variables. Despite the complexity of this task, it is still essential to predict processor performance for a given benchmark and to be able to change the micro-architecture parameters so as to estimate future performance numbers. The first task is to understand what determines processor performance. The two apparent measures of processor performance are throughput and latency.

Unlike transaction-processing workloads, some workloads are extremely diverse in their use of, and stress on, different server sub-systems. Some are Central Processing Unit (CPU)-bound and others are strongly memory-bound, and there is a big difference between the two. The most important characteristic affecting the performance of any workload on any system is the number of main memory transactions it performs. For CPU-bound workloads, performance is gated by activity on the processor chip; the critical performance parameters are core frequency and the latencies and bandwidth of the processor caches, while the memory subsystem is comparatively unimportant. Systems are usually cheaper to build for CPU-bound workloads. Memory-bound workloads are the opposite: performance is mainly determined by off-chip events, primarily how many main memory transactions can be completed per unit time. CPU-bound workloads have few main memory transactions and are constrained by core frequency, cache latency/bandwidth, cache design and pipeline. Memory-bound workloads have many main memory transactions and are limited by memory bandwidth and sometimes by memory latency.

In this paper, we present a novel performance prediction method based on a mathematical regression approach, which takes different processor microarchitecture parameters as simultaneous inputs to predict performance for the SPEC CPU 2006 and HDxPRT benchmarks. The measured baseline is a Nehalem processor, whose measured data are used for the model.
We propose a projection model that combines measurement with mathematical methods, using regression data analysis and Amdahl's Law. The measured data ensure that we capture the proper effect of the workload behavior and its architecture capabilities. The performance contributions from the processor and memory can be mathematically determined using Amdahl's Law and carefully crafted experiments: we can look at the impact of different architectural features for CPU and memory by studying them individually. Data regression techniques are used to find mathematical relationships in the data that can be used to develop extendable models which predict performance for CPU configurations that cannot be measured. The method presented in this paper is analytical, which means it does not require simulation data or sampled traces for simulation. The simulation approach requires developing a software-based simulator and capturing significant traces, based on Cycles-Per-Instruction (CPI) and other architecture constraints, that resemble the entire benchmark.

The paper is structured as follows. In Section II, we discuss the motivation behind developing the proposed analytical model. In Section III, we discuss the SPEC CPU 2006 and HDxPRT benchmarks. In Section IV we review previous work, comparing our analytical model to other published models that estimate processor performance using a systematic approach. In Section V we present the performance and sensitivity analysis for SPEC CPU 2006 and HDxPRT using a Nehalem processor. In Section VI we present the proposed performance projection model supported by experimental results, and we conclude in Section VII.

Motivation

Modeling processor performance with an analytical approach, as compared to a simulation approach, is essential for processor engineers and designers. The important feature is to evaluate different hardware configurations and predict the performance for a benchmark without using a trace-based simulator. This approach will help processor architects in designing and fine-tuning different architecture parameters for future processors. The model can give an estimate of performance indicators for the SPEC CPU 2006 and HDxPRT workloads by selecting the desired processor architecture parameters: for example, what is the change in performance when the number of cores increases? This gives processor engineers a leading edge to estimate performance without having to measure the benchmark on a processor that has not yet been developed. The model also enables evaluating a projected performance score (a unitless performance metric) for different processors, provided they are within the same architecture family. The score variable used in this paper is inversely proportional to execution time. Usually, we expect an increase in benchmark performance for future processors, given that more technology, hardware features and capabilities are added throughout the processor roadmap. Some of the essential elements are an increase in the number of cores, an increase in memory speed and memory capacity, or an increase in the core frequency itself. The model proposed in this paper covers all the critical features that enable developers to get an early projected performance number for SPEC CPU 2006 and HDxPRT for a future processor configuration. A similar approach can be developed for a different workload.
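Since the model leans on Amdahl's Law to separate core and memory contributions, the following minimal sketch illustrates that decomposition; the fractions and scaling factors are hypothetical, not taken from the paper's measurements.

```python
# Amdahl-style decomposition: runtime splits into a CPU part (scales with
# cores/frequency) and a memory part (scales with DIMM speed). Hypothetical values.
def projected_speedup(cpu_fraction: float, cpu_scaling: float,
                      mem_scaling: float) -> float:
    """Overall speedup when the CPU part speeds up by cpu_scaling
    and the memory part by mem_scaling (classic Amdahl form)."""
    mem_fraction = 1.0 - cpu_fraction
    return 1.0 / (cpu_fraction / cpu_scaling + mem_fraction / mem_scaling)

# A workload spending 80% of its time on-core: doubling core throughput while
# memory stays fixed yields well under 2x overall -- diminishing returns.
print(projected_speedup(cpu_fraction=0.8, cpu_scaling=2.0, mem_scaling=1.0))  # ~1.67
```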
We chose SPEC CPU 2006 and HDxPRT because they are CPU-intensive (compute-bound) workloads; other workloads can be more memory-intensive (memory-bound). In order to develop a new model for a different workload, a new set of processor sensitivity analyses is required. Measurement provides the expected performance of the workload on a given set of architecture settings. The benchmark characterization consists of a collection of measured data, a set of architecture parameters of interest, and statistical output for those parameters. The concept in Amdahl's Law allows us to determine the contribution of the CPU and memory to the overall performance and how specific elements change each component's contribution. We do this by running experiments where we keep one side constant and vary parameters on the other side, as shown in Fig. 1.

Benchmarks Overview

SPEC CPU 2006 was released by the Standard Performance Evaluation Corporation (SPEC). It is a standardized processor and memory benchmark, which is what we need for our performance projection model. It is designed to stress the CPU and memory subsystems and provides a comparative measure of compute-intensive performance by measuring integer and floating-point performance. This benchmark is widely used in the industry by several computer and processor manufacturers to test their processor performance. It is also used for comparing the performance of processors from different vendors, to decide which processor to purchase based on performance and other factors, and to compare high-end versus low-end processor performance, which helps determine the cost of each processor segment.

There are two metrics for measuring processor performance: time and throughput. Time determines the execution time, that is, how fast a task is completed; throughput measures the amount of computation achieved per unit time. For SPEC CPU 2006, we used throughput as the performance metric, and also execution time. SPEC CPU 2006 is categorized as a compute-intensive workload, which means it is compute-bound, i.e. bounded by the number of cores. Every workload is either compute-bound or memory-bound, or somewhere in between. Compute-bound means that the workload is only sensitive to the number of cores and the core frequency; if memory bandwidth and capacity increase, the performance will not increase. Memory-bound means that the workload is bounded by memory capacity and memory speed, so any increase in the number of cores and/or the core frequency will not translate into an increase in performance even though the computational power increased. Workloads can have different sensitivities; for example, some workloads are sensitive to memory bandwidth and memory speed rather than to the number of cores, core frequency or the total number of threads. The performance contributions coming from the CPU and memory can be mathematically determined using a measured baseline, and the impact of different parts of the processor and memory can be analyzed individually. A regression method is used to determine their relationship to performance in order to construct the performance projection model.
In this paper, we propose a performance equation as a function of different microarchitecture parameters, including the number of cores, CPI, core frequency, memory frequency and memory latency. This enables the processor engineer to change different microarchitecture parameters and estimate the change in processor performance.

The HDxPRT scoring benchmark is divided into two subcategories. The first category produces the HD score and includes editing and converting videos from a camcorder, editing photos and video from a digital camera, and preparing media for portable devices. The second category is HD video playback, which consists of HD video (1080p, H.264) and HD video online (1080p with Flash).

Related Work

Researchers have developed different prediction models to predict processor performance for a given benchmark using an analytical approach instead of a trace-based simulation. The analytical model presented in this paper enables the projection of relative performance with a <10% error margin between measured and estimated performance scores using the SPEC CPU 2006 and HDxPRT benchmarks.

In our previously published work, we proposed a performance estimation model using an Amdahl's Law regression method (Issa and Figueira, 2010). Amdahl's Law is based on the law of diminishing returns: increasing the number of processors or cores does not lead to a proportional increase in performance. Amdahl's Law states that the performance improvement gained from implementing a faster mode of execution is limited by the fraction of the time the faster mode can be used; in other words, a system's overall performance increase is limited by the fraction of the system that cannot take advantage of the enhanced performance. The method published in (Issa and Figueira, 2010) predicts benchmark performance with less than a 10% error margin. It is limited, however, in that it can accept only one architecture parameter change at a time, to estimate performance for different values of that same parameter. The method requires at least two measured data points to establish a measured baseline, which enables performance estimation for microarchitecture parameters that cannot be measured on the processor under test. Note that the measured baseline and the projected performance must concern the same microarchitecture parameter, for example the number of cores or the core frequency. This paper is also a continuation of the work we published in (Issa, 2016), our initial work on this project; here we elaborate further, fine-tune the regression method, and add a timing analysis to the results section to show the breakdown between core time and memory time for SPEC CPU 2006.

Saavedra and Smith (1996) proposed a method to characterize machine performance and program execution for a given benchmark, focusing on determining the execution time of the benchmark. The difference between our method and theirs is that our approach is more general and can be used for any processor by changing different microarchitecture parameters. Krishnaprasad (2001) presented various ways of using Amdahl's Law in different forms. Our method has the same objectives, but we use a regression approach instead.
Another study presented a method based on computing a set of microarchitecture-independent characteristics and weighting these characteristics to locate the application of interest in a benchmark space; the performance prediction is implemented by weighting the performance numbers of benchmarks in the neighborhood of the application of interest. In our method, we do not apply any weighting mechanism for a given benchmark to predict performance, as the weights would change for each benchmark. Phansalkar et al. (2007) proposed a simulation-based approach for SPEC CPU 2006 by calculating the CPI for cache and Translation Lookaside Buffer (TLB) misses. The paper concludes that a larger TLB can reduce the cache and TLB miss rates, which in turn reduces the CPI and may improve performance. Jens (1996) proposed a different performance estimation method for the Linpack benchmark based on predicting the runtime using a message-passing approach. Our estimation model approach is different in that we analyze different processor architecture parameters and develop an empirical formula to predict relative performance. A significant amount of work has been done (Ganesan et al., 2008; Prakash and Peng, 2008) using different performance metrics to analyze and optimize the performance of different workloads; these papers depend heavily on microarchitecture parameters that are tied to a specific Instruction Set Architecture (ISA), which biases them toward that architecture. Such work is used to find performance bottlenecks for different benchmarks. Khan et al. (2012) presented a novel cache segmentation replacement technique that works independently of the Least Recently Used (LRU) replacement method. The method is tested with different Last Level Cache (LLC) sizes using memory-intensive subsets of SPEC CPU 2006, which shows the importance of cache performance modeling for the memory-intensive subsets of SPEC CPU 2006. Issa and Figueira (2010) proposed a performance estimation model using an Amdahl's Law regression method. The method is limited in that it requires changing one microarchitecture parameter, such as core frequency or memory frequency, while keeping other processor parameters fixed. The technique also requires a measured baseline with a minimum of three measured data points to enable performance projection. The approach presented in this paper allows performance prediction by changing several architecture variables simultaneously. Hoste and Eeckhout (2007) presented different metrics for characterizing benchmarks based on microarchitecture-independent characteristics, instrumenting program binaries to describe instruction mix, ILP, working set size and branch predictability; this relies on the simulation of ISA traces to model performance. Baghsorkhi et al. (2010) proposed an analytical method to predict the performance of general-purpose applications on a GPU architecture; the technique identifies how a kernel exercises different GPU microarchitecture parameters. Hong and Kim (2009) presented an analytical model for GPU architectures with an emphasis on memory-level and thread-level parallelism. In our analysis, we analyzed the sensitivity of HDxPRT with respect to different cache sizes and numbers of cores.

Sensitivity Analysis

a) SPEC CPU 2006

The SPEC CPU 2006 benchmark includes twenty-six different benchmarks executed to stress the processor and memory.
The output of the benchmark is one number, referred to as the performance score (SPEC rate). It is important to design the right experiments so that the performance data can be analyzed accordingly. The objective of the performance model presented in this paper is to combine all the regression measurements into a single empirical formula to predict performance for SPEC CPU 2006; this enables us to perform a multivariable regression. It is important to mention that all measured data presented share a common configuration: all performance data are referenced to a normalized measured baseline, which is set equal to one ('1').

The main factors contributing to lower processor performance are summarized as follows:

- A low number of cores
- A small cache size
- A low core Instructions-Per-Cycle (IPC); IPC is usually reduced (lower performance) by an increase in cache misses, structural hazards, control hazards, or data hazards

There are also memory factors that contribute to lower performance, such as lower memory bandwidth, smaller cache size and higher memory latency. All the performance measurements used for sensitivity analysis are based on relative performance with respect to an Intel Nehalem Xeon processor with 8 cores, 2800 MHz core frequency and 400 MHz memory speed. The analysis is implemented by taking the measured data for a given workload and analyzing the sensitivity performance curve with respect to one performance parameter (the number of cores) while keeping all other parameters fixed. Since a larger benchmark score means higher performance, as shown in Fig. 2, inverting the performance parameter (here, the number of cores) along with the score often produces straight lines in a plot, as shown in Fig. 3. The model output for the relative score with respect to the number of cores is calculated using regression as score = 1/(M*(1/#cores) + B), where M and B are the regression slope and intercept.

Linear relationships help to simplify the predictive model, but they do not always arise. Some elements of performance tend to be well behaved, producing a linear relationship to performance using this technique. The architecture parameters that work well for the SPEC CPU 2006 benchmark are:

- Frequency vs. score (Fig. 4)
- Core count vs. score
- Memory bandwidth vs. score (Fig. 5)
- Memory latency vs. score
- IPC improvements vs. score

The best conditions are with heavily threaded homogeneous workloads, and they vary per benchmark. Some regressions do not work well using this approach; some relationships are harder to work with, and the cache size tends to be one of those: increasing the cache size shifts work from main memory to the CPU. We can modify the input parameters in the experiment and regress this into a relationship; in this case, a 3rd-order polynomial relationship works best, as shown in Figs. 6 to 8. However, what we need here is to know how the memory contribution changes and how this relationship affects total performance. Where the regression does not work well, we need to measure two different frequencies for each cache size. From these data, we can extract the CPU and memory contributions for each of the cache sizes. The impact of the cache size on the memory contribution can then be regressed into an equation that is used to modify the memory component, as shown in Fig. 9.
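As a concrete illustration of the inverse-linear fit described above, the sketch below fits 1/score against 1/#cores and inverts the fit to project scores; the measured points are made-up placeholders, not the paper's data.

```python
# Fit 1/score = M*(1/cores) + B, then project score = 1/(M/cores + B).
# The (cores, score) pairs below are hypothetical placeholders.
import numpy as np

cores = np.array([2, 4, 6, 8, 12])
score = np.array([0.35, 0.62, 0.83, 1.00, 1.28])    # baseline = 1 at 8 cores

M, B = np.polyfit(1.0 / cores, 1.0 / score, deg=1)  # regression slope, intercept

def predict_score(n_cores: float) -> float:
    """Relative SPEC-rate-style score projected for n_cores."""
    return 1.0 / (M / n_cores + B)

print(f"M = {M:.3f}, B = {B:.3f}")
print(f"projected relative score at 16 cores: {predict_score(16):.2f}")
```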
The reference configuration's cache size is set to 1 (normalized), and all values are referenced to the normalized baseline (1). In Fig. 10, the slope of 1/score vs. 1/frequency is the same for all cache sizes, namely 2.588; only the intercept differs. This is expected, given the different cache sizes. Designing the right experiments simplifies the analysis: with the right experiment sets, we can combine the regression data into one formula to project performance by conducting a multivariable regression. Some rules used in conducting the measured experiments behind the performance model are:

Rule 1: All experiment sets must contain a standard configuration.
Rule 2: Experiment sets should have a minimum of 3 configurations; more is always better.
Rule 3: Always measure two different frequencies for sets producing non-linear relationships (i.e., cache size).

The experiment set used for option 1 is shown in Fig. 11. This would be the best experiment set; a single-thread set is used for better projections of single-threaded benchmarks. Note that every set contains duplicates from other sets, so the total number of experiments is less than the number of configurations shown. Among the options, option 1 represents the method using the least amount of multi-threaded measurement: the frequency and cache scaling experiments are done with single-thread measurements, providing the shortest measurement time but the least accurate method. Option 3 recognizes that the behavior of a module (core pairs) needs to be modeled; it is a compromise between options 1 and 2.

The relative performance is derived with respect to the Intel Nehalem Xeon processor, configured with eight cores, a 2800 MHz core frequency and a 400 MHz memory bus speed. First, we take measured data from SPEC CPU 2006 and analyze the sensitivity performance curve with respect to one performance parameter, in this case the number of cores, while keeping all other architecture parameters fixed. The measured performance curve is shown in Fig. 12, and the derived relative performance is shown in Equation (1). The constant '8' used in Equation (1) is derived from the measured baseline using the Intel Xeon processor with eight cores: the configuration with eight cores is used as the normalized baseline and all other measurements are relative to it. The slope and intercept values are derived using regression. Figure 12 shows a non-linear relation between the number of cores and relative performance; taking the inverse gives a linear relationship, as shown in Figure 13. We apply the same linearity method to the memory DIMM speed per thread; the relative memory performance is derived in Equation (2). The slope and intercept derivations are discussed in the results section. The reason we have 400 in the equation is that the measured baseline used a memory speed of 400 MHz. We repeat the same experiments for DIMM speed versus relative memory performance, and for the inverse of memory speed versus the inverse of relative performance, as shown in Figs. 14 and 15.
Repeating the same analysis for the core frequency generates a linear relationship between the inverse of the relative performance and the inverse of the core frequency, as shown in Figs. 16 and 17.

b) HDxPRT

For HDxPRT, the performance score consists of three sub-categories (convert videos from a camcorder, edit photos and video from a digital camera, and prepare media for portable devices) and is derived using the GeoMean of the three components; each component score is computed from a reference time and a run time as score = 100 * (Tref / Trun). In our sensitivity analysis, we conclude that there is a 40% scaling for the change in the number of cores, minimal sensitivity to cache, and <3% sensitivity to Simultaneous Multi-Threading (SMT), as shown in Table 1. The core sensitivity for the different subcategories is shown in Fig. 18: there is an 80% performance improvement for "edit and convert videos from camcorder" and a 30% improvement for "edit photos and videos from the camera". For the HDxPRT benchmark, the projected time is calculated per component, the per-component HDxPRT score is computed from it, and the overall score is computed using the GeoMean, as shown in Equation (3).

Experimental Results

Using multivariable regression on the linear relationships, we can obtain coefficients for the input parameters to predict a score. Additionally, we can compute the CPU and memory component times; the component times can be modified by the non-linear relationship from the L3 measurements. We used four different processor architecture variables: the number of cores, the core frequency, the memory DIMM speed and latency, and the Instructions-Per-Cycle (IPC). The IPC variable depends on the number of branch misses, cache misses, and pipeline and structural hazards; the higher the occurrence of these events, the more cycles are consumed, which results in a lower IPC and in turn lower performance. The IPC value is measured for SPEC CPU 2006 using the measured reference baseline for the Nehalem processor.

Given the sensitivity analysis we discussed, the general formula for the relative performance can be derived as in Equation (3). The value of the core coefficient in Equation (3) is derived from the regression coefficient for 1/#cores; the value of Z comes from the linear-line intercepts; the core frequency coefficient is calculated from the regression for 1/(core frequency); and the DIMM speed coefficient is derived from the regression coefficient for 1/(DIMM speed). These coefficients are derived using statistical regression analysis on the measured dataset. For example, for one configuration whose performance we want to predict, the coefficients calculated by the regression statistics are: Z = -0.75, number-of-cores coefficient = 6, core frequency coefficient = 2100 and DIMM speed coefficient = 100. The relative performance equation is set up to project relative performance with respect to the measured baseline; for this experiment, the measured baseline is an Intel Xeon with 8 cores, 2800 MHz core frequency and 400 MHz DIMM speed. Table 2 summarizes the relative performance scores for different projected processor configurations. The relative performance shown in Table 2 is normalized to '1' with respect to the Intel Nehalem Xeon using eight cores, 2800 MHz core frequency and a DIMM speed of 400 MHz; this is the normalized baseline, and all other measured and projected performance numbers are relative to it.
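Equation (3) itself is not legible in this extraction, but the quoted coefficients are consistent with 1/score being linear in the inverse parameters, which is the form the regression discussion implies. The sketch below uses that inferred form with the quoted example coefficients; note that it reproduces the normalized baseline exactly.

```python
# Inferred form of Equation (3): 1/score = Z + c1/cores + c2/freq + c3/dimm,
# with the example coefficients quoted in the text. The functional form is an
# inference from the regression discussion, not quoted from the paper.
Z, C_CORES, C_FREQ, C_DIMM = -0.75, 6.0, 2100.0, 100.0

def relative_score(cores: int, freq_mhz: float, dimm_mhz: float) -> float:
    """Projected score relative to the normalized Nehalem baseline."""
    return 1.0 / (Z + C_CORES / cores + C_FREQ / freq_mhz + C_DIMM / dimm_mhz)

print(relative_score(8, 2800, 400))    # baseline -> 1.0 exactly
print(relative_score(12, 3600, 600))   # upgraded config -> 2.0, near the ~2.1 quoted
```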
The statistical regression tool enables us to derive the regression coefficients for the number of cores, the core frequency, the DIMM speed and Z. The next step is to apply the empirical performance relation in Equation (3) to verify the method against the measured baseline, comparing measured relative performance with predicted relative performance. The error margin between estimated and measured relative performance is <10% for all test configurations, as shown in Fig. 19. The model is used to cross-validate the estimation of SPEC CPU 2006 performance for different Xeon processor configurations; this enables performance projection for future processors that are not yet available for measurement. In Fig. 19, the performance projection model is used to estimate the relative performance with respect to the Intel Xeon Nehalem baseline. To verify the model, we compare the projected performance score to a measured score using the same processor configuration as a baseline. The normalized configuration in Fig. 11 is normalized to '1' by setting the number of cores to 8, the core frequency to 2800 MHz and the DIMM speed to 400 MHz. Using the proposed model, if we increase the number of cores to 12, the core frequency to 3600 MHz and the DIMM speed to 600 MHz, the relative performance increases to about 2.1; the actual measured relative performance is 2, which is about a 5% error margin. Different configurations show an error margin of <5% between measured and projected data.

The timing analysis for the multi-variable regression is derived in Equation (7): the contribution of the core computation time is Core_time + Frequency_time + Intercept_time, and DIMM_time accounts for the memory time only. In Table 3, we show the timing breakdown for SPEC CPU 2006 in terms of core time and memory time. The benchmark depends more on core time than on memory time for a low number of cores: for a one-core system, the core time is very significant (96%) compared to the memory time (4%). As the number of cores and the DIMM speed increase, the memory time contribution also increases. The core time percentage is derived as the ratio core_time/total_time; the memory time percentage is derived similarly as the ratio of DIMM time to total time. The same method is used to project relative performance for HDxPRT, as shown in Fig. 20.
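Reading the inverse-score terms of the inferred Equation (3) as per-component times, as Equation (7) does, gives the following sketch of the breakdown; with the quoted coefficients it reproduces the 96%/4% one-core split stated above, though the decomposition itself remains an inference rather than the paper's exact Equation (7).

```python
# Timing breakdown in the spirit of Equation (7): core time = core + frequency
# + intercept terms of the inverse score; memory time = DIMM term. Coefficients
# as quoted in the text; the decomposition is inferred, not quoted.
Z, C_CORES, C_FREQ, C_DIMM = -0.75, 6.0, 2100.0, 100.0

def time_breakdown(cores: int, freq_mhz: float, dimm_mhz: float):
    core_t = C_CORES / cores + C_FREQ / freq_mhz + Z   # core + frequency + intercept
    mem_t = C_DIMM / dimm_mhz                          # memory (DIMM) time
    total = core_t + mem_t
    return 100.0 * core_t / total, 100.0 * mem_t / total

core_pct, mem_pct = time_breakdown(1, 2800, 400)
print(f"one core: core {core_pct:.0f}%, memory {mem_pct:.0f}%")   # -> 96% / 4%
```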
The model can be adapted by establishing a new measured baseline, known as the normalized baseline (normalized to 1), and estimating relative performance from that baseline for a different processor architecture. This method does not require any binary or sampled traces, as used in simulation, whose instruction mix must represent the entire benchmark's instruction mix well. For future work, we can implement a sensitivity analysis for additional architecture parameters such as TLB misses, which contribute to a lower IPC (higher CPI). The method can also be expanded to cover other benchmarks that are used by the industry as references to evaluate processor performance.
Rainfall threshold definition using an entropy decision approach and radar data

Flash flood events are floods characterised by a very rapid response of basins to storms, often resulting in loss of life and property damage. Due to the specific space-time scale of this type of flood, the lead time available for triggering civil protection measures is typically short. Rainfall threshold values specify the amount of precipitation for a given duration that generates a critical discharge in a given river cross section. If the threshold values are exceeded, a critical situation can arise in river sites exposed to alluvial risk. It is therefore possible to directly compare the observed or forecasted precipitation with critical reference values, without running online real-time forecasting systems. The focus of this study is the Mignone River basin, located in Central Italy. The critical rainfall threshold values are evaluated by minimising a utility function based on the informative entropy concept and by using a simulation approach based on radar data. The study concludes with a system performance analysis, in terms of correctly issued warnings, false alarms and missed alarms.

Introduction

Classical real-time flood forecasting systems generally must run hydrological models in real time. The time required for the model to run can be greater than the lead time available for issuing alerts in basins subject to flash flooding. A flood warning system based on a comparison of observed or forecasted precipitation with critical rainfall values could provide decision-makers with relatively simple, clear, and immediate messages. Rainfall thresholds specify the amount of precipitation for a given duration that generates a critical discharge in a given river cross section. If the thresholds are exceeded, a critical situation can arise in river sites exposed to alluvial risk, triggering prevention operations and emergency system alerts (Georgakakos, 1995).

In this work the critical rainfall thresholds for the Mignone River cross section are defined using an entropy-based decision approach and a simulation approach based on radar data, in order to establish rainfall warning values for critical flood events. First, an overview of the entropy concept is given. Second, the methodology for rainfall threshold estimation is presented and applied to the case study of the Mignone River. Finally, the reliability of the rainfall thresholds is evaluated and the results are discussed.

The entropy concept

The entropy concept was introduced by Clausius in 1864 (from ancient Greek en, "inside", and trope, "change") to explain heat behaviour at different temperatures. The entropy of a system is given by the sum of the entropies of its parts, so that for m subsystems H = H_1 + ... + H_m = -k * sum_i p_i log(p_i), where H_i is the entropy of each subsystem i, p_i is the probability of being in the i-th state and k is a positive constant. Assuming k = 1 and taking the logarithm base as 2, the entropy is measured in bits (as in this work). The most probable distribution of the energy in a system is the one for which the entropy of the whole system equals its maximum value.

Heuristically, entropy can be interpreted as a measure of the uncertainty about the occurrence of a certain event (Papoulis, 1991). The probability P(A) of an event A, for example, can be defined as the measure of the uncertainty about the occurrence or non-occurrence of A.
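A minimal sketch of the entropy measure used throughout (k = 1, base-2 logarithm, so the result is in bits):

```python
# Shannon entropy of a discrete partition, in bits (k = 1, log base 2).
import numpy as np

def entropy_bits(p) -> float:
    """H = -sum(p_i * log2(p_i)); zero-probability states contribute nothing."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# A uniform partition maximizes entropy: four equally likely states give 2 bits.
print(entropy_bits([0.25, 0.25, 0.25, 0.25]))   # -> 2.0
print(entropy_bits([0.7, 0.1, 0.1, 0.1]))       # -> ~1.357
```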
If events A_i form a partition of the event U, so that the events are mutually incompatible (A_i ∩ A_j = ∅ for i ≠ j) and their union is U itself (U = A_1 ∪ A_2 ∪ ... ∪ A_n), then the measure of the uncertainty of U is H(U), the entropy of the partition of U. The functional H(U) was derived from a number of postulates, such as:

1. H(U) is a continuous function of p_i = P(A_i);
2. if p_1 = ... = p_N = 1/N, then H(U) is an increasing function of N;
3. if a new partition B is formed by subdividing one of the sets of U, then H(B) > H(U).

It can be shown that the sum H(U) = -sum_i p_i log(p_i) satisfies the postulates and is unique within a constant factor.

Discrete random variable (RV) type

Suppose that X is a discrete-type RV taking the value x_i with probability p_i = P(X = x_i). The events {X = x_i} are mutually exclusive and their union is the certain event; hence they form a partition, denoted by U_x. The entropy H(X) of a discrete-type RV X is the entropy H(U_x) of its partition U_x (Papoulis, 1991).

Continuous random variable (RV) type

The entropy of a continuous-type RV cannot be defined in the same way, because the events {X = x_i} do not form a partition. The entropy of a continuous-type RV X is by definition the integral H(X) = -∫ f(x) log(f(x)) dx (Papoulis, 1991), where the integration extends only over the region where f(x) ≠ 0.

Flood rainfall thresholds

Generally, rainfall thresholds identify critical precipitation values, which can be used in the context of landslide and debris flow hazard forecasting (Neary and Swift, 1987; Annunziati et al., 1999; Crosta and Frattini, 2000), as well as in flood forecasting and warning (Carpenter et al., 1999; Mancini et al., 2002; Georgakakos, 2006; Martina et al., 2006). In the context of flood warning, when critical values are exceeded, flooding is expected. Rainfall thresholds specify the amount of precipitation for a given duration that generates a critical discharge in a given cross section.

There is a long tradition of rainfall threshold-based methodologies, although different approaches have been adopted. The Flash Flood Guidance (FFG) method (Mogil et al., 1978) was developed by the US National Weather Service (NWS) for flash flood warning. FFG is based on the effective depth of rain of a given duration, taken as uniform in space and time, necessary to cause minor flooding (e.g., the 2 yr return time flow) at the outlet of the considered basin. If the FFG value is surpassed by rainfall amounts, then flooding in the basin is considered likely to occur. FFG values are computed as the flow causing flooding divided by the catchment area times the Unit Hydrograph's (Snyder's or geomorphologic) peak value for any specified duration. GIS support is used to determine the main characteristics of the basin, and regionalization values are provided to extend the methodology to other locations (Carpenter et al., 1999).
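A minimal sketch of the FFG computation just described, with hypothetical numbers; the specific-peak unit convention below ((m³/s) per km² per mm of effective rain) is an assumption made here to keep the arithmetic dimensionally consistent:

```python
# FFG sketch: effective rain depth = flooding flow / (catchment area * UH peak).
# The uh_peak unit convention ((m3/s) per km2 per mm) is assumed; values are
# hypothetical illustrations, not the NWS implementation.
def ffg_depth_mm(q_flood_m3s: float, area_km2: float, uh_peak: float) -> float:
    """Effective rainfall depth (mm) whose UH response just reaches q_flood."""
    return q_flood_m3s / (area_km2 * uh_peak)

# Example: the 131 m3/s critical flow quoted later for the Mignone at S.S. Aurelia
# (440 km2 drainage area), with a hypothetical specific UH peak of 0.01 (m3/s)/km2/mm.
print(ffg_depth_mm(131.0, 440.0, 0.01))   # -> ~29.8 mm
```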
The key advantage of FFG is that warnings can be issued without running an entire hydrometeorological forecasting chain. The limitations of FFG lie in the assumptions of spatially and temporally uniform rainfall and linear responses (which constrain the size of the basins), and in the use of regional relationships to make inferences about ungauged locations. FFG performance in ungauged basins is poor (Norbiato et al., 2008), because the hydrological model parameters cannot be calibrated (Bloeschl, 2005) and it is more difficult to estimate critical discharge values (Ntelekos et al., 2006).

A different approach was proposed by Mancini et al. (2002). Threshold values are estimated with an event-based rainfall-runoff model, which iteratively searches for the rainfall amount that could produce a critical discharge or water stage. Synthetic hyetographs with different shapes and durations are used as input to the rainfall-runoff model. This is a deterministic approach and has been used in previous work (e.g., Montesarchio et al., 2009); it will be further discussed in Sect. 3.1.

Lastly, a utility function approach couples rainfall and discharge in a probabilistic way (Martina et al., 2006). The rainfall threshold incorporates the dependence between the cumulated rainfall volume for the storm duration and the possible consequences on the water level or discharge in a river section, as perceived by the stakeholders. The thresholds therefore correspond to the minimum expected value of an opportunely chosen Bayesian cost utility function. This approach is used as the probabilistic approach in the present work and will be further discussed in Sect. 3.2. Moreover, it is extended by employing an entropy-based decision function to overcome the subjectivity implied in the Bayesian approach (Sect. 3.2.2).

Deterministic approach for rainfall threshold evaluation

The deterministic approach needs an opportunely calibrated rainfall-runoff model to simulate the basin response to storms. The inverse hydrologic problem is iteratively solved to identify, for a given duration d, the cumulative rainfall that corresponds to the critical discharge. The identification of the critical section is usually based on the flooding history of the river and its hydraulic geometry. When this information is not available, the critical section can be identified as the outlet of the basin, where all of the upstream contributions converge. Given the hydraulic geometry and the marked critical water stage, the critical discharge is estimated by using the stage-discharge curve (Rosso, 2002). When discharge data are unavailable and a hydraulic simulation cannot be carried out, a regional model can be applied (e.g., the index discharge method; Dalrymple, 1960) in order to identify the critical discharge. This method is based on statistical regionalization and allows for the replacement of time with space, using a set of hydrometric observations from a homogeneous area to compensate for the lack of hydrometric data in the critical section. Given a certain return period, the maximum discharge for the critical section is calculated as the product of two terms: a scaling factor characteristic of the site, and a dimensionless growth factor characteristic of the homogeneous region. In gauged sections it is clearly possible to calculate the index discharge directly, using the arithmetic mean of the available data; for ungauged sections, indirect methods (e.g., Brath et al., 2001) must be used instead. With this approach, a critical rainfall threshold is obtained which no longer refers to the critical discharge but to different return periods (i.e., 2, 5, 10, 20, 50, 100 yr). A description of the procedures to be carried out in order to evaluate the critical rainfall thresholds in different situations is reported in Montesarchio et al. (2009).

The critical reference discharge could be reached and surpassed for different space-time configurations of rainfall fields. To simplify, the cumulative precipitation (P) can be evaluated globally over the entire basin, after a time (d) from the beginning of a thunderstorm. Rainfall thresholds are generally a function of the critical cross-section characteristics, but also of the boundary conditions (e.g., the soil imbibition condition at the beginning of the event) and of the type and temporal evolution of the rain event. These dependencies can be summarised using the AMC (Antecedent Moisture Condition) index (SCS, 1971, 1986) and a standard hyetograph (Rosso, 2002). Given the hyetograph, the rain duration (d) and the initial soil imbibition condition based on the AMC index, the critical depth can be investigated. Independent simulations can be performed for all combinations of rainfall durations (3, 6, 12 and 24 h), hyetographs and AMC classes. The rainfall thresholds are iteratively identified by trial and error until the critical discharge value is reached.
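A minimal sketch of that iterative search, with the rainfall-runoff model reduced to a black-box function; the bisection strategy and the toy model below are illustrative stand-ins, not the paper's calibrated setup:

```python
# Bisection on cumulative rainfall depth P until a (black-box) rainfall-runoff
# model returns the critical peak discharge, for one fixed duration, hyetograph
# and AMC class. `peak_discharge` stands in for the calibrated model.
def find_threshold(peak_discharge, q_critical: float,
                   p_low: float = 0.0, p_high: float = 500.0,
                   tol: float = 0.1) -> float:
    """Smallest cumulative rainfall P (mm) with peak_discharge(P) >= q_critical."""
    while p_high - p_low > tol:
        p_mid = 0.5 * (p_low + p_high)
        if peak_discharge(p_mid) >= q_critical:
            p_high = p_mid      # critical discharge reached: tighten from above
        else:
            p_low = p_mid       # still below critical: tighten from below
    return p_high

# Toy monotone stand-in model, purely to make the sketch runnable.
toy_model = lambda p_mm: 0.9 * max(p_mm - 40.0, 0.0)
print(find_threshold(toy_model, q_critical=131.0))   # -> ~185.6 mm
```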
Probabilistic approach for rainfall threshold evaluation

To evaluate the threshold values corresponding to the minimum expected value of an opportunely chosen function, the joint cumulative distribution of the cumulated rainfall and the corresponding peak discharge must be defined. Also, when a probabilistic approach is used, the soil moisture conditions affect the threshold values; using a hydrological simulation approach, the peak discharge was evaluated for each AMC class.

Bayesian approach

The "convenience" concept is introduced in relation to damage perception, and is measured by a Bayesian utility function that also includes immeasurable damages deriving from missed alarms. The utility function depends on the critical discharge value (Martina et al., 2006), as in Eq. (7), where q is the discharge value; Q* is the critical discharge value for the critical river cross section; v is the cumulated rainfall value; V_T is the critical threshold value; T is the storm duration (3, 6, 12, 24 h); and a, b, c, C_0 and a', b', c' are proper parameters. For values of v < V_T, the utility function U(q, v | V_T, T) expresses the damage perception if the alert is missed: there are negligible costs if the effective discharge q is lower than Q*, while the cost increases rapidly if q is higher than Q*. For values of v > V_T, the function expresses the damage perception if the alert is issued: costs are initially higher than in the no-alarm case, because of the operative costs associated with triggering civil protection measures; however, the costs grow slowly if q is higher than Q*. The parameters a, b, c, C_0 and a', b', c' change with the importance given to the possible combinations of action (i.e., issuing an alert or not) and actual process (i.e., the critical discharge is surpassed or not).
Regarding the liability of the decision-maker in determining whether or not to issue an alarm, three cases can be distinguished (FLOODsite, 2008): the "real" case, the "risk prone" case and the "risk averse" case. A different set of parameters (a, b, c, C_0 and a', b', c') corresponds to each case, resulting in different costs; the parameters of the utility function are summarized in Table 1. In the "risk averse" case, since the costs associated with a false alarm are negligible, issuing false alarms is preferred over risking the possibility of a missed alarm: the costs of failing to issue an alarm can grow rapidly in a real emergency, and the difference in maximum cost between the alarm and no-alarm scenarios is great. In the "risk prone" case, some costs for flood events are considered more acceptable than false alarm costs: areas of low economic value can be affected by low-intensity floods, while more valuable areas (economically and socially) will be affected only for a higher return period. False alarm costs are higher than in the previous case, because resources must be employed when the alarm is issued. In the "real" case, the real experiences of decision-makers in issuing alarms are evaluated: the cost of a false alarm is evaluated operatively (e.g., 50 employees and their equipment), and the missed alarm cost grows when the discharge exceeds the project reference value for structural protection measures. The functions corresponding to each risk case are shown in Fig. 1. The most convenient threshold value V_T is identified (for each duration T = 3, 6, 12, 24 h) by minimising the expected utility cost function, Eq. (8), where f(q, v | T) is the joint distribution of cumulated rainfall volume and peak discharge (as determined in Sect. 4.3) and U(q, v | V_T, T) is the utility function.

Utility-entropy function

The definition of a measure of risk based on the expected values of utility and entropy (Yang and Qiu, 2005) builds on the classical decision model under risk. Three parts are defined: the state space Θ = {θ}, the action space A = {a}, and the payoff function X = X(a, θ), defined on A × Θ. The decision model is therefore G = (Θ, A, u), where u = u(X) is the decision-maker's utility function. Suppose that at least two actions exist in the action space, that the decision-maker's utility function is nonnegative, and that the mean over a ∈ A of |E[u(X(a, θ))]| exists. When this mean is nonzero, the measure of risk R(a) of taking an action a combines the expected utility with the entropy H_a(θ) of the distribution of the corresponding state through a constant λ (0 < λ < 1). When the mean over a ∈ A of |E[u(X(a, θ))]| is zero, then E[u(X(a, θ))] = 0 for any action a ∈ A, and in this case R(a) = H_a(θ). This measure of risk is called the expected value of utility-entropy (EU-E). The entropy is the objective measure of the uncertainty of the state of nature θ. The constant λ is the "tradeoff coefficient": it reflects a tradeoff between the subjective utility of a decision-maker's action and the objective uncertainty of its corresponding state. When the decision-maker wants the expected utility to have a greater effect, then λ = 0; when the decision-maker wants the expected utility to have a smaller effect, then λ = 1.
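The selection in Eq. (8) lends itself to a Monte Carlo approximation. The sketch below scans candidate thresholds and picks the one minimizing the sample-average cost; the joint (q, v) samples and the piecewise cost are toy stand-ins for the fitted joint distribution and the calibrated parameters of Eq. (7):

```python
# Expected-utility threshold selection in the spirit of Eq. (8), by Monte Carlo.
# The joint samples and the piecewise cost below are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
Q_STAR = 131.0                       # critical discharge (m3/s), the Mignone value

# Toy joint samples: v ~ exponential; q loosely increases with v.
v = rng.exponential(scale=1 / 0.04, size=20_000)             # cumulated rainfall (mm)
q = np.exp(rng.normal(loc=np.log(v + 1) + 0.8, scale=0.5))   # peak discharge (m3/s)

def expected_cost(v_t: float) -> float:
    """Sample-average cost of adopting threshold v_t."""
    alarm = v >= v_t
    cost = np.where(alarm,
                    10.0 + np.where(q > Q_STAR, 2.0, 0.0),  # issued: ops cost, slow growth
                    np.where(q > Q_STAR, 100.0, 0.0))       # missed: cost spikes past Q*
    return float(cost.mean())

candidates = np.linspace(0.0, 300.0, 301)
best = min(candidates, key=expected_cost)
print(f"most convenient threshold V_T ~ {best:.0f} mm")
```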
The EU-E measure is relative: it depends on the expected values of entropy and utility for every action. The tradeoff coefficient provides a balance of these factors that can be tuned to the decision-maker's subjective valuation. The expected value of utility reflects the subjective preference, while the entropy represents the objective uncertainty about a decision. The utility function is introduced according to Bayesian decision theory and represents the cost of flood damages in cases where an alarm was either issued or missed during a flood event (Martina et al., 2006), as shown in Eq. (7). The parameters of the general decision model are explained in Sect. 4.5.2.

Basin characteristics

The Mignone River in Italy originates at the confluence of the Scatenato, Coriglione and Biscione ditches, at 633 m a.s.l. Its total length is 62 km, from the Sabatini Mountains (northeast of Lake Bracciano) to the Tyrrhenian Sea (between Tarquinia and Civitavecchia). Near Rota (on the hydrographical left), the Mignone River receives the Verginese ditch tributary and, near Monte Romano (on the hydrographical right), it receives the Vesca Stream. The overall contribution is scarce and the hydraulic behaviour is variable, which is typical of a torrential regime. The basin area is characterized by hilly zones with some relief, with steep slopes corresponding to the water-engraved valley. The total area is about 560 km², with an average elevation of 233 m a.s.l. The basin has an essentially horizontal development, bounded in the north by the Cimini Mountains and hill relief towards the sea (near Tarquinia), and in the south by the Sabatini and Tolfa Mountains. Geologically, the Mignone River basin is characterized by volcanic rocks (25 %) in the mountainous areas, while further downstream there are sands and conglomerates (14 %), clay (9 %) and anthropogenic rocks (2 %), but mainly flysch (41 %), as well as, of course, alluvial deposits along the river (9 %). There are no carbonates. The Mignone River area was influenced by the explosive activity of the Vulsini, Vico Lake and Cimino complexes, alternating layers of clay and marl. The Mignone River basin therefore has a low permeability, which implies that the flows range from very low to very high values, depending on the rain regime.

Data set

Information about historical Mignone River flood events is available from the Sistema Informativo Catastrofi Idrogeologiche (SICI) of the Consiglio Nazionale delle Ricerche - Gruppo Nazionale per la Difesa dalle Catastrofi Idrogeologiche (CNR-GNDCI) website. The preliminary historical-documentary analysis was used to identify the critical hydraulic cross-section with the monitored cross-section "S.S. Aurelia" (drainage area 440 km²), near which the Mignone River overflowed three times during the last century (8 November 1934, 27 December 1959, 16 November 1962). Given the cross-section geometry and the stage-discharge curve used by the authorities, it was determined that the critical reference discharge is Q* = 131.0 m³ s⁻¹ (Montesarchio et al., 2009). A return period of 1.25 yr, evaluated with a partial duration series method, corresponds to this critical reference discharge. This means that reaching critical conditions in the Mignone River basin is quite common, and that having a skilled warning system available would prevent damage and loss.
Hydrometric and pluviometric data

Hydrometric and pluviometric data (from 1999 until 2008) were used both for marginal and joint distribution fitting. Hydrometric data were also used to perform the calibration and verification of the rainfall-runoff model, which is based on radar data. A summary of the flood events affecting the Mignone River basin from 1999 to 2007 can be found in Montesarchio et al. (2009).

Radar data

The Polar 55C radar is located 15 km southeast of Rome, Italy, in the Tor Vergata research centre. The Polar 55C is a C-band Doppler weather radar with polarization agility and a 0.9° beam width. The radar is capable of transmitting and receiving horizontally and vertically polarized signals on alternate pulses, which allows the reflectivity factor (Z_h), the differential reflectivity (Z_dr) and the differential phase shift (Φ_dp) to be measured. Radar measurements are obtained by averaging 64 pulses with a range-bin resolution of 75 m, covering a 120 km radius from the radar site. The temporal resolution is 5 min. To remove spurious returns from the data, an algorithm based on polarimetry is applied (Lombardo et al., 2006a). A Z-R relation was obtained for the C-band radar using a nonlinear regression analysis (Russo et al., 2005); in it, Z_h is the reflectivity factor (mm⁶ m⁻³) and R is the rainfall rate (mm h⁻¹).

In this work we assume that uncertainties in radar-derived precipitation estimates are negligible, even though it is well known that these estimates are affected by various sources of uncertainty (Wilson and Brandes, 1979; Krajewski and Smith, 2002; Habib et al., 2004; Germann et al., 2006; Ciach et al., 2007; Villarini and Krajewski, 2010a). However, those studies examine operative radar networks. The Polar 55C is a research radar, with higher measuring performance than an operative network, so the radar-rainfall estimates are considered error-free (see also Ntelekos et al. (2006) for a similar application). After the transformation from polar to Cartesian coordinates was performed, a 2 km × 2 km grid was overlaid over the map of the entire basin (Fig. 2). For each temporal interval the values of the rainfall rate were obtained at each pixel, and it was thus possible to calculate the cumulative rainfall by radar with a temporal resolution of 30 min. The radar data have only been available since 2008; as such, a total of five events were used to calibrate the model in the saturated soil condition (i.e., AMCIII). Additional events, not used in the calibration phase, were also used to test the warning procedures.

Fitting marginal and joint distribution

Given the dependence of rainfall threshold values on soil moisture conditions, it would have been optimal to first subdivide the available data based on the AMC condition and then further subdivide by duration. However, subdividing the data according to AMC classes created very short data series, thereby increasing the uncertainty of the statistical inference process. Therefore, the series were classified first according to their duration, and then used in a hydrological simulation framework to obtain the different AMC series as well. The distributions fitting the cumulated rainfall v and the peak discharge q, and their parameters, are summarized in Table 2 (v: exponential, λ = 0.04; q: log-normal, µ = 3.88, σ = 0.86).
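The Table 2 marginals can be fitted by maximum likelihood, as in the sketch below; the event series are placeholders, and the exponential/log-normal families are those quoted in the text.

```python
import numpy as np

# Placeholder event series (cumulated rainfall in mm, peak discharge in m^3/s)
v_obs = np.array([12.0, 35.0, 8.0, 54.0, 21.0])
q_obs = np.array([40.0, 95.0, 22.0, 140.0, 61.0])

# Maximum-likelihood estimates: exponential rate for v, log-normal for q
lam = 1.0 / v_obs.mean()
mu, sigma = np.log(q_obs).mean(), np.log(q_obs).std(ddof=1)
print(f"v ~ Exp(lambda={lam:.3f}),  q ~ LogN(mu={mu:.2f}, sigma={sigma:.2f})")
```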
The marginal probability distributions were determined in order to obtain the joint distributions; the marginals were processed through the Normal Quantile Transformation (NQT) (Kelly and Krzysztofowicz, 1997) and combined through a meta-Gaussian relationship. In this relationship, the transformed variables Z and W are obtained by applying the inverse of the standard normal distribution function to the marginal probabilities of q and v derived from the parameters of the marginal series, f(q) and g(v) are the marginal density functions, and the dependence parameter γ is related to the Pearson correlation coefficient ρ between Z and W. The joint distribution is computed separately for each duration T = 3, 6, 12 or 24 h. In Fig. 3 the calculated joint distributions are shown for the AMCIII soil condition.
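The following sketch evaluates a meta-Gaussian joint density of the kind described above (a Gaussian copula over the NQT scores); the function signature is ours, and the exact display of Kelly and Krzysztofowicz (1997) is not reproduced in this text.

```python
import numpy as np
from scipy import stats

def meta_gaussian_pdf(q, v, F_q, f_q, G_v, g_v, rho):
    """Meta-Gaussian joint density of peak discharge q and rainfall v.

    F_q/G_v are the marginal CDFs, f_q/g_v the marginal densities, and
    rho the Pearson correlation of the normal scores Z, W from the NQT.
    """
    z = stats.norm.ppf(F_q(q))        # NQT of q
    w = stats.norm.ppf(G_v(v))        # NQT of v
    # bivariate-normal correction factor times the product of marginals
    corr = np.exp(-(rho**2 * (z**2 + w**2) - 2*rho*z*w) / (2*(1 - rho**2)))
    return f_q(q) * g_v(v) * corr / np.sqrt(1 - rho**2)
```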
Simulation model

A rainfall-runoff model was implemented, through which the behaviour of the basin was simulated. The calibrated model was used to solve the inverse hydrological problem in the critical section. The rainfall-runoff model used in this work is semi-distributed, so that the spatial variability of the physical processes can be taken into account. Even though the critical rainfall threshold is expressed in terms of cumulative rainfall, it is important to assess the response of the basin in the case of a distributed spatial input, as the critical situation can also be caused by localized rainfall. In order to outline the rainfall-runoff transformation in both sub-basins, a modified Clark model was employed (Peters and Easton, 1996; Kull and Feldman, 1998); this permits a semi-distributed approach which takes into account the spatial variability of the physical processes. An SCS-CN grid model (SCS, 1971, 1986) was used to outline the hydrological losses, and a Lag model was used to outline the propagation of the full flood wave (Pilgrim and Cordery, 1993). Figure 4 shows the calibration and verification of the hydrological model for the AMCIII class. Table 3 summarizes the values of the model's parameters. In order to evaluate the performance of the model, the following two indicators were considered: the Root Mean Squared Error (RMSE) and the efficiency coefficient (CE). Their values are 59.46 m³ s⁻¹ and 0.24, respectively.

Bayesian approach

The Bayesian approach is described in Martina et al. (2006). Here, only graphical and numerical results related to the Mignone River case study are reported (Fig. 5, Table 4). For each AMC condition, three curves were obtained, corresponding to the various risk cases. Clearly, higher saturation results in lower corresponding threshold values.

Utility-entropy approach

Given the measure of risk of Eq. (9), it is possible to evaluate rainfall thresholds as follows. Given a V_T value, let Θ be the space of possible states of nature, A the space of actions and X the space of consequences of associations between actions and states of nature. Hence, Θ = {θ₁, θ₂} has two dimensions, where θ₁ corresponds to the critical discharge being surpassed and θ₂ corresponds to it not being surpassed. A = {a₁, a₂} has two dimensions, where a₁ corresponds to issuing an alarm when the rainfall threshold is surpassed and a₂ corresponds to not issuing an alarm when the rainfall threshold is not surpassed. Finally, the space X = X(a, θ), defined on the space A × Θ, has as components x₁₁ (correctly issued alarm), x₁₂ (missed alarm), x₂₁ (false alarm) and x₂₂ (no event, no alarm).

The conditional entropy H(q|v) is defined in the usual way over the two states of nature; in the rainfall threshold case it is evaluated conditionally on the cumulated rainfall, and H(q|v) can assume one of two forms according to whether v is below or above the threshold V_T. The parameter λ can be varied in order to adjust the weights of the objective and subjective components. Let us consider two cases. A total of nine threshold values is obtained in the case of a perfect tradeoff between the objective and subjective components (λ = 0.5) (Table 4; Fig. 6). In the second case the tradeoff coefficient λ is equal to 1; therefore, in Eq. (9) only the component related to entropy is considered. The threshold values obtained minimise the entropy term, regardless of the perception of the decision-maker. In this case a single curve of the threshold values for each AMC condition is obtained (Fig. 7).

Fig. 6. Threshold values evaluated via the utility-entropy measure-of-risk approach (λ = 0.5) for the Mignone River basin: "real" case (solid line), "prone" case (dotted line) and "averse" case (dashed line) for the AMCIII soil condition (upper panel), and the uncertainty associated with the "real" case AMCIII threshold (lower panel).

Radar based simulation approach

The inverse hydrological problem was solved by identifying the configuration of the rainfall field that leads to exceeding the critical discharge (see Sect. 3.1). Table 4 summarises the rainfall threshold values corresponding to each rainfall configuration, for the AMCIII soil condition.

Reliability evaluation

To estimate the reliability of rainfall thresholds, it is necessary to investigate the presence of any missed or false alarms. A missed alarm (MA) occurs when the flood event exceeds the critical reference discharge in the critical cross-section, but the recorded precipitation does not exceed the rainfall threshold. A false alarm (FA) occurs when the rainfall threshold is surpassed, but the observed discharge is lower than the critical reference discharge. Obviously, FAs and especially MAs invalidate the reliability of rainfall thresholds as a warning tool. A possible way to assess the performance of the proposed method is to use a two-by-two contingency table (Mason and Graham, 1999). The table is structured as follows: the n observations are divided into Event E (critical discharge surpassed) and Not Event E' (critical discharge not surpassed). If an event occurred and a warning was issued, the outcome is a hit (with h being the total number of hits); if an event did not occur but a warning was issued, the outcome is a false alarm (with f being the total number of false alarms); if an event occurred but a warning was not issued, the outcome is a missed alarm (with m being the total number of misses); if an event did not occur and a warning was not issued, the outcome is a correct rejection (with c being the total number of correct rejections). The total number of warnings is w, the total number of no-warnings w', the total number of events e and of not-events e'.
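From the counts h, m, f and c of such a table, the hit rate, false-alarm rate and skill score defined in the next paragraph can be computed as in the following sketch; the example numbers are invented.

```python
def contingency_scores(h, m, f, c):
    """Hit rate, false-alarm rate and skill score from a 2x2 table.

    h = hits, m = missed alarms, f = false alarms, c = correct
    rejections; e = h + m events and e' = f + c not-events. The skill
    score HR - FAR is positive when the hit rate exceeds the
    false-alarm rate, consistent with the text.
    """
    e, e_prime = h + m, f + c
    hit_rate = h / e                 # proportion of events warned
    far = f / e_prime                # proportion of not-events warned
    return hit_rate, far, hit_rate - far

# Example: 6 hits, 1 miss, 2 false alarms, 8 correct rejections
print(contingency_scores(6, 1, 2, 8))   # (0.857..., 0.2, 0.657...)
```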
The performance can be evaluated in terms of the hit rate (the proportion of events for which a warning is correctly provided) and the false alarm rate (the proportion of not-events for which a warning is incorrectly provided), defined as follows (Mason, 1982):

HR = h/e,   FAR = f/e'.

The hit rate can be considered as the probability of detection and provides an estimate of the probability that an event will be forewarned, while the false-alarm rate can be considered as the probability that a warning will be incorrectly issued. For a warning system with no skill, warnings and events are independent, so that HR = FAR. When a threshold-based forecasting system has some skill, the hit rate exceeds the false-alarm rate. The performance can be measured by the skill score (Gandin and Murphy, 1992):

skill score = HR - FAR.

A performance is considered good when the skill score is > 0. The contingency tables corresponding to the Bayesian approach ("real" case), the utility-entropy measure of risk (λ = 0.5), the utility-entropy measure of risk (λ = 1), the hydrologic simulation based on radar data (AMCIII condition) and rain gauge data (all AMC conditions) are reported in Table 5. The reliability of the estimated rainfall thresholds is evaluated by performing a back analysis on the flood events of the period 1998-2010. The last two years of the dataset were not used in the calibration phase, but only in the validation stage. Even if the numerical values of the rainfall thresholds are quite similar, the analysis of the performance of each methodology offers interesting results. The best performance in terms of hits is for the utility-entropy measure of risk (λ = 0.5), followed by the utility-entropy measure of risk (λ = 1). The methods based on hydrological simulation performed quite well, especially using radar data (a high number of hits, only one MA). The presence of false alarms seems to be related to an overestimation of the cumulative rainfall obtained from radar data. However, the results of the efficiency based on radar measures are influenced by the limited availability of data (only the AMCIII conditions were examined). The worst performance is obtained with the Bayesian approach. In fact, the numbers of hits and false alarms are comparable, and the number of missed alarms is high. This is probably influenced by the utility function parameters, which need a calibration on the examined river basin. The previous results are synthetically summarized by the skill scores (Table 6). All the methodologies have a positive skill score, ranging from 0.17 (Bayesian method) to 0.60 (entropy approach). It is interesting to highlight that the thresholds evaluated via hydrological simulation (rain gauge data) offer the same skill score as the utility-entropy measure of risk. The skill scores are therefore encouraging for the efficiency of the proposed methodologies as flood warning tools in a future operative framework. However, a wider data set is needed to achieve a more accurate reliability evaluation of each examined methodology.

Discussion and conclusions

This work presents a simplified model for the management of alert systems used for flood events. Values in excess of a threshold trigger prevention actions and an emergency system alert. The definition of threshold values is obtained via a probabilistic approach and by a simulation model based on weather radar data. Two probabilistic methodologies were compared by back analysis.
In the Bayesian approach proposed by Martina et al. (2006), rainfall threshold values depend on the decision-maker's perception of risk. Using the utility-entropy risk function approach, a combination of objective (represented by the entropy function) and subjective (represented by the expected value of the utility function) components permits the evaluation of rainfall threshold values, weighting the subjective perception of the stakeholder through an appropriate value of the trade-off coefficient. By imposing a balance parameter λ = 1, the subjective perception of the decision-maker does not affect the determination of the threshold values, which are obtained exclusively by minimizing the information entropy. This methodology is thus more objective and offers the best performance in terms of skill score. Thresholds obtained by hydrological simulation based on weather radar data perform quite well, but there is still a need for more data in order to achieve more accurate performance testing.

A question of growing importance is the study of the uncertainty of estimated rainfall threshold values (Ntelekos et al., 2006; Villarini et al., 2010b). In fact, every proposed approach to rainfall threshold evaluation is affected by various sources of uncertainty. For example, considering the hydrological model, the uncertainties related to the inputs (rain gauge data, radar data, discharge data evaluated with the rating curve, basin characterization) influence the outputs (calibrated basin parameters and simulated discharges). In this work only an overall evaluation of uncertainty is performed. As shown in the lower panels of Figs. 5-9, the confidence intervals (95 % and 99 %) of the proposed rainfall thresholds were evaluated. Clearly, the greater the source of uncertainty, the wider the confidence intervals. Future studies should investigate more carefully the influence of each component on rainfall threshold values. Associating an uncertainty value with rainfall thresholds allows the users to make optimal decisions about issuing or not issuing flood warnings.

Fig. 1. Utility functions for different risk cases. The dotted line represents the function if the alert is issued, the solid line if not.

Fig. 2. Study area and radar position. The red dot in the lower right-hand corner represents the position of the Polar 55C radar. A 2 km × 2 km grid was overlaid over the map of the entire basin, located in the northeast with respect to the radar. The green dots represent the available rain gauge stations in the study area.

Fig. 4. Calibration and validation of the AMCIII class hydrologic model for the Mignone River basin on the basis of the observed hydrographs at the "Aurelia" gauge station in December 2008 with radar data.

Fig. 5. Threshold values evaluated by the Bayesian approach for the Mignone River basin: "real" case (solid line), "prone" case (dotted line) and "averse" case (dashed line) for the AMCIII soil condition (upper panel) and the uncertainty associated with the "real" case threshold (lower panel).

Fig. 7. Threshold values evaluated via the utility-entropy measure-of-risk approach (λ = 1) for the Mignone River basin, for all the AMC conditions: AMCI (dashed line), AMCII (dotted line) and AMCIII (solid line) in the upper panel. The uncertainty associated with the AMCIII threshold is in the lower panel.

Fig. 8. Threshold values evaluated by the hydrological model calibrated with radar data for AMCIII soil conditions (upper panel) and the uncertainty associated with the hyeto 1 threshold for the Mignone River basin (lower panel).
Fig. 9. Threshold values evaluated by the hydrological model calibrated with rain gauge data for AMCIII soil conditions (upper panel) and the uncertainty associated with the hyeto 1 threshold for the Mignone River basin (lower panel).

Table 2. Marginal distribution parameters of the observed cumulated values v and corresponding peak discharges q, sorted by duration.

Table 3. Parameter values for the models calibrated with radar data.

Table 4. Threshold values evaluated by each method for the Mignone River basin.

Table 5. Two-by-two contingency table for assessing the efficiency of the rainfall threshold evaluation methodologies.

Table 6. Reliability summary analysis: values of the hit rate, false alarm rate and skill score are presented in order to compare the methodologies.
2015-03-21T17:44:09.000Z
2011-07-27T00:00:00.000
{ "year": 2011, "sha1": "e171bd9631aab65d0de145c17daa8246fc753cee", "oa_license": "CCBY", "oa_url": "https://nhess.copernicus.org/articles/11/2061/2011/nhess-11-2061-2011.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "e171bd9631aab65d0de145c17daa8246fc753cee", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Geology" ] }
195761719
pes2o/s2orc
v3-fos-license
Systematic exploration of local reviews of the care of maternal deaths in the UK and Ireland between 2012 and 2014: a case note review study

Objectives: Local reviews of the care of women who die in pregnancy and post-birth should be undertaken. We investigated the quantity and quality of hospital reviews.

Design: Anonymised case note review.

Participants: All 233 women in the UK and Ireland who died during or up to 6 weeks after pregnancy from any cause related to or aggravated by pregnancy or its management in 2012-2014.

Main outcome measures: The number of local reviews undertaken. Quality was assessed by the composition of the review panel, whether root causes were systematically assessed and whether actions were detailed.

Results: The care of 177/233 (76%) women who died was reviewed locally. The care of women who died in early pregnancy and after 28 days post-birth was less likely to be reviewed, as was the care of women who died outside maternity services and who died from mental health-related causes. 140 local reviews were available for assessment. Multidisciplinary review was undertaken for 65% (91/140). External involvement in the review occurred in 12% (17/140), and involvement of the family in 14% (19/140). The root causes of deaths were systematically assessed according to national guidance in 13% (18/140). In 88% (123/140), actions were recommended to improve future care, with a timeline and person responsible identified in 55% (77/140). Audit to monitor implementation of changes was recommended in 14% (19/140).

Conclusions: This systematic assessment of local reviews of care demonstrated that not all hospitals undertake a review of the care of women who die during or after pregnancy, and in the majority quality is lacking. The care of these women should be reviewed using a standardised robust process including root cause analysis to maximise learning, undertaken by an appropriate multidisciplinary team who are given training, support and adequate time.

Page 10: Researchers in Illinois, USA, compared statewide and regional reviews of maternal death. They found "The statewide MMRC found more potential preventability and determined that preventability was associated with provider and systems factors, not patient factors. Observed discrepancies between regional perinatal center and statewide MMRC reviews were likely due to the complexity of cases selected for review, the multidisciplinary external composition of the review team, and the de-identification of cases. Multidisciplinary statewide expert panels should be implemented in addition to local and regionalized reviews." Geller, S. E., et al., Maternal & Child Health Journal, Dec 2015, Vol. 19, Issue 12, pp. 2621-2626. You may find this interesting.

Lines 29-30: Please clarify what is meant by a "local team who may not be independent" — independent from whom? Implications?

Lines 35-38: Expand on your observation of an apparent tension between 'no blame' and 'just' cultures … emotional tensions… professional hierarchies. How did you see this in your data? Or do these observations arise from the authors' professional experiences?

Lines 38-39: "solutions identified". By whom? How do these solutions track back to the data presented in the paper?

REVIEWER: Serena Donati, National Institute of Health, Italy. REVIEW RETURNED: 20-Mar-2019.

GENERAL COMMENTS: The paper is very interesting for those who manage a maternal mortality surveillance system because it highlights the critical issues related to the review process of maternal death cases.
It is clear that the quality of the reviews deserves improvement and that this aspect should be properly monitored and evaluated by an enhanced surveillance system. On the other hand, the paper could be too specialized for the general public, the descriptive analyses are perhaps a bit poor and the inferability of the results, as reported by the authors, is limited. I advise the authors to shorten and simplify the title of the paper.

Reviewer comments / Changes made / Line number

Reviewer 1: Thank you for this important work to examine the number and quality of local reviews of maternal deaths. The paper is well written, mostly clear and worthy of publication. I have a few comments and requests for clarity regarding some of the methods and the discussion.
Thank you. We welcome the comments to improve the paper.

Added to sentence for clarity: These CEs use multi-disciplinary teams of clinicians from outside the region where the woman's death occurred, to review anonymised case notes (medical records) and assess the care given against national guidelines. Assessment is undertaken by these independent reviewers, and a consensus regarding whether care was good or whether improvements were noted, and if so, whether these may have made a difference to the woman's outcome, is made at a multi-disciplinary meeting. (53-7)

Page 4, lines 43-47: Please clarify whether the 'authors' are the same as the 'investigators'? What are the background/training of the authors and/or investigators?
Yes, the authors are the investigators, and 'between investigators' has been removed to reduce confusion. Two authors are midwives and researchers and were the primary assessors of the case notes and reviews. The other author involved in the assessments is a researcher. All three collaboratively worked together to utilise the proforma in a standardised manner. (97)

Also, are 'notes' the same as 'medical records'? How do 'case notes' relate to these two?
At the first reference to case notes, 'medical records' has been inserted in brackets to add clarity to the term. Notes, case notes and medical records had been used interchangeably; these terms have been changed where needed, to all state 'case notes'. (9, 55, 158, 160)

Line 34: I see an extra comma, n=68,).
Removed. (162)

Page 7, lines 27-29: Please clarify the first sentence — "had a documented review on the care received contained within the medical records…" I don't quite understand what this means.
Revised sentence to say: Of the women who died, 60% (n=140) had a documented local review of the care received (the end of the sentence has been removed). (158-9)

Page 10: Researchers in Illinois, USA, compared statewide and regional reviews of maternal death. They found "The statewide MMRC found more potential preventability and determined that preventability was associated with provider and systems factors, not patient factors. Observed discrepancies between regional perinatal center and statewide MMRC reviews were likely due to the complexity of cases selected for review, the multidisciplinary external composition of the review team, and the de-identification of cases. Multidisciplinary statewide expert panels should be implemented in addition to local and regionalized reviews."
Added the following sentence into the Discussion: A comparison of American local and statewide reviews of 31 maternal deaths found that state reviews found more preventable system rather than patient factors when the cases were anonymised and investigated by an external review team.
(227-230)

Lines 29-30: Please clarify what is meant by a "local team who may not be independent" — independent from whom? Implications?
Removed 'not be independent' and added instead: …have provided care or work alongside those who have, which may reduce objectivity. (238)

Lines 35-38: Expand on your observation of an apparent tension between 'no blame' and 'just' cultures … emotional tensions… professional hierarchies. How did you see this in your data? Or do these observations arise from the authors' professional experiences?
Tension was not seen in the data but arises from observations from professional experience and some literature (e.g., Peerally et al., 2017). Added: A balance needs to be maintained between system and individual accountability; reviews should not be a scapegoat exercise, while any professional failure must focus on learning and quality improvement. (244)

Line 38-39: "solutions identified". By whom? How do these solutions track back to the data presented in the paper?
Suggestions that have already been made have had a reference added. Added for clarity: Suggested solutions to support quality balanced reviews include the need for professionalisation… (246, 248, 249)

Reviewer 2: The paper is very interesting for those who manage a maternal mortality surveillance system because it highlights the critical issues related to the review process of maternal death cases. It is clear that the quality of the reviews deserves improvement and that this aspect should be properly monitored and evaluated by an enhanced surveillance system.
Thank you, we agree with the reviewer's comment and have added this to the manuscript. (249-251)

On the other hand, the paper could be too specialized for the general public, the descriptive analyses perhaps a bit poor and the inferability of the results, as reported by the authors, limited.
We agree the paper is too specialised for the general public. We have therefore specifically aimed the paper at healthcare professionals who understand the concept of case review even if they have not been involved in them, and grouping within a BMJ Open specific topic area will emphasise this. The analysis has been strengthened by the suggested comments from the reviewers and made more generally applicable by reference to additional data from the US, as well as existing maternal mortality surveillance systems. It supports existing evidence that local reviews are often not consistent or robust and do not prevent reoccurrence. While there are always limitations to studies, we consider that this study's limitations do not negate the impact of the findings and the potential transferability of the results. We believe that additional highlighting of the role of enhanced surveillance systems, as the reviewer notes above, strengthens the vitally important message of this article in supporting a rising awareness of the need to improve learning from critical incidents.

I advise the authors to shorten and simplify the title of the paper.
The title has been shortened to: Systematic exploration of local reviews of the care of maternal deaths in the UK and Ireland between 2012-2014: a case note review study. (1-2)
2019-07-02T13:47:54.799Z
2019-06-01T00:00:00.000
{ "year": 2019, "sha1": "86352536165cacfcea272e966560330c156cf48f", "oa_license": "CCBY", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/9/6/e029552.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "20d60c719b40570a6ed0ea699df4853ce1f286f2", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
17463888
pes2o/s2orc
v3-fos-license
RF cavity design exploiting a new derivative-free trust region optimization approach

In this article, a novel derivative-free (DF) surrogate-based trust region optimization approach is proposed. In the proposed approach, quadratic surrogate models are constructed and successively updated. The generated surrogate model is then optimized instead of the underlying objective function over trust regions. Truncated conjugate gradients are employed to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using few data points, of order O(n), where n is the number of design variables. The proposed approach adopts weighted least squares fitting for updating the surrogate model, instead of the interpolation that is commonly used in DF optimization. This makes the approach more suitable for stochastic optimization and for functions subject to numerical error. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical benchmark test problems. It is also employed to find the optimal design of an RF cavity linear accelerator, with a comparative analysis against a recent optimization technique.

Introduction

In general, engineering systems are characterized by some designable parameters and some performance measures. The desired performance of a system (the design specifications) is described by specifying bounds on the performance measures of the system, which are set by the designer. Conventional system design aims at finding values of the system designable parameters that merely satisfy the design specifications. In general, there will be a multitude of acceptable designs. However, for contemporary engineering design, another criterion (the objective function) can be chosen for comparing the different alternative acceptable designs (the optimization problem) and for selecting the best one (the optimal system design). Naturally, the system performance measures and the objective function are functions of the system parameter values and are evaluated through system simulations. For CPU-intensive engineering systems, the high expense of the required system simulations may obstruct the optimization process. In practice, robust optimization methods that utilize the fewest possible number of function evaluations are greatly needed [1,2]. Another difficulty is the absence of any gradient information, as the simulation cost required to evaluate gradient information is prohibitive in practice [3]. Attempting to approximate the function gradients using the finite difference approach requires many more function evaluations, which greatly increases the computational cost. Another objection to estimating the gradients by finite differencing is that the estimated function values are usually contaminated by some numerical noise due to estimation uncertainty. Hence, gradient-based optimization methods cannot be applied here. For such optimization problems, only derivative-free optimization (DFO) methods are applicable [1,4-7]. Further, derivative-free trust region methods usually handle such problems more efficiently, as the trust region framework constitutes one of the most important globally convergent optimization frameworks, with the ability to converge to a solution starting from an arbitrary initial point [8].
In addition, these methods use computationally cheap surrogate-based models that can be constructed by using function evaluations at some selected points. These surrogate models may be response surfaces, radial basis functions, neural networks, kriging, etc. The majority of the existing derivative-free trust region techniques have the following features: they require a relatively large number of function evaluations, O(n²) (where n is the number of system design variables), to construct the initial quadratic model, and the quadratic surrogate models are constructed by interpolating the objective function at a constant number of points, so that when a new point is obtained a previous point is dropped. In addition, these algorithms usually ignore the valuable information contained in all previously evaluated expensive function values. The work presented in this article introduces a new derivative-free trust region approach that neither requires nor approximates the gradients of the objective function. It implements a non-derivative optimization method that combines a trust region framework with quadratic fitting surrogates for the objective function [4,5]. The principal operation of the method relies on building, successively updating and optimizing quadratic surrogate models of the objective function over trust regions. The quadratic surrogate models reasonably reflect the local behavior of the objective function in a trust region around the current iterate, and they are optimized instead of the objective function over trust regions. The truncated conjugate gradient method of Steihaug [9] is used to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using few data points, of order O(n). In each iteration of the proposed approach, the surrogate model is updated using a weighted least squares fitting. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical benchmark test problems; comparisons with a recent optimization technique [6] are also included.

Linear accelerators (LINACs) provide beams of high quality and high energy in which charged particles move on a linear path and are accelerated by electromagnetic fields. The modern LINAC typically consists of sections of specially designed waveguides that are excited by RF electromagnetic fields, usually in the very high frequency (VHF) range. The accelerating structures are tuned to resonance and are driven by external, high-power RF power tubes, such as klystrons. The accelerating structures must efficiently transfer the electromagnetic energy to the beam, and this is accomplished through an optimized configuration of the internal geometry, so that the structure can concentrate the electric field along the trajectory of the beam, promoting maximal energy transfer, by adding nose cones to create a region of more concentrated axial electric field, as shown in Fig. 1. RF cavity analysis and design have attracted the attention of researchers and engineers due to their extensive applications [10-17]. Applications include medicinal purposes in radiation therapy, food sterilization, the transmutation of nuclear fuel waste, etc. Design tools include the computer code SUPERFISH [18], the 3-D code MAFIA [19] and CST Studio Suite [20]. The design of accelerator RF cavities may include the optimization of some of the cavity parameters.
Among the parameters characterizing the operation of RF cavities are the average accelerating field Eacc, the ratios of peak fields to accelerating field (Epk/Eacc, Hpk/Eacc), the quality factor, and the cavity shunt impedance R-shunt [21]. The parameters considered for optimization depend on the power level fed to the cavity, which limits the average accelerating field; the constraints on these parameters are imposed by the application. For a low power level feed, optimization may focus on maximizing the shunt impedance, whereas for high power operation, limiting the peak fields inside the cavity is of concern in order to minimize multipacting [22]. In this work we focus on low-power-fed cavities, where maximizing the shunt impedance is the main concern, treated through our new optimization approach. The newly proposed trust region (TR) optimization approach is capable of solving design problems with either 2D or 3D simulators. It is expected to work as well if a 3D simulator were employed, at the expense of more computational time. Most accelerators use a body-of-revolution cavity structure, which can be solved as a 2D structure, saving computational resources. However, the proposed approach has been successfully employed in microwave filter design utilizing a 3D full-wave EM solver [23].

The new trust region approach

The computationally expensive objective function is locally approximated around a current iterate x_k by a computationally cheaper quadratic surrogate model M(x), which can be placed in the form

M(x) = a + b^T (x - x_k) + (1/2) (x - x_k)^T B (x - x_k),   (1)

where a ∈ R, the vector b ∈ R^n, and the symmetric matrix B ∈ R^(n×n) are the unknown parameters of M(x). The total number of the model parameters is q = (n + 1)(n + 2)/2. These parameters can be evaluated by interpolating the objective function at q points.

Fig. 1. Cross section of the cavity with nose cones and spherical outer walls.

Initial model

Let x_0 be the initial point that is provided by the user. Initially, assuming that B is a diagonal matrix, the number of points required to construct the initial model is m = 2n + 1 [7]. The initial m points x_i, i = 1, 2, …, m, can be chosen as follows [6,24]:

x_1 = x_0,   x_(i+1) = x_0 + Δ_1 e_i,   x_(n+i+1) = x_0 - Δ_1 e_i,   i = 1, …, n,   (2)

where Δ_1 is the initial trust region radius that is provided by the user, and e_i is the ith coordinate vector in R^n. The initial quadratic model M^(1)(x) has the parameters a^(1), the vector b^(1), and the n diagonal elements of the model Hessian matrix B^(1). These parameters are computed by requiring that the initial model interpolates the objective function f(x) at the initial m points given in (2). Therefore, the initial model parameters are obtained by satisfying the matching conditions

M^(1)(x_i) = f(x_i),   i = 1, 2, …, m.   (3)
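A compact sketch of this initial construction follows; with B restricted to a diagonal matrix, the interpolation conditions (3) at the points (2) reduce to central-difference formulas. Variable names are ours.

```python
import numpy as np

def initial_model(func, x0, delta1):
    """Build the initial diagonal quadratic model from m = 2n+1 points.

    Samples f at x0 and at x0 +/- delta1 * e_i; the matching conditions
    then give a = f(x0), b_i and B_ii by central differences.
    """
    n = len(x0)
    a = func(x0)
    b = np.zeros(n)
    Bdiag = np.zeros(n)
    for i in range(n):
        e = np.zeros(n); e[i] = 1.0
        fp = func(x0 + delta1 * e)
        fm = func(x0 - delta1 * e)
        b[i] = (fp - fm) / (2.0 * delta1)            # first-order coefficient
        Bdiag[i] = (fp - 2.0 * a + fm) / delta1**2   # diagonal curvature
    return a, b, np.diag(Bdiag)
```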
Model optimization

At the kth iteration, assume that x_k is the current solution point. The model M^(k)(x) is then minimized, in place of the objective function, over the current trust region, and a new point is produced by solving the trust region sub-problem

min_s M^(k)(x_k + s)   subject to   ||s|| ≤ Δ_k,   (4)

where s = x - x_k, Δ_k is the current trust region radius, and ||·|| throughout is the l2-norm. This problem is solved by the truncated conjugate gradient method of Steihaug [9]. It is identical to the standard conjugate gradient method as long as the iterates are inside the trust region. If the conjugate gradient method terminates at a point within the trust region, this point is a minimizer of the model. If the new iterate is outside the trust region, a truncated step lying on the region boundary is taken instead. Also, the method treats the case where the minimum is in the opposite direction of the conjugate direction, which is due to the non-convexity of the model [9]. One good property of this method is that the computed solution has a sufficient reduction property, which was proved by Bandler and Abdel-Malek [25].

Let s* denote the solution of (4); then a new point x_n = x_k + s* is obtained. The achieved actual reduction in the objective function is compared to the reduction predicted by the model by computing the reduction ratio

r_k = (f(x_k) - f(x_n)) / (M^(k)(x_k) - M^(k)(x_n)).   (5)

This ratio reflects how much the surrogate model agrees with the objective function within the trust region. The trust region radius and the current iterate are updated as follows: if r_k is sufficiently high, i.e., r_k ≥ 0.7, there is good agreement between the model and the objective function over this step; hence, it is beneficial to expand the trust region for the next iteration and to use x_n as the new center of the trust region. If r_k is positive but not close to 1, i.e., 0.1 ≤ r_k < 0.7, the trust region radius is not altered. On the other hand, if r_k is smaller than a certain threshold, r_k < 0.1, the trust region radius is reduced. These rules constitute the updating formulas (6) and (7) for Δ_k and x_k. It is to be mentioned that the current center is always the point of least function value achieved so far.
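The ratio test and the update rules can be sketched as follows; the thresholds 0.7 and 0.1 come from the text, while the expansion and contraction factors are assumptions, since the displays of Eqs. (6)-(7) are not reproduced here.

```python
def trust_region_update(f_k, f_n, m_k, m_n, x_k, x_n, delta_k,
                        grow=2.0, shrink=0.5):
    """Reduction ratio (5) and update rules (6)-(7), as a sketch.

    f_k/f_n are objective values and m_k/m_n model values at the old
    and trial points. The factors grow and shrink are assumed.
    """
    r_k = (f_k - f_n) / (m_k - m_n)        # reduction ratio, Eq. (5)
    if r_k >= 0.7:
        delta_next = grow * delta_k        # model trusted: expand
    elif r_k >= 0.1:
        delta_next = delta_k               # keep the radius
    else:
        delta_next = shrink * delta_k      # poor agreement: reduce
    x_next = x_n if f_n < f_k else x_k     # center = best point so far
    return r_k, delta_next, x_next
```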
Model update

When a new point is available, the current quadratic model M^(k)(x) is updated so that the point of lowest objective function value, x_k, is the center of the kth trust region. The model takes the form

M^(k)(x) = a^(k) + (x - x_k)^T b^(k) + (1/2) (x - x_k)^T B^(k) (x - x_k).   (8)

The parameters a^(k), b^(k) and B^(k) are evaluated employing the parameter values of the previous model M^(k-1)(x) in addition to all available function values. The constant a^(k) is assigned the value of f(x_k), i.e., a^(k) = f(x_k). The model is updated in two steps: first the vector b^(k) is updated, then the Hessian matrix B^(k), as follows.

Step 1: Updating the vector b^(k). The vector b^(k) can be obtained using only n points. However, using the n most recent points may result in an ill-conditioned system of linear equations. In order to avoid this, it is proposed to use the least squares approximation with the most recent 2n points. So, the vector b^(k) is evaluated such that the model M^(k)(x) fits the last 2n points obtained, x_i, i = 1, 2, …, 2n, i.e., the following condition should be satisfied:

M^(k)(x_i) ≈ f(x_i),   i = 1, 2, …, 2n.   (10)

When computing the vector b^(k), the matrix B^(k) is temporarily assigned the value of the previous model Hessian matrix, B^(k-1); hence, with s_i = x_i - x_k, the vector b^(k) is obtained by solving the system of linear equations

s_i^T b^(k) = f(x_i) - a^(k) - (1/2) s_i^T B^(k-1) s_i,   i = 1, 2, …, 2n,   (11)

i.e., in matrix form, S b^(k) = y with rows s_i^T. This system is over-determined, and the least squares approximation for b^(k) is

b^(k) = (S^T S)^(-1) S^T y.   (12)

Step 2: Updating the matrix B^(k). The model Hessian matrix B^(k) is evaluated using the updating formula

B^(k) = c B^(k-1) + sym(q p^T),   (13)

where c is a positive constant, 0.5 < c < 1, the vector p ∈ R^n is determined below, sym(·) denotes symmetrization by resetting the off-diagonal elements to their average values, b_ij = b_ji ← (b_ij + b_ji)/2, and

q = sign(diag(B^(k-1))) ∘ sqrt((1 - c) |diag(B^(k-1))|),   (14)

with ∘ denoting the element-wise product. This choice of q ensures that changes in B^(k) occur gradually. The vector p is evaluated such that the model M^(k)(x) tries to fit all the available m points obtained so far, x_i, i = 1, 2, …, m, i.e., the following condition should be satisfied:

M^(k)(x_i) ≈ f(x_i),   i = 1, 2, …, m;   (15)

i.e., the vector p is obtained by solving the weighted system of linear equations

(1/2) w_i (s_i^T q)(s_i^T p) = w_i (f(x_i) - a^(k) - s_i^T b^(k) - (1/2) s_i^T [c B^(k-1)] s_i),   i = 1, 2, …, m.   (16)

To obtain a more accurate model in the neighborhood of the current center, the available points are assigned different weights w_i, i = 1, 2, …, m, according to their distances from the trust region center; in the proposed approach the weight w_i associated with each equation decreases with this distance at a rate controlled by a positive constant c_1 ≥ 1 (Eq. (17)). The system (16) is over-determined (m > n); writing it as A p = v, the least squares approximation for p is

p = (A^T A)^(-1) A^T v.   (18)

After getting the vector p, the term q p^T is calculated, the matrix is made symmetric as described above, and the new Hessian matrix B^(k) is updated according to Eq. (13).

The model can be improved by generating a new point s_new = x_new - x_k, which is chosen to be on the boundary of the trust region so that it improves the distribution of points around the center of the trust region. A suggested way to find s_new is to solve the problem

max_s Σ_i (s_i^T s)²   such that   s_i^T s < 0 ∀ i   and   ||s|| ≤ Δ,   (19)

where s_new is selected to maximize the sum of squares of the projections of the vector s_new on the other s_i, i = 1, 2, …, n_s vectors, n_s being the number of available points. After generating s_new, the function value f(x_new) is computed. If f(x_new) is found to be less than f(x_k), then x_new is considered as the new trust region center for the subsequent iteration; otherwise, x_new is just added to the available set of points.

Algorithm

A complete algorithm for the proposed method is given below (see also the illustrative flowchart in Fig. 2).

1. Choose the initial point x_0 and the initial trust region radius Δ_1; evaluate f at the m = 2n + 1 initial points given in (2).
2. Construct the initial quadratic model M^(1)(x) from the matching conditions (3).
3. Solve the trust region sub-problem (4) using the truncated conjugate gradient method to obtain s* = x_n - x_k of the model M^(k)(x) over the trust region.
4. Evaluate f(x_n) and compute the reduction ratio by substituting in (5).
5. Update the trust region radius to obtain Δ_(k+1) using (6).
6. Determine the trust region center of the next iteration, x_(k+1), based on x_k and r_k using (7). If ||f(x_(k+1)) - f(x_k)|| ≤ δ, the algorithm terminates with x_opt = x_(k+1) and f_opt = f(x_(k+1)). If r_k is negative for two successive iterations, go to Step 9; else continue.

Examples

The effectiveness of the proposed algorithm is demonstrated through two benchmark examples. All results are compared with those obtained by NEWUOA (NEW Unconstrained Optimization Algorithm) by Powell [6]. The performance is measured by the number of function evaluations N required to reach the optimal solution.

The 2D Beale function

The function is given by [26]

f(x) = Σ_(i=1)^3 [a_i - x_1 (1 - x_2^i)]²,   (20)

where a_1 = 1.5, a_2 = 2.25, and a_3 = 2.625. This function has a valley approaching the line x_2 = 1, and has a minimum of 0 at (3, 0.5)^T. The initial values used for x_0 and Δ_1 are (0.1, 0.1)^T and 0.8, respectively. The results in Table 1 and Fig. 3 compare the optimal value obtained by applying the proposed technique versus NEWUOA with the same number of function evaluations N. It is to be noticed that, starting from the same initial point and after only 11 iterations, the proposed algorithm gives a function value of 0.8065 while NEWUOA gives 14.2031.
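For reference, the Beale function of Eq. (20) can be coded directly from the constants given above; the check at the known minimizer is a quick sanity test.

```python
import numpy as np

def beale(x):
    """2-D Beale test function, f = sum_i (a_i - x1*(1 - x2**i))**2."""
    a = (1.5, 2.25, 2.625)
    x1, x2 = x
    return sum((a[i] - x1 * (1.0 - x2 ** (i + 1))) ** 2 for i in range(3))

print(beale(np.array([3.0, 0.5])))   # 0.0 at the known minimizer
print(beale(np.array([0.1, 0.1])))   # value at the starting point used here
```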
The 3D Box function

The function was proposed in [27]; it has the form

f(x) = Σ_j [exp(-t_j x_1) - exp(-t_j x_2) - x_3 (exp(-t_j) - exp(-10 t_j))]²,   t_j = 0.1 j,   j = 1, …, m.

This function has a minimum at (1, 10, 1)^T, and also along the line {(a, a, 0)^T}, with value 0. The initial values used for x_0 and Δ_1 are (0, 10, 2)^T and 9.9, respectively. Table 2 shows a comparison of the optimal value obtained after N function evaluations using the proposed algorithm versus NEWUOA (see also Fig. 4).

In the above numerical examples, it is to be noticed that at the beginning of the optimization process the proposed algorithm is much faster than NEWUOA. However, as the optimization gets closer to the optimum, the methods based on interpolation become more accurate, as expected. This explains why the proposed algorithm is well suited for objective functions that have some uncertainty in their values or are subject to statistical variations. This may occur in the design of systems whose parameter values are subject to known but unavoidable statistical fluctuations [1,28]. Also, the algorithm may be useful for surrogate-based system design [2,29]. These surrogates are updated during the optimization process, and a few iterations of the optimization process are sufficient at the beginning. In this case the new technique produces a significant reduction in few iterations.

Optimized design of RF cavity

The RF cavity is a major component of linear accelerators [30,31]. The structure of the RF cavity must efficiently transfer the electromagnetic energy to the charged particle beam. This can be accomplished through an optimized configuration of its internal geometry, by adding nose cones to create a region of more concentrated axial electric field along the path of the electron beam, as shown in Fig. 1. The most useful figure of merit for high field concentration along the beam axis and low ohmic power loss in the cavity walls is the effective shunt impedance per unit length ZT², where T is the transit-time factor (a measure of the energy gain reduction caused by the sinusoidal time variation of the field in the cavity [32]). One of the main objectives in cavity design is to choose the geometry so as to maximize the effective shunt impedance per unit length. This corresponds to increasing the energy delivered to the beam compared to that thermally lost in the cavity walls. The effective shunt impedance per unit length is usually expressed in mega-ohms per meter and is defined by

ZT² = (E_0 T)² L / P,   (21)

where P is the thermal power loss in the walls of the cavity, V_0 = ∫ E(z) dz = E_0 L, and E_0 is the average axial electric field along the cavity axis of length L.

The technique is applied to an RF cavity with resonance frequency 9.4 GHz, shown in Fig. 5. The objective is to maximize the effective shunt impedance per unit length. In order to do so, we optimize the axial z positions of ten points that describe the cavity curvature through a spline curve. The axial positions z = (z_1, z_2, …, z_10)^T in the z-direction are taken as the design parameters. The radial positions of these points are chosen on a logarithmic scale along the r-direction. It is to be noted that during the variation of the curvature, the resonance frequency is always kept at 9.4 GHz. The initial values used for the ten axial positions z_0 are all set to 0.6 cm, and Δ_1 is set to 0.02 cm. Cavity design generally requires an electromagnetic field solver that solves Maxwell's equations numerically for the specified boundary conditions. In the simulations, POISSON and SUPERFISH are used as the main solver programs, from a collection of programs from LANL [18,33].
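Once E_0, T and the wall loss P are extracted from a solver run, Eq. (21) is a one-line post-processing step, as in the sketch below; the example numbers are invented, and how these quantities are parsed from the SUPERFISH output is not shown.

```python
def effective_shunt_impedance(E0, T, L, P):
    """Effective shunt impedance per unit length, ZT^2 = (E0*T)**2 * L / P.

    E0: average axial field (V/m), T: transit-time factor, L: cavity
    length (m), P: wall power loss (W). The result is in ohm/m; divide
    by 1e6 for Mohm/m.
    """
    return (E0 * T) ** 2 * L / P

# Example with made-up numbers: 2 MV/m, T = 0.85, 1.5 cm cell, 1 kW loss
print(effective_shunt_impedance(2e6, 0.85, 0.015, 1e3) / 1e6, "Mohm/m")
```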
The solver is used to calculate static magnetic and electric fields and radio-frequency electromagnetic fields in either 2-D Cartesian coordinates or axially symmetric cylindrical coordinates. The code SUPERFISH is used to solve for the axisymmetric TM0nl modes, for the field components H_phi, E_r and E_z. The solution is obtained by solving the Helmholtz equation using the finite element method (FEM) over a triangular mesh, subject to the proper boundary conditions and imposed symmetries [34]. The design algorithm shown in Fig. 6 is implemented in MATLAB code. An initial case is chosen corresponding to ten z positions of points, with the cavity curvature described by a spline curve (step 2). The spline-interpolated curve is then sampled at 100 points, and those sampled points are considered connected by piecewise linear segments approximating the cavity curvature. This piecewise linear description is fed to the AUTOMESH program to generate the mesh (step 3). The solution of the lowest TM mode of the cavity is obtained in step 4 by calling SUPERFISH, and the frequency obtained in step 5 is used to scale the cavity dimensions to keep the resonance frequency at 9.4 GHz (step 6). The corresponding scaling is reflected in the obtained cavity shunt impedance (step 7), and this value is fed to the optimizer algorithm to determine the new positions of the ten points. The process is then repeated starting from step 2. The results for the effective shunt impedance per unit length of the RF cavity, in mega-ohms per meter, after N function evaluations, for both the proposed algorithm and NEWUOA, are shown in Table 3. It is to be mentioned that, starting from the same initial point, the convergence of the proposed algorithm is as good as that of NEWUOA. However, the advantage of the proposed algorithm is its easy implementation and accessibility for update and modification. The optimal cavities obtained using the proposed algorithm and NEWUOA are shown in Figs. 7 and 8, respectively. It is worth mentioning that the proposed optimized structure could be criticized for containing a sharp-edged nose, which is difficult to manufacture and is a point of field singularity that can cause breakdown. One way to overcome that problem is to add some curvature to the sharp nose tip, which would slightly reduce the realized shunt impedance.
2018-04-03T01:18:38.834Z
2014-08-30T00:00:00.000
{ "year": 2014, "sha1": "4a4185efc87d818a0cf22b76b731a24988397ebd", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.jare.2014.08.009", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4a4185efc87d818a0cf22b76b731a24988397ebd", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
19867355
pes2o/s2orc
v3-fos-license
Pulsed high magnetic field measurement via a Rubidium vapor sensor

We present a new technique to measure pulsed magnetic fields based on the use of Rubidium in the gas phase as a metrological standard. We have therefore developed an instrument based on a laser inducing transitions at about 780 nm (the D2 line) in a Rubidium gas contained in a mini-cell of 3 mm x 3 mm cross section. To be able to insert such a cell in a standard high-field pulsed magnet we have realized a fibred probe kept at a fixed temperature. Transition frequencies for both the π (light polarization parallel to the magnetic field) and σ (light polarization perpendicular to the magnetic field) configurations are measured by a commercial wavemeter. One innovation of our sensor is that, in addition to monitoring the light transmitted by the Rb cell, which is usual, we also monitor the fluorescence emission of the gas sample from a very small volume, with the advantage of reducing the impact of the field inhomogeneity on the field measurement. Our sensor has been tested up to about 58 T.

I. INTRODUCTION

While several methods to measure a magnetic field precisely exist [1], nowadays a very accurate measurement of magnetic field is performed via the nuclear magnetic resonance (NMR) of hydrogen in a water molecule. Devices based on other techniques are calibrated with respect to NMR. According to the NMR technique, the value of the magnetic field B experienced by the hydrogen nucleus, i.e. the proton, is derived from the frequency ν_NMR of the microwave inducing the resonant spin flip of the proton,

ν_NMR = (γ_p / 2π) B,   (1)

where γ_p is the gyromagnetic factor of the proton in water. The measurement of γ_p with respect to the electron magnetic moment was first performed in water at 34.7 °C by Phillips, Cooke and Kleppner [2] in 1977 at a field of about 0.35 T. The recommended value of γ_p given in [3] is γ_p/(2π) = 42.57638507(53) MHz T⁻¹ (water, sphere, 25 °C). In commercial devices, for fields higher than 10 T the H2O molecule is replaced by the D2O molecule, which has a lower gyromagnetic factor than hydrogen, to keep the spin flip resonance frequency lower than 500 MHz (see e.g. [4]). This kind of apparatus is designed to measure continuous fields greater than 0.2 T over a volume of a few mm³ with a precision better than 1 ppm and an accuracy of 5 ppm. In the case of pulsed fields, i.e. fields varying on a timescale shorter than a second, pulsed NMR techniques have been developed recently to be used as a probe of matter properties (see [5] and refs. within), but not yet for metrological purposes.

* carlo.rizzo@lncmi.cnrs.fr

In this paper, we present a new technique to measure pulsed magnetic fields based on the optical transitions of Rubidium in the gas phase as a metrological standard. Optical magnetometry based on Rubidium vapor is already used to measure very low magnetic fields because of the precisely known Rubidium atomic parameters for its ground state [6]. Our goal is to extend to high magnetic fields the measurement capability of Rubidium gas by monitoring the optical transition frequencies between the ground state and the first excited states. As in the case of NMR, at large applied magnetic fields the transition frequency between well chosen quantum levels depends linearly on the applied magnetic field. We have therefore developed an instrument based on a narrow-band and stable laser inducing the D2 transition in Rubidium gas contained in a mini-cell within a volume of 0.13 mm³.
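For reference, the NMR standard of Eq. (1), against which other field sensors are calibrated, inverts trivially; the sketch below uses the recommended water-proton value quoted above.

```python
def field_from_nmr(nu_mhz, gamma_p_over_2pi=42.57638507):
    """Magnetic field in tesla from the proton spin-flip frequency in MHz,
    inverting Eq. (1) with the recommended water value quoted above."""
    return nu_mhz / gamma_p_over_2pi

# A 500 MHz resonance (the practical upper limit mentioned in the text)
# corresponds to a field of about 11.74 T:
print(field_from_nmr(500.0))
```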
To be able to insert such a cell in a standard high field pulsed magnet we have realized a fibred probe kept at a fixed temperature. Transition frequencies in both the π (light polarization parallel to the magnetic field) and σ (light polarization perpendicular to the magnetic field) configurations are measured by a commercial wavemeter. The design of our sensor follows the pioneering work initiated at NIST-Boulder around 2004 in order to develop miniaturized atomic clocks [7], as reviewed in detail by Budker et al. [6]. These microfabricated magnetometers have been used to detect fields of the order of pT produced by the human body, and in the domain of low field NMR for remote imaging or chemical species investigation [6]. A particularity of our sensor is that, in addition to monitoring the light transmitted by the Rb cell, which is usual, we also monitor the fluorescence emission of the gas sample, with the advantage of reducing the impact of the field inhomogeneity on the field measurement. Our sensor was tested up to about 58 T, which represents the highest field value to which a gas sample has been exposed using non-destructive field generation. Actually, as far as we know, the only other attempt similar to ours dates back to 1971 [8], when a field pulsed up to 33 T in less than a millisecond was measured by monitoring the mercury line at 253.7 nm. Averaging several hundred measurements, an uncertainty on the field value of about 0.04 % was obtained, which, as far as we understand, translates into an uncertainty of less than 0.1 % per pulse. This uncertainty was essentially limited by the uncertainty on the transition frequency measurement. In the case of magnetic fields obtained in a destructive way, measurements using spectroscopic techniques date back to 1966, when a field of about 500 T obtained by explosive flux compression was measured by observing Sodium and Indium lines [9]. More recently, fields in excess of 200 T have been measured by observing the splitting of the sodium doublet around 589 nm in the case of magnetically imploded targets for inertial confinement fusion [10]. Spectroscopy of Sodium atoms has also been used in the case of fields produced by exploding wires, to measure fields around 50 T in the eighties [11] and 20 T very recently [12]. Those investigations were limited by the collisional and thermal broadening of the absorption lines due to the explosion process. This is not the case for the present investigation. In section 2 we present our method to perform a full-optical magnetic field measurement based on alkali atoms. Section 3 describes the pulsed magnetic field coils that have been used in the present work. Section 4 is devoted to the probe and sensor design, which we explain in detail. In section 5 we explain how we have calibrated a pick-up coil that we have used to monitor the magnetic field pulse. The experimental set-up is described in section 6. Finally, section 7 presents the results we obtained, in terms of spectroscopy signals up to a field of about 58 T and in terms of a comparison between the field values given by our Rb sensor and the standard pick-up coil. A final section concludes our paper, including perspectives and applications of the present work.

II. METHOD
A. Full-optical magnetic field measurement

Our method can be easily presented for the optical second resonance line 5²S1/2 → 5²P3/2 of an idealized alkali atom without nuclear spin, and therefore without nuclear Zeeman shift and without the electron-nucleus hyperfine coupling [13,14]. In the following, we will consider only the linear Zeeman effect, assuming that the second order Zeeman effect [14] due to diamagnetism can be neglected at the field strengths of interest. For an applied magnetic field B, the eigenvalues E_g of the |J_g = 1/2, m_g = ±1/2⟩ ground eigenstates are determined by the Zeeman magnetic coupling and given by

E_g = g_5S μ_B m_g B,    (2)

where μ_B is the Bohr magneton in MHz/T (μ_B = 13996.245042(86) MHz/T in [3]) and g_5S ≈ 2 is the electron ground state g-factor. The ground state splitting for fields of a few tens of tesla is of the order of a few hundred GHz. The frequency determination of that splitting in the Rubidium ground state measures the magnetic field directly. Likewise, the eigenvalues E_e of the |J_e = 3/2, m_e = ±3/2, ±1/2⟩ excited eigenstates are determined by the Zeeman magnetic coupling and given by

E_e = g_5P μ_B m_e B,    (3)

where g_5P is the electron Landé g-factor of the 5²P3/2 state. Because g_5P ≈ 4/3, as discussed in the following, the excited state Zeeman splittings are similar to those in the ground state. The optical transition between the |J_g, m_g⟩ → |J_e, m_e⟩ states experiences a magnetic field Zeeman frequency shift Δν_Z given by

Δν_Z = μ_B (g_5P m_e − g_5S m_g) B,    (4)

and the optical transition frequency ν is given by

ν = ν_0^Rb + Δν_Z,    (5)

where ν_0^Rb represents the center of gravity of the D2 absorptions in either 85Rb or 87Rb, as reported in [15] and [16]. For optical σ± or π polarized transitions, given by m_e = m_g ± 1 or m_e = m_g, respectively, the Zeeman shift appearing in this equation is comparable to those quoted above for the ground state. The inversion of Eq. (4) allows one to derive the magnetic field from a measurement of Δν_Z. The high sensitivity associated with the measurement of optical absorption processes, and of the resulting fluorescence emission, makes optical detection an efficient choice. The experimental method is quite simple. Laser light kept at a fixed frequency excites Rubidium gas contained within a cell inserted in a magnet delivering a pulsed magnetic field. The transition of interest between two selected Rubidium states is monitored all along the pulse evolution, in order to determine the time when the magnetic field satisfies Eq. (4). The temporal form of the pulse may also be monitored using a pick-up coil. The value of the magnetic field given by the pick-up signal is therefore calibrated through the signal produced by the Rubidium gas cell.

B. Alkali atoms in a magnetic field

The Rubidium ground state eigenenergies in a magnetic field do not satisfy the simple relation of Eq. (2). In fact the two stable Rubidium isotopes, 85Rb and 87Rb, have a nuclear moment, I = 5/2 and I = 3/2, respectively, characterized by the nuclear Landé g-factor g_I (assumed negative in the following, as in [14][15][16]). The eigenenergy contributions from the electron-nucleus hyperfine coupling and the nuclear Zeeman energy lead to complex functional dependencies on the magnetic field. The magnetic response is characterized by the ratio between the magnetic interactions of electron and nucleus with the magnetic field, and the hyperfine electron-nucleus coupling. For the alkali J_g = 1/2 ground state, analytical expressions of the eigenenergies are given by the Breit-Rabi formula [17,18].
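A short sketch of the linear Zeeman shift of Eqs. (4)-(5) and its inversion for the idealized alkali; the constants and quantum numbers are those quoted in the text.

    MU_B = 13996.245042  # MHz/T, Bohr magneton as quoted in the text

    def zeeman_shift_mhz(b_tesla, g_e_factor, m_e, g_g_factor, m_g):
        """Linear Zeeman shift of the optical transition, Eq. (4)."""
        return MU_B * (g_e_factor * m_e - g_g_factor * m_g) * b_tesla

    def field_from_shift(delta_nu_mhz, g_e_factor, m_e, g_g_factor, m_g):
        """Invert Eq. (4) to obtain the field in tesla from the shift in MHz."""
        return delta_nu_mhz / (MU_B * (g_e_factor * m_e - g_g_factor * m_g))

    # sigma+ stretched transition m_g = 1/2 -> m_e = 3/2, g_5S ~ 2, g_5P ~ 4/3:
    shift = zeeman_shift_mhz(50.0, 4.0 / 3.0, 1.5, 2.0, 0.5)
    print(shift / 1000.0, "GHz")  # ~700 GHz at 50 T, as quoted later in the text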
For the excited state, no analytical formula exists and a numerical approach is necessary: the Hamiltonian must be diagonalized to obtain the eigenenergies at a given magnetic field. Our magnetic field measurement is based on the existence of two eigenstates, one ground and one excited, whose energy dependence on the magnetic field is always linear. Those eigenstates will be denoted as extreme in the following. Within the magnetic field regime explored here, both ground and excited states are characterized by the electronic angular momentum J and its projection m_J along the magnetic field axis, combined with the nuclear moment I and its projection m_I. The extreme eigenstates correspond to the highest values of all these quantum numbers. The |J_g = 1/2, m_g = 1/2; I, m_I = I⟩ ground state has the following energy, derived from the Breit-Rabi formula:

E_g(B) = (A_g I)/2 + (g_5S/2 + g_I I) μ_B B,    (6)

where A_g is the dipolar hyperfine coupling of the ground state. Also for the excited state 5P3/2, in the hyperfine Paschen-Back regime the eigenstates are characterized by those quantum numbers. The |J_e = 3/2, m_e = 3/2; I, m_I = I⟩ state has the following energy, derived from the Paschen-Back formula:

E_e(B) = (3 A_e I)/2 + B_e/4 + (3 g_5P/2 + g_I I) μ_B B,    (7)

where A_e and B_e are the dipolar and quadrupolar hyperfine couplings of the 5P3/2 state. An additional regime, denoted as the fine Paschen-Back one, is reached when the electron Zeeman energy is larger than the fine structure splitting between the 5P3/2 and 5P1/2 excited states. In that regime a linear dependence on B applies to all the eigenenergies. For the Rubidium case this last regime cannot be reached with the magnetic fields presently available in Earth-bound laboratories. Notice that, owing to the smaller fine structure splitting, that regime was explored for sodium atoms in all the high field experiments with exploding wires [9][10][11][12]. Combining Eqs. (6) and (7), the frequency of the Zeeman shifted σ+ optical transition linking the linearly dependent Rb states is given by

ν_Rb(B) = ν_0^Rb + (3 A_e I)/2 + B_e/4 − (A_g I)/2 + (3 g_5P/2 − g_5S/2) μ_B B,    (8)

where the nuclear Zeeman terms cancel because m_I is unchanged. That represents the generalization of Eq. (4) to an alkali atom such as Rubidium. Inverting this relation and making use of Eq. (5) for the Rubidium case, we obtain the following relation determining the magnetic field from a measured optical frequency ν_Rb:

B = [ν_Rb − ν_0^Rb − (3 A_e I)/2 − B_e/4 + (A_g I)/2] / [(3 g_5P/2 − g_5S/2) μ_B].    (9)

C. Rubidium atomic constants

Our optical determination of the magnetic field is based on the D2 transition between the ground state 5²S1/2 and the excited state 5²P3/2 of the Rubidium isotopes, whose frequency separation ν_0^Rb at zero magnetic field is known with a 1 × 10⁻¹¹ precision [15,16]. We therefore need to know the ground and excited state atomic constants accurately. The ground state hyperfine constant is A_5S = 3.417341305452145(45) GHz for 87Rb, as measured by Bize et al. [19], while for 85Rb the value A_5S = 1.0119108130(20) GHz was reported in [13,15]. The ground state Landé g-factor was precisely measured for 87Rb by Tiedemann and Robinson in 1977 [20] with respect to g_e, the free electron g-factor. The reported ratio was R_Rb = g_5S/g_e = 1.000005876(13), measured at 5 mT. Since isotopic effects on the Rubidium g-factor were found to be less than 1 ppb [13], this ratio also applies to 85Rb within the errors. Making use of the g_e value given in [3] we obtain g_5S = R_Rb g_e = 2.002331070(26). For the 5²P3/2 excited state, the 87Rb dipolar and quadrupolar hyperfine constants carefully measured by Ye et al. [23] and reported in [16] are A_e = 84.7185(20) MHz and B_e = 12.4965(37) MHz, while for the 85Rb ones those reported in [13,15] remain the most recent determinations.
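A sketch of the field determination from the extreme σ+ line, following the reconstruction of Eq. (9) above. The constants are the 87Rb values quoted in the text; the zero-field D2 centre of gravity nu0_mhz is supplied by the user from refs. [15,16] rather than hard-coded here.

    MU_B = 13996.245042        # MHz/T
    I_NUC = 1.5                # 87Rb nuclear spin
    A_G = 3417.341305452145    # MHz, 5S1/2 hyperfine constant
    A_E = 84.7185              # MHz, 5P3/2 dipolar constant
    B_E = 12.4965              # MHz, 5P3/2 quadrupolar constant
    G_5S = 2.002331070
    G_5P = 1.3341

    def field_from_extreme_line(nu_rb_mhz: float, nu0_mhz: float) -> float:
        """Field in tesla from the measured frequency of the extreme sigma+
        transition, Eq. (9); the nuclear Zeeman terms cancel in the shift."""
        offset = 1.5 * I_NUC * A_E + B_E / 4.0 - 0.5 * I_NUC * A_G
        slope = (1.5 * G_5P - 0.5 * G_5S) * MU_B  # MHz/T, about 14 GHz/T
        return (nu_rb_mhz - nu0_mhz - offset) / slope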
A much larger uncertainty is associated with the Landé g_5P factor than with g_5S. The data of [13] point out that for all the alkali atoms, within the reported experimental errors, the g-factor of the first excited P state is ≈ 1.33411, the value predicted by Russell-Saunders coupling. For the 5P3/2 state in 87Rb, ref. [13] reported 1.3362(13) as a weighted average of all the measurements available at that time. No new measurement is available. That value is largely determined by fitting the level crossing measurements of Belin and Svanberg [22], deriving at the same time the dipolar and quadrupolar hyperfine constants of that state. We have reanalysed those level-crossing measurements by fixing the hyperfine constants to the very precise values of ref. [23] and the 87Rb nuclear magnetic moment to the value of ref. [24]. A new value g_5P3/2 = 1.3341(2) was obtained, in agreement with the Russell-Saunders prediction. Following ref. [21], we evaluate the QED and relativistic corrections to be at the level of 10⁻⁴-10⁻⁵. Therefore we will use that value in our analysis. At a magnetic field of 50 T the predicted Zeeman shift is around 700 GHz. Inserting the g-factor uncertainties and a planned 10 MHz accuracy (equivalent to 1/50 of the Doppler width) in the determination of the resonance frequency, we estimate that our determination leads to an accuracy of about 10⁻⁴ for a 50 T magnetic field. This accuracy is orders of magnitude better than the one quoted in previous high magnetic field studies of atoms [9][10][11][12]. None of these previous investigations at very high magnetic fields has included diamagnetism in its analysis. However, a diamagnetic correction should be taken into account for experiments performed at magnetic fields higher than those presented in this work. The same holds if high accuracy is the aim. Diamagnetic constants have only been measured for highly excited states of alkalis [14], and no theoretical prediction exists yet for the ground state or excited P states.

D. Magnetic field measurement uncertainty

In the following, we apply the recommendations of the Joint Committee for Guides in Metrology (JCGM) [25], and all uncertainties are given for a coverage factor of 1. Neglecting the uncertainties on μ_B, A_e, A_g and B_e, following Eq. (9) the magnetic field uncertainty u(B) is determined by two separate contributions, produced by the uncertainty u(Δν_Rb) in the laser frequency difference between the zero field transition and the one at field B, and by the uncertainties u(g_5P) and u(g_5S) of the g-factors of the upper and lower states. These uncertainties have different sources and lead to A type and B type uncertainties. Supposing the frequency reading is described by a Gaussian with variance u(Δν_Rb)², we write for the A type contribution:

u_A(B) = u(Δν_Rb) / [(3 g_5P/2 − g_5S/2) μ_B].    (11)

The g-factors are instead affected by B type uncertainties u(g_5P) and u(g_5S), with u(g_5S) being negligible with respect to u(g_5P); they produce the following B type contribution to the field uncertainty:

u_B(B) = (3/2) u(g_5P) B / (3 g_5P/2 − g_5S/2).    (12)

Within our analysis we will add the above contributions in quadrature. The very high precision associated with the measurement of optical frequencies could allow one to reach a 0.1 ppm uncertainty if the Rubidium atomic constants were all known at the same level, which is not the case at the moment, as shown in the previous subsection.

III. PULSED MAGNETIC FIELD COIL

The pulsed magnetic field coil used in this experiment is a LNCMI standard 60 T coil [26].
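The quadrature combination of the two contributions, as reconstructed in Eqs. (11)-(12) above, can be sketched as follows; the constants are those quoted in the text.

    import math

    MU_B = 13996.245042   # MHz/T
    G_5S = 2.002331070
    G_5P = 1.3341

    def field_uncertainty(u_dnu_mhz: float, u_g5p: float, b_tesla: float) -> float:
        """Quadrature sum of the A-type (frequency) and B-type (g-factor)
        contributions, Eqs. (11) and (12), coverage factor 1."""
        slope = (1.5 * G_5P - 0.5 * G_5S) * MU_B    # MHz/T
        u_a = u_dnu_mhz / slope                     # Eq. (11)
        u_b = 1.5 * u_g5p * MU_B * b_tesla / slope  # Eq. (12)
        return math.sqrt(u_a ** 2 + u_b ** 2)

    # With the planned 10 MHz frequency accuracy and u(g_5P) = 2e-4 at 50 T,
    # the budget is dominated by the g-factor (B type) term.
    print(field_uncertainty(10.0, 2e-4, 50.0))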
It consists of 24 layers of 40 turns each, composed of 9.6 mm² hard copper wire of rectangular cross section reinforced with Toyobo Zylon fibers using the distributed reinforcement technique [27]. The winding outer diameter is 270 mm and the length is 160 mm. This magnet has a 28 mm free bore diameter to perform experiments. To facilitate heat dissipation, the pulsed magnet is immersed in liquid nitrogen. To maintain the Rubidium cell at room temperature, the probe is placed in a double-walled stainless steel cryostat inserted in the magnet bore. Due to the space occupied by the insulation walls and the intermediate vacuum, the bore diameter in the magnetic field region is 21 mm. The magnet is connected to a capacitor bank and needs 10 kA, representing 3 MJ of magnetic energy, to generate 60 T. The rise time of the magnetic field is about 55 ms, and the time between two consecutive pulses at maximum field, necessary for the coil to cool down, is one hour, thanks to an annular cooling channel inserted directly in the winding [28]. Figure 1 shows a typical magnetic field pulse corresponding to a maximum field of about 59 T. The magnetic field homogeneity at the center of the field region is estimated to be better than 100 ppm over 1 mm².

IV. PROBE AND SENSOR DESIGN

The probe head is composed of the Rb cell sensor located at the end of a long pipe terminating in a chamber hosting all the electrical and optical connections, as represented in the central part of Fig. 2. The long metallic pipe is required in order to center the cell within the magnet and keep the connectors outside the magnet. The sensor overall dimensions are 40 mm length and 19 mm diameter. The probe head contains three home-made optical vacuum-type feedthroughs without fiber discontinuity, and all the electrical connections, thanks to a twelve-contact connector.

A. Sensor

The central part of the sensor is a Rubidium cell of 3 mm × 3 mm internal cross section and 30 mm length, as schematized in Fig. 3. The cell is filled with natural Rubidium, therefore containing both 85Rb and 87Rb isotopes. Laser light arrives in the cell via a single mode optical fiber (SMF IN Fiber), passing through a plano-convex lens (Lens 1) of 2 mm diameter and 4 mm focal length, to be collimated into the vapor region after reflection on an aluminum coated 45° rod mirror (Mirror 1) of 2 mm diameter. Before entering the cell, the light is polarized at 45° with respect to the magnetic field direction, to be able to induce both π and σ transitions, by a 5 mm × 4 mm Nano-Particle Glass Polarizer slab (Polarizer) of 0.26 mm thickness. Light passing through the gas, after reflection on a 3 mm diameter aluminum coated N-BK7 right angle prism mirror (Mirror 2) and after being focused by an aspheric lens (Lens 2) of 5 mm focal length, is collected by a 0.39 numerical aperture, 0.2 mm core multimode optical fiber (Transmission MM Fiber). This allows us to monitor the transmission of the Rubidium gas. The Mirror 1 mount is coupled to an external precision mount allowing Z rotation and Z translation. Mirror 2 is glued on a flexible arm allowing X and Y rotation. This flexible arm is also visible in the overall sketch of the structure hosting the sensor reported in Fig. 4 top. At resonance, Rubidium atoms absorb photons by changing their internal state from the ground level to the excited one. This excitation energy is then released as fluorescence. A particularity of our microfabricated magnetometer is to collect part of this fluorescence in a solid angle of about 4π/50 sr.
The fluorescence detected in this way, generated from an atomic volume of about 0.13 mm³ containing around one million atoms, is focused by a plano-convex lens (Lens 3) of 2.5 mm diameter and 2 mm focal length, which is situated 3.1 mm from the end of the optical fiber, 2.3 mm from the Rb cell and 4.3 mm from its center, as sketched in the bottom of Fig. 3. The fluorescence light is then collected by a 0.39 numerical aperture, 0.2 mm core multimode optical fiber (Fluorescence MM Fiber) after being reflected by an aluminum coated 45° rod mirror (Mirror 3) of 2 mm diameter. Since we will mainly use the fluorescence signal to determine the magnetic field, the volume of vapor at the origin of the fluorescence signal also gives the spatial sensitivity of our system. The fluorescence module, sketched in the bottom part of Fig. 3, constitutes a separate part that is aligned before operation, using light propagating in the opposite direction so as to have a focus at about 4 mm from Lens 3. Thanks to the screw threading shown in the bottom of Fig. 3, an external precision mount is coupled to the fluorescence module, allowing a proper positioning of the module along the y and z axes. Once transmission and fluorescence are aligned, all the optics is glued to the PLA structure and the external precision mounts are removed. The flexible arm on which Mirror 2 is mounted, allowing its rotation, is also glued. All the above optical elements are hosted on a structure built by PolyLactic Acid (PLA) fused filament deposition with a 3-D printer, as shown in Fig. 4 top and bottom.

B. Sensor temperature control

During operation, the sensor is placed in a cryostat inserted in the pulsed field magnet, as explained before. This kind of magnet is cooled with liquid nitrogen, and even if the cryostat has a good level of thermal insulation, the temperature at the position of the sensor can be several degrees below 0 °C. A heating system (Heater 1 in the top of both Fig. 3 and Fig. 2) is placed in contact with the cell to control the Rubidium temperature. During standard operation the power consumption of this heater is around 200 mW. A second heater (Heater 2 in Fig. 4), driven by about 100 mW of power, surrounds the whole sensor and is used in parallel with the first one to stabilize the gas temperature. It also participates in the effort to keep the Rubidium temperature around 30 °C, as measured by the temperature sensor located in contact with the gas cell. All the mounted heaters are fabricated by winding 0.1 mm diameter wires of manganin alloy. A third heater, consisting of a hot air flow injected through the probe (see bottom part of Fig. 2), is added to the head to increase the total heating power of the sensor.

V. PICK-UP COIL CALIBRATION

Pulsed magnetic fields are usually monitored with in situ pick-up coils. One of them, consisting of 21 turns of copper wire, is therefore also hosted by the PLA structure (see Fig. 4). It is wound on an insulating mandrel designed so that the pick-up coil gives a signal proportional to the time variation of the magnetic field flux through a surface perpendicular to the field direction. Its frequency response corresponds to a bandwidth larger than 500 kHz. For practical reasons the pick-up coil is situated at a distance of 7.5 mm from the volume of gas from which the fluorescence originates. To obtain the time profile of the magnetic pulse one has to integrate the pick-up signal. Once calibrated, a pick-up coil can also be used as a magnetometer.
To this purpose, the evaluation of the total area of this pick-up coil is carried out by inserting it in the magnetic field provided by a calibration solenoid, whose geometrical properties are summarized in Table I. The first layer of the calibration solenoid is wound on a glass fabric/epoxy tube and fixed with epoxy. The second layer, also fixed with epoxy, is wound on the first layer after rectification of the additional fixation epoxy, to obtain a diameter as regular as possible. The solenoid sketch is shown in Fig. 5. Using textbook formulas for a solenoid of finite length, and taking into account the experimental error on the construction parameters, the field at the center of the solenoid on the symmetry axis is such that the ratio R_B/I between the obtained field and the driving current is 7.253(3) mT/A. During the calibration of the pick-up coil, the solenoid is driven by an alternating current of the order of 40 mA, at frequencies varying in the range of several tens of Hz. The value of the driving current is measured with a commercial instrument whose accuracy is 0.06 %. The signal at the ends of the pick-up coil is demodulated using a lock-in amplifier. The accuracy of this instrument for voltage measurements is 0.2 %. This is the limiting accuracy for the pick-up coil calibration. The measured product of the number of turns times the pick-up surface is 0.005215(10) m². This value of the pick-up coil equivalent surface is used to recover the magnetic field value of the pulsed magnet, which is therefore given with respect to the one calculated for the calibration solenoid. An important point is that we assume that the radial inhomogeneity of the magnetic field in the calibration solenoid can be neglected, so that the field is constant over the whole pick-up surface. The homogeneity of the calibration solenoid is expected to be a fraction of a ppm. The homogeneity of the 60 T pulsed magnet is such that a correction has to be considered when comparing the field given by the pick-up coil with the one given by the Rb sensor, since the latter is only sensitive to the magnetic field inhomogeneity on a scale shorter than a millimeter, i.e. about 20 ppm. In fact, to recover the magnetic field from the pick-up coil signal one assumes that the field is constant over the whole pick-up surface, which is not exact for the 60 T pulsed magnet. The radial profile of the field is parabolic and the field is slightly higher at the border of the pick-up coil than at its center. Under the constant-field assumption, the field value inferred from the pick-up coil has therefore been evaluated to be about 0.1 % bigger than the one at its center, to which the Rb sensor is sensitive.

VI. EXPERIMENTAL SET-UP

A view of the whole experimental set-up is shown in Fig. 6. The light beam coming from a DLX Toptica laser is sent to 1) a reference Rb cell contained within a mu-metal shield, 2) a commercial wavemeter monitoring its wavelength continuously, 3) a single-mode fiber transporting it to the sensor after passing through a half wave plate (HWP). The HWP rotation modifies the light polarization in order to control the light transmitted by the input polarizer shown in Fig. 3 top. The transmission from the reference cell is detected by a photodiode (Ph1), while the transmission of the sensor Rb cell and the fluorescence are monitored by photodiodes Ph2 and Ph3. Ph1 and Ph2 are standard silicon photodiodes. Ph3 is a low noise variable gain photoreceiver with a −3 dB optical-electrical bandwidth of 7 kHz and a gain of 10¹⁰ V/W.
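A minimal sketch of the textbook on-axis field of a finite solenoid used in this kind of calibration; the geometry below is an illustrative placeholder, since Table I is not reproduced here.

    from math import pi, sqrt

    MU0 = 4e-7 * pi  # T*m/A

    def solenoid_center_field(n_turns, length_m, radius_m, current_a):
        """On-axis field at the centre of a finite solenoid,
        B = mu0 * n * I * cos(alpha), with n the turn density."""
        n_density = n_turns / length_m
        half = length_m / 2.0
        cos_alpha = half / sqrt(half ** 2 + radius_m ** 2)
        return MU0 * n_density * current_a * cos_alpha

    # Hypothetical two-layer geometry, chosen only to illustrate the B/I ratio:
    ratio = solenoid_center_field(n_turns=1200, length_m=0.2, radius_m=0.02,
                                  current_a=1.0)
    print(ratio * 1e3, "mT/A")  # same order as the quoted 7.253(3) mT/A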
All these signals are stored in a computer (Control PC) via a Hioki oscilloscope (Hioki 2), which also monitors the trigger signal given by the Capacitor Bank Control delivering the optical trigger that starts the magnetic pulse. All the instruments within the dashed lines of Fig. 6 are actually inside a sealed box, not accessible during magnet operation for safety reasons. The connection between the box and the outside takes place via another Hioki oscilloscope (Hioki 1) under control of the Control PC. The two Hioki oscilloscopes are synchronized at the microsecond level. Note that a heater on the internal side of the cryostat tail, indicated as Heater 4 in Fig. 6, is used to obtain a better control of the whole probe temperature. It consists of a constantan wire wrapped around the inner tube in the insulating vacuum of the cryostat, with a 3 W heating power.

VII. RESULTS

We recorded Rubidium fluorescence spectra and pick-up signals during several magnetic field pulses with different maximum fields. Figure 7 shows a typical data acquisition, with the full temporal record on the top and an expanded view around the maximum magnetic field on the bottom. In black, the magnetic field strength derived from the pick-up signal is reported; it records the temporal shape of the magnetic field pulse. The blue traces in Fig. 7 show the fluorescence signal. We observe four narrow Rubidium resonance peaks, two of them during the rise of the field and the other two during the decreasing phase of the pulse. Each peak is composed of the superposition of four resonances, resolved in the expanded view on the bottom of that figure. The resonances appearing at higher magnetic field correspond to the Rb σ+ transitions |J_g = 1/2, m_g = 1/2⟩ → |J_e = 3/2, m_e = 3/2⟩, with a structure produced by the nuclear Zeeman splitting. The resonances observed at lower magnetic field correspond to other σ+ transitions. The bottom part of Fig. 7 reports a zoom of the higher field data presented in the top part. We distinguish four resonances for rising and descending magnetic field. These resonances correspond to the 87Rb transitions listed in Table II, ordered by increasing magnetic field strength. In addition, approximately at the center of the group of four 87Rb resonances, there are six resonances of 85Rb, unresolved because their mutual separation is smaller than the Doppler width. Their presence produces an almost flat offset for the two central 87Rb resonances. In principle this fact could affect the position of the observed center of the involved resonances. As pointed out previously, the transition between |J_g = 1/2, I = 3/2, m_g = 1/2, m_I = 3/2⟩ and |J_e = 3/2, I = 3/2, m_e = 3/2, m_I = 3/2⟩ experiences a linear frequency shift for any value of the magnetic field. Following Eq. (8), at the given laser frequency, the center of the considered resonance appears at the magnetic field reported in the bottom line of Table II. By combining this magnetic field calibration with the pick-up coil signal, we derive the magnetic field position of all four high-field resonances, with their uncertainties derived from the combination of Eqs. (11) and (12). Using these data we scale the temporal profile of the magnetic field pulse given by the pick-up coil; thus we obtain the magnetic field strength during the whole duration of the pulse with an accuracy of about 2×10⁻⁴, more than one order of magnitude better than with the calibrated pick-up coil. This accuracy is limited by the knowledge of the g_5P constant and not by our experimental method.
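A sketch of the integration and rescaling step: the pick-up voltage is integrated to obtain the pulse profile, and the whole profile is then multiplied by the correction factor fixed by the Rb resonance positions. The sign convention and function names are our own assumptions.

    import numpy as np

    def field_from_pickup(t_s, v_pickup_v, effective_area_m2, scale=1.0):
        """Time profile B(t) from a pick-up coil: V = -A_eff * dB/dt, so B(t)
        is the running integral of -V/A_eff. `scale` is the overall factor
        fixed by the Rb resonances (about 2e-4 accuracy in the text)."""
        t = np.asarray(t_s, dtype=float)
        v = np.asarray(v_pickup_v, dtype=float)
        dt = np.diff(t)
        # trapezoidal running integral of -V/A_eff
        increments = -(v[1:] + v[:-1]) / 2.0 * dt / effective_area_m2
        return scale * np.concatenate(([0.0], np.cumsum(increments)))

The measured equivalent surface 0.005215(10) m² quoted above would be passed as effective_area_m2.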
The value of the magnetic field given by our Rb sensor, B_Rb, has been compared to the one provided by the pick-up coil, B_PU, at several values of the magnetic field. The B_PU values have been obtained by taking into consideration the effect due to the size of the pick-up coil, explained in a previous section, and the fact that the pick-up coil is located 7.5 mm above the probed atomic volume. Because of the pick-up position with respect to the Rb sensor, the value given by the pick-up coil is 0.2 % smaller than the Rb one, as inferred from both the calculation of the longitudinal magnetic field profile of the coil and a direct measurement performed by moving the pick-up coil along the axis of the magnet. Figure 8 reports the value of the magnetic field B_PU measured by the pick-up coil as a function of the value B_Rb given by the Rubidium spectra. The linear relation is obtained by fitting the data, taking into account the uncertainty of B_PU given by the pick-up coil calibration and the pick-up coil signal measurement. The uncertainty due to the pick-up calibration is 0.2 % as explained before, and the uncertainty of the measurement of the pick-up signal is about 0.1 %, due to the voltage measurement. The final uncertainty is therefore 0.22 %. Following the linear fit shown in Fig. 8, the value of B_PU for B_Rb = 0 is compatible with zero within the error. The B_PU value can therefore be inferred from the B_Rb measurement thanks to the equation B_PU = 1.0009(6) B_Rb, which demonstrates the good agreement between the pick-up coil and our Rubidium sensor measurements.

VIII. CONCLUSIONS

Our Rb sensor shows uncertainties that are already more than an order of magnitude better than those of standard pick-up coils, and it also gives direct access to a micrometer-scale probed region. It is worth stressing that our experiment leads to a metrological measurement as far as accuracy is concerned. Indeed, our measurement is a conversion between the tesla unit and the frequency unit, which can be related to the standard of time. This makes it a very interesting candidate to establish a secondary standard for the definition of the tesla unit, also taking into account that the accuracy of our system will improve in a straightforward way once a more accurate value of the g-factor of the excited states of Rubidium is available. In any case, our accuracy is much better than the one obtained in the measurement of high magnetic fields in the case of destructive fields, which is limited to about 10 %. Our work may be compared with the recent work by the Raithel research group [29], where the sub-Doppler features of electromagnetically induced transparency for two-photon transitions to Rydberg states were used for magnetic field measurements. The sub-Doppler spectroscopy increases the resolution by a factor of one hundred. However, the low accuracy of all the atomic constants associated with the Rydberg states cannot be compared to that of the ground and first-excited states. In our set-up the observation of sub-Doppler absorption features relies on technical improvements realizable in the near future. Another straightforward way to improve our system is to move to a cell containing a single isotope of Rubidium, to simplify the shape of the spectroscopy signal. As for the perspectives in physics measurements, we have succeeded in observing Rubidium transitions at more than 58 T, which is a world record as far as non-destructive field generation is concerned, with an uncertainty per pulse never reached before.
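The linear comparison of Fig. 8 amounts to a weighted least-squares fit; a minimal sketch, assuming a straight-line model b_pu = a + k * b_rb with 1/u² weights, is shown below.

    import numpy as np

    def fit_pu_vs_rb(b_rb, b_pu, u_pu):
        """Weighted least-squares fit b_pu = a + k * b_rb with 1/u^2 weights,
        mirroring the comparison of Fig. 8. Returns (a, k) and their
        1-sigma uncertainties from the parameter covariance."""
        b_rb = np.asarray(b_rb, dtype=float)
        b_pu = np.asarray(b_pu, dtype=float)
        w = 1.0 / np.asarray(u_pu, dtype=float) ** 2
        A = np.column_stack([np.ones_like(b_rb), b_rb])
        cov = np.linalg.inv((A.T * w) @ A)     # 2x2 parameter covariance
        a, k = cov @ ((A.T * w) @ b_pu)
        return (a, k), np.sqrt(np.diag(cov))

A slope consistent with 1.0009(6) and an intercept compatible with zero reproduce the agreement reported in the text.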
This opens the way to optical tests on dilute matter in high magnetic fields, such as precise measurements of g-factors of excited states at a level relevant for testing Quantum Electrodynamics predictions. On the way toward a sensor of very high accuracy, which looks possible with our method, a measurement of the Rubidium diamagnetism will be necessary, and our techniques combining atomic spectroscopy with non-destructive pulsed high magnetic fields will certainly play a role in this domain as well.

IX. ACKNOWLEDGMENTS

This research has been partially supported through NEXT (Grant No. ANR-10-LABX-0037) in the framework of the "Programme des Investissements d'Avenir". E.A. acknowledges financial support from the "Chaires d'Excellence Pierre de Fermat" of the "Conseil Régional Midi-Pyrénées", France. S.S. acknowledges financial support from the "Université Franco Italienne". The authors thank M. Badalassi (INO-CNR, Pisa) and N. Puccini (Università di Pisa) for developing and testing the mini-cell of the present investigation, and F. Thibout for filling the cell with Rubidium at the "Laboratoire Kastler-Brossel".
Tensor Factorization based Estimates of Parallel Wiener-Hammerstein Models

Factoring the third-order Volterra kernel of a Wiener-Hammerstein model to recover the impulse responses of its two constituent linear systems is a common example in the multilinear algebra literature. Since recent progress in regularization-based system identification has enabled the practical estimation of the third-order Volterra kernel, these tensor factorization based approaches have become attractive. We extend one of these Wiener-Hammerstein factorization methods to the case of the Parallel Wiener-Hammerstein model since, unlike the W-H model, this structure is a universal approximator for Volterra systems. The efficacy of the method is demonstrated using numerical simulations.

INTRODUCTION

System identification is the process of extracting mathematical models of dynamic systems from measurements of their inputs and outputs. The identification of linear systems is a well studied problem, and is documented in textbooks including those by Pintelon and Schoukens (2001) and Ljung (1999). Physical systems are inherently nonlinear. When using linear identification tools, the effects of nonlinear distortion must be minimized, but they are always present (Schoukens et al., 2016). The Volterra series, which consists of a series of generalized multi-dimensional convolutions between the input and a series of multi-dimensional generalized impulse responses, provides a general model capable of approximating any fading memory nonlinear system to within arbitrary accuracy (Boyd and Chua, 1985). Although the model is linear in the parameters, the number of parameters grows combinatorially with the memory length of the system and with the degree of the nonlinearity (Westwick and Kearney, 2003). Block-structured models, consisting of one or more branches of alternating dynamic linear and memoryless nonlinear systems, are widely used to model nonlinear systems (Giri and Bai, 2010). The simplest of these are the Hammerstein and Wiener models. The Hammerstein model consists of a memoryless nonlinearity followed by a dynamic linear system (NL), while in the Wiener system the order of the elements is reversed (LN). The Wiener-Hammerstein (W-H) system is made up of two dynamic linear systems separated by a memoryless nonlinearity (LNL). The third-order Volterra kernel of a W-H system can be factorized using tensor operations to yield the impulse responses of the two linear elements (Kibangou and Favier, 2010), (Favier et al., 2012), (de Goulart et al., 2016). While these works did not consider the difficulty inherent in estimating high-order Volterra kernels, recent progress in the development of regularization-based system identification methods has made this plausible (Birpoutsoukis and Schoukens, 2015). Unlike the Volterra series, the W-H model is not a universal approximator, and has limited applicability as a result. Boyd and Chua (1985) showed that the parallel Wiener system is a universal approximator, in that it can represent any system that admits a Volterra series representation to within arbitrary accuracy. Palm (1979) suggested that the Parallel Wiener-Hammerstein (PWH) system, consisting of multiple paths each with the LNL structure, would provide a more efficient representation, but was not aware of any methods for its identification. In this paper, we will extend one of the non-iterative tensor factorizations proposed by de Goulart et al. (2016) to the case of a PWH model.
The balance of the paper is organized as follows. This section will conclude with an overview of the notation used throughout the paper. Section 2 will present background material on nonlinear system models and tensor decompositions, and will conclude with an overview of the algorithm proposed by de Goulart et al. (2016). Section 3 will propose an extension of the algorithm to the PWH model. Sections 4 and 5 will present the results of some numerical experiments, and conclude the paper, respectively.

Notation

Lower and upper case letters in a regular typeface, a, B, will refer to scalars; bold faced lower case letters, a, refer to vectors; matrices are indicated by bold faced upper case letters, A; and bold faced calligraphic script will be used for tensors, A. The Kronecker product will be denoted by ⊗, and ⊙ will be used for the Khatri-Rao (or column-wise Kronecker) product. The outer, or tensor, product will be denoted ∘. The operator diag(a) constructs a diagonal matrix with the entries of the vector a on its diagonal, while vec(A) stacks the columns of the matrix A into a vector.

Nonlinear System Models

A wide variety of nonlinear systems (Boyd and Chua, 1985) may be represented by a finite Volterra series model:

y(t) = Σ_ℓ Σ_{τ1=0}^{M−1} … Σ_{τℓ=0}^{M−1} h^(ℓ)(τ1, …, τℓ) u(t−τ1) … u(t−τℓ),    (1)

where u(t) and y(t) are the input and output at (discrete) time t, respectively, and M is the memory length of the system. The output is computed by a series of multidimensional convolutions with the system's Volterra kernels, h^(ℓ)(τ1, …, τℓ), where ℓ is the degree of nonlinearity of the kernel. While the Volterra series is a very general model for nonlinear systems, it is difficult to work with. Block structured models (Giri and Bai, 2010), sequences of alternating dynamic linear and static nonlinear elements, have several advantages. They have relatively few parameters, and are hence easy to simulate and analyze. They can also provide insight into the structure of the underlying system. As shown in Fig. 1, a W-H model comprises two LTI filters separated by a memoryless nonlinearity. In this paper, these will be represented by FIR filters, g(τ) and h(τ), with memory lengths of M1 and M2 samples, respectively. The nonlinearity will be modelled as a polynomial with coefficients c_ℓ, to facilitate the computation of the system's Volterra kernels. Thus, the output of the system is given by:

y(t) = Σ_{τ=0}^{M2−1} h(τ) Σ_ℓ c_ℓ [ Σ_{σ=0}^{M1−1} g(σ) u(t−τ−σ) ]^ℓ.    (2)

The two linear elements are represented by their impulse responses, g(τ) and h(τ). Comparing (2) and (1), it can be shown that the ℓ'th degree Volterra kernel of the W-H model is given by (Rugh, 1981):

h^(ℓ)(τ1, …, τℓ) = c_ℓ Σ_{σ=0}^{M2−1} h(σ) g(τ1−σ) … g(τℓ−σ).    (3)

Canonical Polyadic Decomposition

Let H3 be a third-degree tensor that contains the third-degree Volterra kernel of a system. It is symmetric with respect to any interchange of its indices, and has dimensions M × M × M, where M is the memory length of the system. This tensor can be represented as a sum of R rank-1 terms, known as a polyadic decomposition (Kolda and Bader, 2009):

H3 = Σ_{r=1}^{R} a_r ∘ b_r ∘ c_r.    (4)

Such polyadic decompositions can be constructed for tensors of higher degrees. These are generalizations of the SVD of a matrix, although unlike the SVD, the vectors need not be orthogonal to each other. Defining the factor matrices as A = [a_1 … a_R], B = [b_1 … b_R] and C = [c_1 … c_R], the decomposition in (4) will be indicated by the shorthand notation:

H3 = [[A, B, C]].    (5)

Note that due to the symmetry of the tensor (kernel), all of the factor matrices can be chosen to be equal. The rank of a tensor is defined as the minimal R such that the decomposition (4) holds. If R is minimal, then (4) is called the Canonical Polyadic Decomposition (CPD) of the tensor H3.
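As an illustration of Eqs. (2) and (3), the sketch below simulates one W-H branch and builds its third-degree kernel directly from g, h and c3; the function names and the ascending-coefficient convention for the polynomial are our own choices, not the paper's.

    import numpy as np

    def wh_output(u, g, h, c):
        """Simulate a Wiener-Hammerstein branch, Eq. (2): FIR g -> polynomial
        with ascending coefficients c -> FIR h."""
        x = np.convolve(u, g)[: len(u)]      # first linear element
        z = np.polyval(c[::-1], x)           # memoryless polynomial
        return np.convolve(z, h)[: len(u)]   # second linear element

    def wh_third_kernel(g, h, c3):
        """Third-degree Volterra kernel of a W-H branch, Eq. (3) with l = 3:
        h3[t1,t2,t3] = c3 * sum_s h[s] g[t1-s] g[t2-s] g[t3-s]."""
        M1, M2 = len(g), len(h)
        M = M1 + M2 - 1
        G = np.zeros((M, M2))
        for s in range(M2):                  # shifted copies of g (Toeplitz)
            G[s : s + M1, s] = g
        return c3 * np.einsum("s,is,js,ks->ijk", h, G, G, G)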
The CPD has been independently discovered, and named both the CANDECOMP and the PARAFAC, among others (Kolda and Bader, 2009). For tensors of degree 3 or higher, the CPD has been shown to be unique under mild conditions, modulo changes in the ordering of the rank-one terms in (4), and with respect to the distribution of the scaling between the elements in each of the rank-one terms.

CPD of a Wiener-Hammerstein Model

The CPD of the third-order Volterra kernel of a W-H system, shown in (3), may be written as (de Goulart et al., 2016):

H3 = [[G1, G1, G1 diag(c3 h1)]],    (6)

where h1 is a vector containing the second impulse response, h(τ), and the matrix G1 ∈ ℝ^(M×M2) is Toeplitz structured, with entries derived from the impulse response of the first linear element, g(t), as follows:

G1(i, j) = g(i − j), with g(τ) = 0 for τ < 0 or τ ≥ M1.    (7)

Note that the last factor matrix in (6) has been post-multiplied by a diagonal matrix, which scales the terms in (4). The scaling could have been applied to any one of the factor matrices. Let the vector g1 contain the impulse response g(t). The matrix unfolding of the third kernel of a W-H model, given by the CPD in (6), can be written as (de Goulart et al., 2016):

X = G1 diag(c3 h1) (G1 ⊙ G1)^T.

Note that due to the symmetry of the kernel, the tensor may be unfolded along any dimension.

Lemma 2.1. Provided that neither g(t) nor c3 is equal to zero, the rank of the matrix unfolding of the third-degree kernel of a W-H system is equal to the number of non-zero elements in the second impulse response, h(t).

Proof. Since g1^T g1 ≠ 0, both G1 and G1 ⊙ G1 will have full column rank (M2). Thus each non-zero entry in h1 will contribute a linearly independent row and column to the matrix X. □

Compute the SVD of the matrix unfolding, X, and partition it:

X = [U1 U2] [S1 0; 0 S2] [V1 V2]^T,    (8)

such that S1 ∈ ℝ^(M2×M2). Since the rank of X is bounded by M2, X = U1 S1 V1^T. Then, there exists an invertible matrix N ∈ ℝ^(M2×M2) such that (Sørensen and Comon, 2013; de Goulart et al., 2016):

G1 ⊙ G1 = V1 N.    (9)

The Toeplitz-structured factor matrix G1 can be written as a basis expansion:

G1 = Σ_{k=1}^{M1} g1(k) Dk,    (11)

where Dk is an M × M2 matrix whose entries are given by the Kronecker delta:

Dk(i, j) = δ(i − j − k + 1),    (12)

and where

θ = vec(g1 g1^T).    (14)

Applying the identity vec(ABC) = (C^T ⊗ A) vec(B) to N^T V1 and substituting into (9) results in a set of equations that is homogeneous and linear in the entries of N and θ, and which may be collected as

M [vec(N^T)^T θ^T]^T = 0.    (15)

This must be solved for N and θ. Let

M = U_M S_M V_M^T    (16)

be the SVD of M. Given the uniqueness results in (Sørensen and Comon, 2013), the last column of V_M will be proportional to [vec(N^T)^T θ^T]^T. The last M1² elements of this singular vector correspond to θ, which can be reshaped into an M1 × M1 matrix,

Gθ = unvec(θ).    (17)

Ideally, Gθ will be a rank-1 matrix, see (14), from which an estimate of g1 can be extracted using an SVD.

PARALLEL WIENER-HAMMERSTEIN MODELS

The Parallel Wiener-Hammerstein model is obtained by summing the outputs of several Wiener-Hammerstein models, as shown in Fig. 2. Similarly, the Volterra kernels of a PWH model are obtained by summing the kernels of the individual branches, so that

h^(ℓ)(τ1, …, τℓ) = Σ_{p=1}^{P} c_{ℓ,p} Σ_σ h_p(σ) g_p(τ1−σ) … g_p(τℓ−σ),    (18)

where the subscript p indexes the paths in the model. Let Gk be a Toeplitz matrix whose columns are shifted copies of the impulse response g_k(t), as in (7), and define the matrices:

G = [G1 G2 … GP],  h = [c_{3,1} h1^T c_{3,2} h2^T … c_{3,P} hP^T]^T.    (19)

Then the third-order kernel can be written as the following, not necessarily canonical, polyadic decomposition:

H3 = [[G, G, G diag(h)]].    (20)

Using this polyadic decomposition, any matrix unfolding of the symmetric third-order kernel can be written as

X = G diag(h) (G ⊙ G)^T.    (21)

Lemma 3.1. The rank of X, the matrix unfolding of the third-order kernel of a PWH model, is less than or equal to the least of: M, P·M2, or the number of non-zero entries in h.
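A quick numerical check of Lemma 2.1, using the same Toeplitz construction as Eq. (7); the relative tolerance used to count singular values is an arbitrary choice of ours.

    import numpy as np

    def unfolding_rank(kernel, rel_tol=1e-10):
        """Numerical rank of the first-mode unfolding of a symmetric
        third-order kernel; by Lemma 2.1 it should equal the number of
        nonzero taps of h."""
        M = kernel.shape[0]
        X = kernel.reshape(M, M * M)
        s = np.linalg.svd(X, compute_uv=False)
        return int(np.sum(s > rel_tol * s[0])), s

    g = np.random.default_rng(0).standard_normal(32)
    h = np.array([1.0, 0.5, 0.0, 0.2, -0.1])   # one zero tap -> expected rank 4
    M1, M2 = len(g), len(h)
    G = np.zeros((M1 + M2 - 1, M2))
    for s_ in range(M2):
        G[s_ : s_ + M1, s_] = g
    kernel = np.einsum("s,is,js,ks->ijk", h, G, G, G)
    print(unfolding_rank(kernel)[0])            # -> 4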
Proof. From (7), each group of M2 columns in G will have full rank, provided that the corresponding impulse response, g_k(τ), is not identically zero. Thus, the rank of G ∈ ℝ^(M×P·M2) will be the lesser of M and P·M2. The rank of G ⊙ G will be greater than or equal to that of G. The number of non-zero entries in h gives the rank of the diagonal matrix. The rank of X is less than or equal to the lowest of these ranks. □

Let R = min(M, P·M2), so that R ≥ rank(X). Then X = U1 S1 V1^T, where S1 ∈ ℝ^(R×R), and let N be an invertible R × R matrix such that

G ⊙ G = V1 N.    (22)

Since the Khatri-Rao product is the column-wise Kronecker product, each block of M2 columns of G ⊙ G equals G_k ⊙ G_k. Since each of the G_k is a Toeplitz structured matrix, the basis expansion in (11) may be used. Thus

G_k ⊙ G_k = Σ_{i,j} g_k(i) g_k(j) (D_i ⊙ D_j),

where the matrix D was defined in (12) and θ_k = vec(g_k g_k^T). Thus, for each submatrix N_k (the k'th block of M2 columns of N), we obtain a homogeneous system of the same form as (15),

M [vec(N_k^T)^T θ_k^T]^T = 0.    (23)

Therefore each pathway in the model will generate one dimension in the null-space of the matrix M, as defined in (15). Thus, compute the SVD in (16), and let Θ comprise the last M1² entries from each of the last P columns of the matrix V_M. These contain the entries of the null-space basis corresponding to the parameter vectors θ_k. Due to the use of the SVD, each column of Θ will be a linear combination of the θ_k. Thus

Θ(:, i) = Σ_{k=1}^{P} w_k(i) θ_k

for some weighting vectors w_k. Reshape Θ into an M1 × M1 × P tensor T, by creating an M1 × M1 matrix from each of the P columns in Θ and then stacking them:

T(:, :, p) = unvec(Θ(:, p)), p = 1, …, P.    (24)

In summary: (1) compute and partition the SVD of the matrix unfolding of the kernel as described in (8); (2) form the matrix M of (15) and compute its SVD (16); (3) reshape the last P right singular vectors into the tensor of (24) and compute its rank-P CPD, whose factor vectors estimate the g_k, up to scaling and permutation.

SIMULATIONS

The algorithm described in Section 3.1 was tested on the third-order Volterra kernel of a PWH model with 3 branches, where the first linear elements, g_k(τ), had memory lengths of 32 samples, and the output filters, h_k(τ), each had a memory of 5 samples. Thus, the third-degree kernel was of dimension 36 × 36 × 36 elements. In the first simulation, the algorithm was applied to a noise-free copy of the kernel. This was followed by a series of Monte-Carlo simulations where white Gaussian noise was added to the kernels at Signal-to-Noise Ratios of 40, 30, 20 and 10 dB. 100 trials were performed at each noise level. All computations were done using Tensorlab v3.0 (Vervliet et al., 2016). Fig. 3 shows the singular values of the matrix unfolding of the third-order kernel. Since all 3 of the h_k(t) had memory lengths of 5 samples, a rank of 15 is expected. In the noise-free case, shown with blue asterisks, there is a gap of about 8 orders of magnitude between the 15'th and 16'th singular values, while there is little decay beyond the 16'th singular value. This suggests that these represent the noise floor, and that the matrix has rank 15, corresponding to the structure that was simulated.

Fig. 5. Simulated impulse responses, g_k(t), from the first filter bank and their estimates obtained from the noise-free kernel. Note that the first two impulse responses were interchanged in the estimates. For comparison purposes, all impulse responses have been scaled to unit norm, with the largest peak in the positive direction.

First Linear Elements

The tensor constructed from the null-space, as described in (24), had a rank of three, as a three element CPD reconstructed the tensor to within machine precision. The rank-3 CPD produced excellent estimates of the g_k(t), to within a rearrangement of the columns and scaling of the vectors, as illustrated in Fig. 5.
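The rank-P CPD of the small M1 × M1 × P tensor of (24) can be computed with any tensor toolbox (the paper uses Tensorlab); the sketch below uses a plain alternating-least-squares iteration written from scratch so that the only dependency is NumPy. The function name and the fixed iteration count are our own choices, not the paper's.

    import numpy as np

    def cpd_als(T, R, n_iter=500, seed=0):
        """Plain ALS CPD of a third-order tensor T into R rank-one terms;
        a minimal stand-in for the Tensorlab call used in the paper."""
        rng = np.random.default_rng(seed)
        dims = T.shape
        A = [rng.standard_normal((d, R)) for d in dims]
        unf = [T.reshape(dims[0], -1),
               np.moveaxis(T, 1, 0).reshape(dims[1], -1),
               np.moveaxis(T, 2, 0).reshape(dims[2], -1)]

        def khatri_rao(X, Y):
            # column-wise Kronecker product, consistent with the unfoldings
            return (X[:, None, :] * Y[None, :, :]).reshape(-1, R)

        for _ in range(n_iter):
            for n in range(3):
                idx = [i for i in range(3) if i != n]
                KR = khatri_rao(A[idx[0]], A[idx[1]])
                A[n] = unf[n] @ np.linalg.pinv(KR.T)
        return A

Applied to the tensor T of (24) with R = P, the columns of the recovered factor matrices estimate the g_k up to the usual scaling and permutation ambiguities.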
In each trial of the Monte-Carlo simulations, noise was added to the third-order kernel. Since the kernel is symmetric with respect to any interchange of its arguments, this symmetry was imposed on the noise: the same noise sample was added to each symmetric point in the kernel. Other than that, the noise samples were independent. The effects of this noise are evident in Fig. 3, where the noise floor has increased from a level of 10⁻¹⁶ in the noise-free experiment (blue asterisks) to 10⁻³ with 30 dB of noise (red crosses), and 10⁻² with 10 dB (green plusses). As a result, the gap between the signal and noise subspaces has vanished, and 7 and 10, respectively, of the smallest singular values of the signal subspace have disappeared below the noise floor. Thus, the information that was present in the singular vectors associated with these small singular values has been spread into the noise subspace.

Fig. 6. Impulse responses, g_k(t), from the first filter bank, and their estimates obtained from kernels corrupted with 30 dB of noise. The plot shows the 99% confidence intervals obtained from the Monte-Carlo simulations. Note that the bias is more significant than the variance of the estimates.

As a result, the null-space in (23) was also poorly characterized (not shown), as its singular values were only a factor of 2-3 smaller than the remaining singular values, as opposed to the difference of 10⁸ evident in Fig. 4. The resulting parameter tensor was not well approximated by a rank-3 CPD. Figure 6 shows the impulse responses of the first filter bank, estimated from the 30 dB simulations. It compares the simulated impulse responses with the mean plus or minus 2.5 standard deviations of the ensemble of estimates from the Monte-Carlo simulation. While the noise has produced some variability in the impulse response estimates, this is much less significant than the resulting bias error. This bias error appears to be due to the effective truncation of the SVD of the matrix unfolding of the kernel (see Fig. 3). The results from the 10 dB Monte-Carlo simulation are shown in Fig. 7. Note that the variance is more significant in this case. The results of all of the Monte-Carlo simulations are summarized in Table 1, which reports the systematic and random errors in the impulse response estimates. In all cases, the systematic bias, due to the truncation of the signal subspace, is much more significant than the random errors.

CONCLUSION

This paper describes an algorithm that extracts the impulse responses of the linear elements of a parallel Wiener-Hammerstein model from its third-order Volterra kernel. This is an extension of the subspace based algorithm proposed by de Goulart et al. (2016), which obtains the linear elements of a Wiener-Hammerstein model from its third-order kernel. The algorithm proposed by de Goulart et al. (2016) requires the computation of 2 SVDs and a single linear regression, and hence provides a closed-form solution without the need for an iterative optimization. In the algorithm developed in this paper, one of the SVDs must be replaced by the computation of the Canonical Polyadic Decomposition of a tensor. Since iterative methods, such as alternating least squares, are used to compute the CPD, our algorithm is technically an iterative algorithm. Nevertheless, the CPD involves a small tensor, and does not impose any structure on the result. As such, it can be expected to function reliably.
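A sketch of the symmetrized noise generation described above; defining the SNR through the Frobenius-norm ratio is one common convention and an assumption on our part.

    import numpy as np

    def add_symmetric_noise(kernel, snr_db, seed=0):
        """Add white Gaussian noise that respects the kernel's permutation
        symmetry: one sample per unordered index triple, copied to all of
        its permutations, then scaled to the requested SNR."""
        rng = np.random.default_rng(seed)
        M = kernel.shape[0]
        raw = rng.standard_normal(kernel.shape)
        sym = np.zeros_like(kernel)
        for i in range(M):
            for j in range(i, M):
                for k in range(j, M):
                    v = raw[i, j, k]
                    for p in {(i, j, k), (i, k, j), (j, i, k),
                              (j, k, i), (k, i, j), (k, j, i)}:
                        sym[p] = v
        sigma = (np.linalg.norm(kernel) / np.linalg.norm(sym)
                 * 10 ** (-snr_db / 20.0))
        return kernel + sigma * sym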
Thus the proposed algorithm can provide initial estimates for iterative procedures, such as those described by Dreesen et al. (2017).

Table 1. Monte-Carlo simulation results: systematic and random errors of the estimates of g1(τ), g2(τ) and g3(τ) at each noise level.

Fig. 7. Impulse responses, g_k(t), from the first filter bank, and their estimates obtained from kernels corrupted with 10 dB of noise. The plot shows the 99% confidence intervals obtained from the Monte-Carlo simulations. Note that the bias is more significant than the variance of the estimates.
Eco-Efficiency Assessment of Material Use: The Case of Phosphorus Fertilizer Usage in Japan's Rice Sector

To raise the eco-efficiency of the economy, it is important not only to investigate the eco-efficiency of specific products but also to ascertain whether resources are used effectively throughout the life cycle. In this paper, we address the eco-efficiency of agricultural use of phosphorus in Japan in the years 2005, 2010, and 2011. The increase in revenue from crops due to the use of phosphorus-based fertilizer is considered. The method used allows us to isolate the impact of a single nutrient and to convert this to a monetary value. For the impact assessment of P resource use, we combine life-cycle inventory (LCI) data with the LIME2 (Life-cycle Impact Assessment Method based on Endpoint modeling) method. The most significant environmental impact of the phosphorus chemical fertilizer life cycle is found to be on climate change, driven by high chemical fertilizer use. In 2005, the provided service of phosphorus resource use was estimated to be the highest, while the value-added service of phosphorus increased, resulting in an uptick in eco-efficiency. During the study period, the lowest eco-efficiency of P resource use occurred in 2011. The results from this study, and the methods used, should be of great interest to industry, the research community, and policy makers concerned with resource efficiency.

Introduction

Modern agriculture is heavily dependent on phosphorus-based fertilizers, as phosphorus use improves productivity and contributes to food security. However, the production and use of phosphorus-based fertilizers place a significant burden on the environment, largely through eutrophication of water bodies, as leaching and surface runoff carry the phosphorus from the soil. Phosphorus is derived from phosphate rock, which is a nonrenewable resource. Current global reserves are estimated to have a life of 50-100 years [1]. The depletion and uneven distribution of phosphorus resources have prompted some countries to reconsider its role as a critical raw material. In response to resource constraints, population pressure, and the widespread environmental damage associated with current development patterns, the international community has sought ways of limiting the demand for resources.

Adding phosphorus, which is involved in complex energy transformations in the plant, to soil low in available phosphorus promotes root growth and winter hardiness, stimulates tillering, and often hastens maturity. Experts have recommended using phosphorus as a row-applied starter fertilizer for increasing early growth. However, the environmental impacts of phosphorus fertilizer usage cannot be neglected. Many studies have addressed environmental performance at a systematic level [2,3]. The resource efficiency and eco-efficiency of phosphorus production by a chemical company have been evaluated in China, which is rich in phosphate rock [4]. However, the effects of fertilizer application need to be carefully weighed against the increased profits from yield increases, from the perspective of environmental impacts, using life cycle assessment. Nogi et al.
(2016) evaluated the eco-efficiency of phosphorus-based fertilizer using price as a measure [5,6]. However, direct measurement of eco-efficiency using indicators such as crop yield has not yet been attempted at the national level in Japan, which is reliant on imported phosphate. This study focuses on the use of phosphorus and its yield effect in rice production in Japan. Phosphorus has unique characteristics, including its essential place in human activities, its strategic role in the production of agricultural products, and its uneven global distribution. Japan has no extractable phosphorus reserves, making its more efficient use a key long-term goal for sustainable development.

In addition, to create a sustainable society, an efficient production and consumption life cycle is needed to reduce environmental impacts and limit resource depletion. Eco-efficiency is an important dimension of this, and is conventionally assessed as a quantitative tradeoff between negative environmental impacts and, in this case, the positive economic value of agricultural products. This relation is expressed as a ratio, facilitating comparison of competing strategies. However, eco-efficiency assessment needs to be extended from its current definition to include direct service indicators, such as the service (added value) of each product. In this study, eco-efficiency is analyzed from a life cycle perspective, and the main indicator is defined as follows:

Eco-efficiency = provided service of resource use (added value of resource use) / environmental impact of resource use.

Here, the added value or provided service of the phosphorus resource use in fertilizers is the yield increase. We chose 2000, 2005, and 2011 as the study years, since these are the most recent for which data are available. As a first step, we aim to develop an eco-efficiency indicator for the rice sector, as rice is both a major crop and a food staple in Japan, and is regarded as being at the heart of Japanese life and culture. The study has three main objectives: (1) to quantify the "service provided by the phosphorus resource" as a chemical fertilizer, (2) to quantify the environmental impacts of chemical fertilizers through the full life cycle, and (3) to assess the eco-efficiency of the phosphorus resource when used as a fertilizer.

Materials and Methods

In the assessment of eco-efficiency, the study followed the five assessment stages of ISO 14045, 2012 [7]: (i) Definition of Goals and Scope (covered by Section 1); (ii) Environmental Assessment; (iii) Value Assessment; (iv) Quantification of Eco-efficiency; and (v) Interpretation. The methodological details of each phase are given in the following sections.

Environmental Assessment

The environmental impact of phosphorus resource use in chemical fertilizers (JPY/kg P) was quantified using Multiple Interface Life Cycle Assessment (MiLCA), a life cycle assessment (LCA) support system that enables the researcher to perform the basic calculations required for LCA, including inventory analysis and impact assessment. In addition, the Inventory Database for Environmental Analysis (IDEA) is included as standard equipment. The original data sources in the MiLCA software are built from Japanese and global statistics, modeled with an attributional approach to production processes, and from industry associations. The final emission results cover GHGs (greenhouse gas emissions) and about 50 elementary flows, using approximately 3000 embedded datasets [8].
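The indicator defined above is a simple ratio; a minimal sketch follows, with made-up numbers, since the study's actual values live in its tables.

    def eco_efficiency(added_value_jpy: float, environmental_impact_jpy: float) -> float:
        """Eco-efficiency as defined in the text: provided service (added
        value of resource use) divided by its life-cycle environmental
        impact, both in JPY, so the ratio is dimensionless."""
        return added_value_jpy / environmental_impact_jpy

    # Illustrative placeholder values only:
    print(eco_efficiency(added_value_jpy=120.0, environmental_impact_jpy=80.0))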
The process by which environmental impacts were analyzed and the system boundaries are shown in Figure 1. When establishing the appropriate system boundaries for LCA, a "cradle-to-grave" approach was taken, starting from the extraction of the primary resource, phosphorus, to the point where the products leave the agricultural system. Figure 1 shows the upstream mining and production steps of phosphorus resource use and the downstream usage and outflow. The upstream environmental impacts of the production of phosphorus fertilizer and the downstream impacts of phosphorus runoff to the hydrosphere must be summed to give the total impact. In this research, the MiLCA software is mainly used for evaluating environmental impacts. However, that software cannot cover the downstream part. Thus, handling the downstream and upstream parts with respective detailed methodologies is extremely important. Details are explained in the following sections.
Environmental Impact of Phosphorus Resource Use up to the Production Stage of Phosphorus Fertilizer To take account of ore transport to Japan, the total travel distance from the main trading partners (China, Morocco, and South Africa) was weighted by the amount imported, using the following equation:

Di,j = Dactual distance,j × ri,j (1)

Here, Di,j is the total distance weighted by the imported quantity; Dactual distance,j is the transport distance/sea route from country j to Japan, calculated using the Ecological Transport Information Tool for Worldwide Transports [9]; and ri,j is the ratio of phosphate ore imported to Japan [10]. The port locations in the exporting countries and in Japan were identified using a range of data sources, including the World Port Rankings [11] and the Trade Statistics for Japan, broken down by prefecture [10]. More details on the distance calculations can be found in S note 1 in the Supporting Information: Final import and distance data for phosphate ore. The total calculated distance (7729 km) was entered into the MiLCA software to derive the environmental impact of producing 1 kg of phosphorus fertilizer (EPe, in JPY/kg P). This gave the environmental impact of phosphorus resource use up to the production stage of phosphorus fertilizer. Although more recent data were sometimes available for these calculations (for example, the 2015 World Port Rankings), we used only 2010 data, for consistency with the base data of MiLCA.

Environmental Impacts of Phosphorus Runoff For the third stage, the impact of the phosphorus fertilizer used in the field was estimated using Equation (2). Here, Qpf is the quantity of phosphorus fertilizer used in the field, EPe is the environmental impact from producing 1 kg of phosphorus fertilizer (JPY/kg P), and QuPf is the quantity of phosphorus fertilizer used per kilogram of rice produced (kg Pf/kg Rice), obtained from the Inventory Database for Environmental Analysis (IDEA), which is the standard database for use with the MiLCA LCA software [8]. When addressing the post-use system, an important question arose: How much phosphorus leaches from the soil from the input of a unit of phosphorus? Equation (3) was used to quantify the phosphorus runoff QPr (kg Pr/kg Pf), where Qpf stands for the quantity of P-fertilizer used in the field (kg Pf/kg Rice) and ∂ is a coefficient giving the share of phosphorus fertilizer applied to the field that will run off to the environment. This is given as a percentage, but was calculated in kg Pr/kg Pf. QPf was derived from Equation (2). The final parameter is the rice produced by the application of 1 kg of phosphorus fertilizer (kg Rice/kg Pf) and was also derived from Equation (2). The ∂ coefficient (ratio of phosphorus runoff from the field to the hydrosphere) was derived using the balance method of FAO [12]. Figure 2 is a diagram of substance and water flow in a paddy field. The input flows included phosphorus fertilizer, irrigation water, and rainwater, and the outflows included the phosphorus fertilizer taken up by the crops, surface water, and percolation water.
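To make the import-weighted distance of Equation (1) concrete, here is a minimal Python sketch. The per-country sea distances and import shares are hypothetical placeholders; the paper reports only the resulting total of 7729 km and the underlying data sources [9-11].

```python
# Minimal sketch of Equation (1): import-weighted transport distance.
# Distances and shares below are placeholders, not the study's values.
routes = {
    # country: (sea distance to Japan in km, share of Japan's phosphate imports)
    "China":        (2300, 0.45),
    "Morocco":      (17000, 0.30),
    "South Africa": (13500, 0.25),
}

# Shares should sum to one for a proper weighted average.
assert abs(sum(share for _, share in routes.values()) - 1.0) < 1e-9

weighted_distance = sum(dist * share for dist, share in routes.values())
print(f"import-weighted distance = {weighted_distance:.0f} km")
```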
For the final, post-use stage, the important question concerned the amount of phosphorus leachate produced by the input of a unit of phosphorus (kg P/ha). Phosphorus is one of the most important mineral nutrients used in agriculture, and as farming systems have intensified, the concomitant increase in losses of phosphorus from agricultural land has had serious detrimental effects on water quality and the environment. However, after phosphorus is added to the soil in the form of fertilizer or manure, the large residue buildup may increase crop yields for a number of years. After taking this residual phosphorus into account, the amount of phosphorus in the runoff water was estimated using Equations (4)-(6). The data used in these equations were taken from a series of previous studies [13-18] to reflect Japanese agricultural practices and soil characteristics.
∂ = P outflow from the field to hydrosphere / P applied in the field (4)

Here, P outflow from the field to hydrosphere is the total amount of outflow of phosphorus in surface water and percolation water to the hydrosphere (kg Poutflow/ha), and P applied in the field is the amount of phosphorus applied as chemical fertilizer in the field (kg Pfertilizer/ha).

P outflow from the field to hydrosphere = P outflow of freshly applied P − P outflow of residual P in the soil (5)

Here, P outflow of freshly applied P is the outflow from conventional application of phosphorus as chemical fertilizer in the field to the hydrosphere (kg Poutflow/ha), and P outflow of residual P in the soil is the outflow of residual phosphorus from the soil to the hydrosphere (kg Poutflow/ha).

P applied in the field = P freshly applied − P residual P in the soil (6)

Here, P freshly applied is the amount of phosphorus in the chemical fertilizer applied in the field (kg Pfertilizer/ha) and P residual P in the soil is the amount of phosphorus accumulated in the soil (kg Pfertilizer/ha). In all of the above equations, a value of ∂ = 0.48% is used. The derivation of this ratio is described in S_note 2 in the Supporting Information: Ratio of phosphorus runoff from the field to the hydrosphere. Finally, the environmental impact associated with phosphorus runoff (EPr, in JPY/kg P) is estimated using Equation (7):

EPr = EuPr × QPr (7)

Here, EuPr is the environmental impact produced by each kilogram of phosphorus runoff, obtained using LIME2, the impact assessment method associated with the IDEA database in the MiLCA LCA software (JPY/kg Pr), and QPr is derived from Equation (3).

Environmental Impact of Phosphorus Resource Use in Chemicals As described above, the goal is to estimate the total environmental impact of upstream phosphorus fertilizer production and downstream phosphorus runoff to the hydrosphere. Equation (8) was used to derive the total impact EP (JPY/kg P):

EP = EPe + EPr (8)

This research addressed the phosphorus content of four chemical fertilizers used in rice production: (A) calcium superphosphate; (B) fused phosphate fertilizer; (C) low chemical fertilizer; and (D) high chemical fertilizer. The A and B types contain mainly phosphorus (P), whereas the C and D types also contain potassium (K) and nitrogen (N). The nutrient content (%) of each type of fertilizer is shown in Table 1 [19]. The environmental impacts of the four types were coded as EPA, EPB, EPC, and EPD, and their respective values were derived using Equations (1)-(8). The total environmental impact of phosphorus in the four types (JPY) was then estimated using Equation (9), summing over the types:

(EP)P = Σi Ai × EPi (9)

Here, Ai is the amount of fertilizer of type i consumed, given by the amount applied per hectare (kg P/ha) and the area of rice plantation (ha).
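To show how Equations (3)-(9) fit together numerically, here is a minimal Python sketch. It assumes Equation (3) reduces to QPr = ∂ per kilogram of P applied; apart from ∂ = 0.48% (from the text), every input value and the per-type consumption figures are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of Equations (3)-(9). Only delta (0.48%) comes from the text;
# all other numbers are hypothetical placeholders.
delta = 0.0048   # kg P runoff per kg P applied (Eq. 4 result, from the text)
ep_e = 25.0      # JPY/kg P: upstream impact of fertilizer production (placeholder)
eup_r = 600.0    # JPY/kg P runoff: LIME2 damage factor (placeholder)

q_pr = delta                 # Eq. (3) as assumed: runoff per kg P applied
ep_r = eup_r * q_pr          # Eq. (7): downstream impact, JPY/kg P
ep = ep_e + ep_r             # Eq. (8): total life-cycle impact, JPY/kg P

# Eq. (9): scale to the sector. Per-type consumption A_i (kg P) and per-type
# impacts EP_i are placeholders; the real values come from Table 1 and MiLCA.
amounts = {"A": 4.0e6, "B": 3.0e6, "C": 2.0e6, "D": 1.0e6}
impacts = {"A": ep, "B": 1.1 * ep, "C": 1.3 * ep, "D": 1.6 * ep}
ep_p = sum(amounts[i] * impacts[i] for i in amounts)
print(f"EP = {ep:.2f} JPY/kg P; sector total (EP)_P = {ep_p:.3e} JPY")
```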
Value Assessment The economic benefit of phosphorus resource use was derived using the total value added (TVA), or total provided service (JPY). This is the economic value added by applying phosphorus fertilizer and corresponds to the addition to net cash flow of all the different factors across the rice production sector. It was calculated using Equation (10):

TPrvS = PrvS × P (10)

Here, PrvS is the monetary value of the marginal increase in rice production from one unit of phosphorus (JPY/kg P), and P is the amount of phosphorus fertilizer used in the rice production sector (kg P) [19], given by Equation (11):

P = Pha × A (11)

where Pha is the amount of phosphorus fertilizer applied per hectare (kg P/ha) and A is the rice plantation area (ha). The PrvS (JPY/kg P) was derived from the increase in rice yield per unit of phosphorus, ∆Q (kg rice/kg P), and the gross added value of the rice production sector, V (JPY/kg rice), and was given by Equation (12):

PrvS = ∆Q × V (12)

Based on previous studies [20-23], the increase in rice yield (∆Q) was derived from the agronomic efficiency of the applied phosphorus nutrient, using Equation (13):

∆Q = (YN,P,K − YO/P)/FP (13)

where YN,P,K and YO/P are the crop yields with and without the nutrient phosphorus, and FP is the amount of phosphorus applied per field, all in kg ha−1. More details on ∆Q can be found in S note 3 in the Supporting Information: Share of soil type by paddy field in Japan in 2007. In this paper, the yield increase effect is taken as a direct service indicator of phosphorus resource use in the rice sector, and ∆Q is calculated by Equation (13). In this case, the ∆Q value is calculated based on the experimental results of literature reviews, assuming the same conditions of agro-ecological resources (soil texture, terrain, and climate) but without distinguishing categories of rice varieties. The parameter V in Equation (12) is the gross value added per unit of rice produced (JPY/kg rice). This is the gross value added by Japan's rice sector, and was taken from the input/output tables of the Ministry of Internal Affairs and Communications [24]. The total value added by each of the four fertilizer types was derived using Equations (10)-(13), and the total provided value of phosphorus in the chemical fertilizers, (TPrvV)P (JPY), was obtained using Equation (14) by summing over the four types.

Eco-Efficiency Indicators Eco-efficiency is the ratio between the value of the goods produced and the environmental impacts. The eco-efficiency of phosphorus resource use in the rice production sector (Ecoeff.P rice) was estimated using Equation (15):

Ecoeff.P rice = (TPrvV)P / (EP)P (15)
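As a worked illustration of Equations (10)-(15), the following Python sketch computes ∆Q, the value added per kilogram of P, and the eco-efficiency ratio. All numeric inputs are hypothetical placeholders; only the formula structure follows the text.

```python
# Minimal sketch of Equations (10)-(15); every number is a placeholder.
y_npk = 6000.0   # kg rice/ha with N, P, K applied (placeholder)
y_op = 5400.0    # kg rice/ha without P (placeholder)
f_p = 20.0       # kg P applied per field, per ha (placeholder)
v = 250.0        # JPY gross value added per kg rice (placeholder)
p_ha = 20.0      # kg P fertilizer applied per ha (placeholder)
area = 1.6e6     # ha of rice plantation (placeholder)
ep_per_kg_p = 900.0  # JPY life-cycle impact per kg P, from Eq. (8) (placeholder)

delta_q = (y_npk - y_op) / f_p   # Eq. (13): agronomic efficiency, kg rice/kg P
p_rv_s = delta_q * v             # Eq. (12): added value per kg P, JPY/kg P
p_total = p_ha * area            # Eq. (11): sector P use, kg P
tprvs = p_rv_s * p_total         # Eq. (10): total provided service, JPY
ep_p = ep_per_kg_p * p_total     # sector environmental impact, JPY
eco_eff = tprvs / ep_p           # Eq. (15): eco-efficiency ratio
print(f"dQ = {delta_q:.1f} kg rice/kg P, eco-efficiency = {eco_eff:.2f}")
```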
Environmental Impacts of Phosphorus Production (JPY/kg P) To quantify the environmental impacts of phosphorus resource use in fertilizers, we consider the impacts from the material inputs and those associated with energy use. Figure 3 shows the total environmental impacts for each type of fertilizer derived from the MiLCA LCA software. Although Figure 3 identifies seventeen categories of environmental impact, it includes some impacts small enough to be negligible, as their environmental impacts are less than 0.001 JPY/kg-fertilizer: (1) ozone depletion, (2) ecotoxicity, water, (3) ecotoxicity, ground, (4) indoor air pollution, (5) human toxicity, water, (6) human toxicity, ground, and (7) noise. As can be seen, the biggest environmental impact category resulting from the use of the four types of fertilizer is urban air pollution throughout the chemical fertilizer life cycle. The most significant environmental impact of the phosphorus chemical fertilizer life cycle is found to be on climate change, from the high chemical fertilizer type.

Of the 17 categories shown in Figure 3, seven thus have only small impacts on the environment. To confirm the results in the most prominent categories, and to provide more detail on the upstream and downstream parts of the environmental impacts, Table 2 is provided. It can be seen clearly that urban air pollution, followed by impacts on climate change and resources, is the most important environmental impact resulting from the upstream part of phosphorus resource use, while eutrophication is the most severe in the downstream part. As basic data only exist for 2010 in the MiLCA database, the impact of phosphorus resource use is assumed to be the same in each of our study years (2000, 2005, and 2011). While the historical trend is important, it cannot be addressed using the MiLCA software, which is unable to process time-series data.

Value of Phosphorus Resource Use (JPY/kg-P) Figure 4 shows the added value of phosphorus use (the monetary value of the marginal increase in rice production from one unit of phosphorus, by type of fertilizer). Assuming the rice yield to be constant unless increased by the addition of phosphorus, the marginal value of phosphorus resource use was highest in 2005 among the target years. This can be interpreted as indicating that the gross value added for the rice sector (V in Equation (12)) was highest in 2005. This is discussed in the next section. In summary, the increase in rice yield from a unit of phosphorus in a chemical fertilizer is estimated to be in a linear relation with the gross added value of rice production.
Eco-Efficiency of Phosphorus Resource Use Figure 5 shows the eco-efficiency of phosphorus resource use. Assuming that the environmental impact does not change across the targeted years, the eco-efficiency reflects fluctuations in the gross added value per unit of rice and in the consumption of each chemical fertilizer in the rice sector. As the gross added value per unit of rice in the rice industry was significantly higher in 2005, as shown in Figure 4, the calculated eco-efficiency was also highest in that year.
Although the amount of phosphorus-based fertilizer used in rice production in Japan decreased, the value added by fertilizer increased between 2000 and 2005. This increased the eco-efficiency, because the denominator is the monetized environmental impact, which could not be varied due to the limitations of the MiLCA software. However, in 2011, the gross value added for the rice sector decreased slightly and the production of rice also decreased. As the decline in the gross value was much greater than that in rice production, eco-efficiency was lower in 2011.

Overall Eco-Efficiency Eco-efficiency is affected by both the value of the rice produced and the consumption of phosphorus fertilizer. The added value (JPY/kg P) was associated with ∆Q (kg rice/kg P), V (the gross value added per unit of rice production, JPY/kg rice), and the consumption of phosphorus fertilizer (kg P/ha). ∆Q (the change in rice yield from the use of phosphorus) was constant across the targeted years. The two main factors that contributed to the decrease in eco-efficiency were as follows: (1) the gross value added per unit of rice production (JPY/kg rice), and (2) the consumption of phosphorus fertilizer (kg P/kg rice). Figure 6a shows the service provided by phosphorus used in rice production. Here, (TPrvV)P (in JPY) is the total provided value from the phosphorus fertilizer applied per hectare (kg P/ha) and the area of rice plantation (ha). Figure 6b shows the environmental impact of phosphorus use. Here, (EP)P is the total environmental impact of phosphorus fertilizer (JPY), based on the amount applied per hectare (kg P/ha) and the rice plantation area (ha). The decline in the provided service was greater than that in the environmental impact. The decline in total value reflected a decrease in both the consumption of fertilizer and the gross value added for the rice sector in the target years. The decline in environmental impact reflected a decrease in the consumption of fertilizer in the target years.
Promoting Eco-Efficiency in Agriculture Eco-efficiency can be promoted by decreasing the use of chemical fertilizer. One way of doing this is to speed up the introduction of organic fertilizers. These are produced from organic waste using natural processes such as composting or vermicomposting. In Japan, organic fertilizers can be classified by their derivation from animal matter (fish meal, bone meal, dried blood, and other meals), vegetable matter (rapeseed meal and soybean meal), organic waste (dried microbes and sewage sludge), and composting materials (cattle manure, swine manure, poultry manure, bark, sewage sludge, and urban refuse).

Japan is the country with the greatest use of chemicals in agriculture. A 2004 OECD survey reported that Japan's farmers use almost 16 kg of chemicals per hectare in rice plantations, whereas U.S. farmers use only 2 kg [25]. However, the Japanese government now sets strict limits on the use of fertilizers and pesticides, and organic produce is becoming more widespread. The use of organic fertilizers can improve the environment by making locally grown produce available to consumers and reducing the demand for imports of agricultural produce and of the precursors of chemical fertilizers.
There are more than 30 million hectares of certified organic fields in the world. In Japan, the total area was less than 10,000 ha in 2010, or 0.2% of the total arable land. In the same year, total production of organic foods in Japan was less than 60,000 tons, including 31,000 tons of vegetables and 11,000 tons of rice [26]. Clearly, Japan is unable to meet the demand for organic products, and the promotion of organic farming is expected to increase eco-efficiency while establishing more sustainable agriculture. However, the use of organic fertilizers will increase costs, and the total provided service (TPrvS) of organic fertilizers and their environmental impacts should be investigated. Due to limitations of time and data, this is left for future research.

One more possible way to reduce chemical fertilizer usage in the agriculture sector is the use of bio-fertilizers or bio-organic fertilizers, which avoid the environmental pollution resulting from the build-up of agrochemicals in soil and water and from low nutrient uptake efficiency. This could allow farmers to use cheaper and more environmentally friendly fertilizers. Currently in Japan, bio-fertilizer is used mostly in Hokkaido, but nationwide usage is still developing through bio-fertilizer projects and research. Although it is certain that the use of bio-fertilizer will reduce costs, evaluation of the total provided service (TPrvS) of bio-fertilizers and systematic study of their environmental impacts could not be completed in this study due to time and data limitations. This is set aside as a possible scenario for future studies on promoting eco-efficiency in agriculture for specific crops.

Conclusions In this study, we evaluated the eco-efficiency of phosphorus resource use by considering the total value added based on crop yield and the life-cycle environmental impacts. We concluded that the biggest environmental impact category was climate change, arising from phosphorus resource use in fertilizer throughout the life cycle and driven by the high chemical fertilizer type. As an overall result, urban air pollution is the most severe of the environmental impacts of the four fertilizer types in the upstream part of phosphorus resource use, and eutrophication is the worst impact in the downstream part. The total provided service in monetary value from phosphorus use in chemical fertilizers was highest in 2005, and the highest eco-efficiency was also found in that year. The main factors accounting for the decrease in eco-efficiency over the study period were a decrease in the gross value added by the rice sector and a fall in the consumption of phosphorus fertilizer.

This research aimed to evaluate the eco-efficiency of phosphorus resource use, which means that only the environmental impacts of phosphorus resource use need to be explored. However, the production of phosphorus fertilizer requires other resources, such as fossil fuels and machinery (iron, steel, copper, etc.). Our current estimates of the environmental impacts of phosphorus resource use in chemical fertilizer are therefore combined with those of the other resources used to produce the phosphorus in chemical fertilizer. In other words, our estimated results are environmental loads mixed with other resource use in the supply chain. As a future study, a methodology for allocating the environmental impacts produced by phosphorus resource use alone should be developed, although this will be a complex and challenging task.
There are many parameters used in this research, each playing a specific and important role. Since this is pioneering research, the respective data are rare. We therefore handled the literature review for the estimation of parameter values carefully, trying to avoid uncertainty as much as we could. However, even in the MiLCA LCA software, uncertainty depends on the dataset; each dataset is evaluated by a modified pedigree matrix. Thus, in future research, the factors driving eco-efficiency should be evaluated using decomposition analysis. Sensitivity analysis should then follow for the most sensitive parameters identified by the decomposition analysis. In addition, further research is needed to estimate the impacts of phosphorus accumulation in the soil in the specific study years. The current study considered only four fertilizer types, due to limitations in data availability. In future research, this will be extended to cover all fertilizer types used in Japan. In addition, the environmental impacts of chemical fertilizers and organic fertilizers will be compared. The methods used here will be applied to other nutrients, such as K and N, and the research will also be extended to a wider range of crops.

Our results can support product development and improvement, strategic planning (budgeting and investment analysis), public policy making, marketing, green purchasing, and awareness raising. As an ultimate goal, the research aims to reach the level of eco-efficiency assessment of phosphorus resource use for all types of products, and the method proposed in this paper can be applied to other resources, supporting policy analysis and allowing the evaluation of measures such as optimizing the use of phosphorus resources, increasing the value of phosphorus use, and reducing the environmental impacts of phosphorus.

Figure 1. System boundary of the life cycle of the phosphorus resource used for chemical fertilizer.
Figure 2. Substance and water flow in a paddy field (adapted from [13]).
Figure 3. Environmental impacts of phosphorus resource use for each fertilizer type.
Figure 4. Provided service of phosphorus resource use in each type of fertilizer.
Figure 5. Eco-efficiency of phosphorus resource use in rice production.
Figure 6. Provided service and environmental impact of phosphorus resource use. (a) Provided service by phosphorus resource use (JPY); (b) Environmental impact of phosphorus resource use (JPY).
Table 1. Amount of nutrient (%) in each type of fertilizer. Note: P2O5 stands for phosphorus pentoxide. This research focused only on P; therefore, all data related to P2O5 were carefully converted by multiplying by a factor of 0.4364 to estimate the P content.
Table 2. Categories of environmental impacts of phosphorus resource use.
Clinical Performance Evaluation of VersaTrek 528 Blood Culture System in a Chinese Tertiary Hospital Background: The aim of this study was to evaluate the clinical performance of the VersaTrek 528 compared to the BACTEC FX 400 blood culture (BC) system. Materials and Methods: Simulated and clinically obtained BCs were used in the study. Confirmed bacterial species (n = 78), including 43 Gram-positives, 30 Gram-negatives, and 5 Candida albicans strains, were each inoculated into BC bottles. Clinically obtained BCs were subdivided into two groups, A and B. Group A comprised 72 BC sets (pairs: aerobic and anaerobic) in which the set inoculated with 5 ml of blood was processed in the VersaTrek BC system, whilst the set inoculated with 10 ml of blood was processed in the FX BC system. In group B, 76 BC sets (pairs), corresponding to 152 VersaTrek bottles and 152 FX bottles, were inoculated with the same volume (10 ml) of blood and processed in each system. Results: In the simulated BC study, 90% (63/70) of the VersaTrek aerobic bottles were positive, which was higher than the rate for the FX 400 (59/70, 84%), but not statistically significantly so (P = 0.423). In contrast, FX 400 anaerobic bottles had a higher positive rate than the other BC system (84 vs. 77%), although this too was statistically insignificant (P = 0.267). The time to detection of organisms in the two BC systems was comparable for both aerobic (P = 0.131) and anaerobic bottles (P = 0.104). In the clinical BCs of group A, the FX BC system had slightly higher positive rates for both aerobic (11.1 vs. 9.7%, P = 0.312) and anaerobic (8.3 vs. 6.9%, P = 0.375) bottles. However, the difference was not statistically significant. In group B, VersaTrek aerobic bottles had a higher positive rate than those of the other BC system (10.5 vs. 5.2%, P = 0.063). In terms of the positive rates of sub-studies A and B, the VersaTrek and FX BC systems were comparable. Conclusion: There was no significant difference between the two BC systems in the detection of bacteria and fungi in simulated BCs. In clinical BCs, the performance of the VersaTrek BC system, with inoculation of 5 or 10 ml of a patient's blood, was comparable to the FX system with inoculation of 10 ml of a patient's blood.

INTRODUCTION In spite of the great advances in medical science in the past century, bloodstream infection remains a growing public health problem worldwide (Goto and Al-Hasan, 2013). Detection of microorganisms in the blood is important for identifying and isolating pathogens in clinical laboratories, allowing clinicians to optimize patients' treatment (Jacobs et al., 2017). Blood culture (BC) is among the most common microbiological tests, and remains the gold standard for the detection of bloodstream infections (Burd and Kehl, 2011; Lamy et al., 2016). Over the past few decades, improvements in BC media and the availability of software-assisted automated growth detectors have improved the recovery of bloodstream pathogens and decreased the time to detection (TTD) of microbial growth (Fiori et al., 2014). Effective utilization of BC systems can help clinicians initiate appropriate antimicrobial treatment within 24 h of BC positivity. The VersaTrek 528 is an automated BC microbial detection system produced by ThermoScientific, based on the detection of pressure changes due to gas consumption (O2) and/or production (CO2, N2, H2, etc.) by microorganisms in a BC bottle. It is a more advanced and automated instrument than those relying on the single detection of CO2 production routinely used in several BC systems.
Unlike most BC systems, the VersaTrek 528 can detect bacterial pathogens that consume O2 without CO2 production. It can therefore provide more credible and reliable results, improve recovery, and reduce the TTDs of significant pathogens. The BD BACTEC FX 400 instrument is an automated system for detecting the presence of microorganisms in clinical samples. This system is based on the utilization of carbohydrate substrates in BC media and the continuous production of CO2 by growing microorganisms, which correlates positively with fluorescence intensity. It detects the growth of microorganisms through a fluorescence sensor set at the bottom of the BC bottle (Kirn and Weinstein, 2013). This study represents the first evaluation of the performance of the VersaTrek 528 BC system in the detection and TTDs of bacteria and fungi, compared with the BACTEC FX 400 BC system, using simulated and clinically obtained BCs.

Bacteria and Equipment The study was performed at Peking Union Medical College Hospital (PUMCH), a tertiary hospital in Beijing, China. The study was carried out in accordance with the institute's guidelines and procedures, including ethics approval by the Human Research Ethics Committee. All the clinical isolates and BC specimens used in this study were collected at PUMCH. The 78 bacterial species used in the simulation study were isolated from positive BCs and stored at −80 °C until use. Identification of these isolates was performed by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) and 16S-rRNA gene sequencing. The 148 clinical BC specimens were collected from adult patients with suspected bloodstream infections during the period 2016-2017. The bacterial species included in the simulation study represent the majority of bacteria isolated from BCs at PUMCH. The Thermo Fisher Scientific VersaTrek 528 BC system and the BD FX 400 BC system were used in the study. The VersaTrek 528 aerobic BC bottles (VT-S) and VersaTrek 528 anaerobic BC bottles (VT-F) were incubated in the VersaTrek 528 BC system. FX 400 aerobic BC bottles (FX-S) and FX 400 anaerobic BC bottles (FX-F) were incubated in the FX 400 BC system. The 78 clinical bacterial isolates were used in the simulated BC study. Isolates were recovered from frozen stocks and cultured overnight on appropriate agar medium at 37 °C and 5% CO2. Colonies from agar plates were re-suspended in physiological saline solution to 0.5 McFarland (bacteria: 1.5 × 10^8 CFU/ml, Candida: 10^6 CFU/ml) and diluted to a final concentration of approximately 10 CFU/ml. One milliliter of the final suspension was inoculated into both aerobic and anaerobic BC bottles for each of the two BC systems, as described previously (Almuhayawi et al., 2015), i.e., the VersaTrek 528 aerobic BC bottle (VT-S), VersaTrek 528 anaerobic BC bottle (VT-F), FX 400 aerobic BC bottle (FX-S), and FX 400 anaerobic BC bottle (FX-F). The BC bottles were each inoculated with 5 ml of sterile human blood prior to inoculation with the previously obtained organisms. Seventy clinical bacterial isolates were inoculated into the aerobic BC bottles of each of the two BC systems, and 73 isolates were inoculated into the anaerobic BC bottles of both systems (B. fragilis and Peptostreptococcus spp. isolates were only inoculated into anaerobic BC bottles; C. albicans isolates were only inoculated into aerobic BC bottles).
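For orientation, the dilution implied by these figures can be checked in a few lines of Python; the scheme below (simple ten-fold steps) is illustrative only and is not the authors' protocol.

```python
# Minimal sketch: dilution factor from a 0.5 McFarland suspension to the
# ~10 CFU/ml working inoculum described in the text.
import math

targets = {
    "bacteria": 1.5e8,  # CFU/ml at 0.5 McFarland (from the text)
    "Candida": 1.0e6,   # CFU/ml at 0.5 McFarland for Candida (from the text)
}
final = 10.0            # CFU/ml final suspension (from the text)

for name, start in targets.items():
    factor = start / final
    steps = math.log10(factor)  # equivalent number of 1:10 dilution steps
    print(f"{name}: overall dilution 1:{factor:.1e} "
          f"(~{steps:.1f} ten-fold steps); 1 ml inoculum = ~{final:.0f} CFU/bottle")
```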
Clinical Blood Culture Specimens Clinical BCs (n = 148), collected from adult patients with suspected bacteremia or fungemia during the period 2016-2017, were included in the study. Skin or access ports were disinfected with alcohol to reduce the contamination rate (Garcia et al., 2015). Blood was collected by peripheral venipuncture rather than by intravenous catheter (Dawson, 2014). The clinical BC study specimens were divided into two groups, A and B. Group A comprised 72 clinical BCs collected from the emergency department and the Medical Intensive Care Unit. For these BCs, 15 ml of blood was aseptically collected from the left arm and inoculated into one VersaTrek 528 aerobic BC bottle (VT-S: 5 ml) and one FX 400 aerobic BC bottle (FX-S: 10 ml). A further 15 ml of blood was collected from the right arm and inoculated into one VersaTrek 528 anaerobic BC bottle (VT-F: 5 ml) and one FX 400 anaerobic BC bottle (FX-F: 10 ml). Group B consisted of 76 BCs collected from the Intensive Care Unit and Infectious Disease wards. For these BCs, 20 ml of blood was aseptically collected from the left arm and inoculated into one VersaTrek 528 aerobic BC bottle (VT-S: 10 ml) and one FX 400 aerobic BC bottle (FX-S: 10 ml). Another 20 ml was collected from the right arm and inoculated into one VersaTrek 528 anaerobic BC bottle (VT-F: 10 ml) and one FX 400 anaerobic BC bottle (FX-F: 10 ml). The collected BC bottles were quickly transported to the laboratory and loaded into both BC systems. The experimental flow chart and grouping status are shown in Figure 1.

Conventional Methods Blood culture bottles inoculated with blood from patients were incubated in the respective BC system until signaling positivity, or for a maximum of 5 days (Altun et al., 2016). BCs flagged positive for microbial growth were Gram stained and the results immediately reported to the patient's physician. According to the Gram stain results, the positive BCs were sub-cultured onto relevant agar plates, and any growing organisms were identified by MALDI-TOF MS and 16S-rRNA sequencing. A BC that flagged positive, had organisms seen on Gram stain, and grew on subculture was considered a true positive. BCs that were flagged positive but showed no organism on the Gram smear and no growth on subculture were re-incubated. BCs that were still negative at the end of the 5th day of incubation were also terminally sub-cultured. Those that were persistently negative on subculture were classified as false-positive detections and excluded from analysis (Fiori et al., 2014). Bottles that did not signal positive at the end of 5 days were sub-cultured on agar plates (48 h at 37 °C) for confirmation.

Definitions and Data Analyses The positive rates for the two BC systems were compared using McNemar's chi-square test. Yates' correction was used when n was less than 40. The TTDs were compared by the Wilcoxon matched-pairs signed-rank test. Values of P < 0.05 were considered statistically significant. Statistical analyses were performed using GraphPad Prism 7. Simulated Blood Cultures In total, 140 aerobic and 146 anaerobic BC bottles were studied in the two BC systems, using the same clinical bacterial isolates (n = 78) (B. fragilis and Peptostreptococcus spp. isolates were only inoculated into anaerobic BC bottles; C. albicans isolates were only inoculated into aerobic BC bottles).
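The paired comparisons described above can be reproduced in a few lines of Python. This is a sketch of the statistical recipe only, using statsmodels for McNemar's test (with the continuity correction standing in for Yates' correction) and SciPy for the Wilcoxon signed-rank test; the counts and TTD values below are illustrative placeholders, not study data.

```python
# Minimal sketch of the paired statistics: McNemar for positivity rates,
# Wilcoxon matched-pairs signed-rank for TTDs. All values are placeholders.
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of paired bottle outcomes: rows = VersaTrek (+/-), cols = FX (+/-).
table = np.array([[54, 9],   # VT+/FX+, VT+/FX-
                  [5, 2]])   # VT-/FX+, VT-/FX-
# exact=False gives the chi-square form; correction=True applies the
# continuity (Yates) correction appropriate for small discordant counts.
res = mcnemar(table, exact=False, correction=True)
print(f"McNemar chi2 = {res.statistic:.3f}, p = {res.pvalue:.3f}")

# Paired TTDs (hours) for isolates positive in both systems (placeholders).
ttd_vt = np.array([13.8, 11.4, 16.1, 12.9, 15.0])
ttd_fx = np.array([13.7, 11.5, 18.0, 12.2, 14.6])
stat, p = wilcoxon(ttd_vt, ttd_fx)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}")
```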
During the 5-day incubation period, 90% (63/70) of VT-S, 84% (59/70) of FX-S, 77% (56/73) of VT-F, and 84% (61/73) of FX-F BC bottles signaled positive in the two BC systems (Supplementary Table S1). There was no significant difference in the detection of organisms between VT-S and FX-S (90 vs. 84%, P = 0.423). Likewise, there was no significant difference in the detection of organisms between VT-F and FX-F (77 vs. 84%, P = 0.267). Table 1 shows the detection of bacteria in both BC systems by Gram stain category, bacterial species, and aerobic status. There was no significant difference in the detection rate of organisms among Gram-positive, Gram-negative, and fungal isolates between the two BC systems. VersaTrek 528 aerobic bottles showed a higher positive rate for Gram-positive bacteria (92.5%) compared to the other method (80%), but the difference was not statistically significant (P = 0.182). Likewise, FX 400 anaerobic bottles showed a higher positive rate for Gram-negative bacteria (63.3 vs. 70%, P = 0.683), with no statistically significant difference. Three isolates (1 each of E. coli, P. aeruginosa, and Peptostreptococcus spp.) were detected only in the VersaTrek 528 BC system, and no isolate was detected only in the FX BC system. At the single-species level, there were no significant differences in the detection of organisms between the two BC systems (Table 1). Nine bacterial species (S. aureus, S. epidermidis, E. faecium, S. agalactiae, S. pyogenes, S. mitis, E. coli, A. baumannii, C. albicans) had a 100% (5/5) detection rate in VT-S. There were 7 bacterial species (S. aureus, E. faecalis, S. mitis, K. pneumoniae, E. coli, A. baumannii, C. albicans) with a 100% (5/5) detection rate in FX-S. Eight species (S. aureus, E. faecalis, E. faecium, S. agalactiae, S. mitis, K. pneumoniae, E. coli, B. fragilis) had a detection rate of 100% (5/5) in VT-F, and 10 species (S. aureus, S. epidermidis, E. faecalis, E. faecium, S. agalactiae, S. pyogenes, S. mitis, K. pneumoniae, E. coli, B. fragilis) had a 100% (5/5) detection rate in FX-F. S. aureus and S. viridans were both detected (100%, 5/5) in all four BC bottle types. The results of the Gram stain and MALDI-TOF MS identifications were concordant with the previously confirmed identity of the isolates inoculated, without any false positives or contaminated BC bottles.

FIGURE 1 | The experimental flow chart and grouping status in the study. Bacteroides fragilis and Peptostreptococcus spp. isolates were only inoculated into anaerobic BC bottles; Candida albicans isolates were only inoculated into aerobic BC bottles. In sub-study A, 5 ml of blood was inoculated into VersaTrek 528 blood culture (BC) bottles (aerobic and anaerobic) and 10 ml of blood was inoculated into FX 400 BC bottles (aerobic and anaerobic). In sub-study B, 10 ml of blood was inoculated into VersaTrek 528 and FX 400 BC bottles (aerobic and anaerobic). VT-S denotes the aerobic bottles of the VersaTrek 528 BC system; FX-S the aerobic bottles of the FX 400 BC system; VT-F the anaerobic bottles of the VersaTrek 528 BC system; FX-F the anaerobic bottles of the FX 400 BC system.

The performance of the specific BC bottles in terms of TTD is shown in Figure 2. The TTDs for the positive VT-S (n = 63) and FX-S (n = 59) bottles were similar ([median, 13.8] vs. [median, 13.1; IQR, 11.3-18.0], P = 0.8). Also, the TTDs for the positive VT-F (n = 56) and FX-F (n = 61) bottles were similar ([median, 15.2; IQR, 12.0-23.9] vs. [median, 14.2; IQR, 12.3-19.2]; P = 0.310).
The median TTD for FX-F bottles was shorter than that of VT-F. The 54 isolates that signaled positive in both VT-S and FX-S bottles, and the 52 isolates that signaled positive in both VT-F and FX-F, were used in the TTD comparison analysis (Figure 2). There was no significant difference between the TTDs of VT-S and FX-S ([median, 13.8; IQR, 11.4-16.1] vs. [median, 13.7; IQR, 11.5-18.0]; P = 0.131, n = 54). Likewise, there was no significant difference between the TTDs of VT-F and FX-F bottles ([median, 15.1; IQR, 11.9-23.7] vs. [median, 14.1; IQR, 12.9-19.0]; P = 0.104, n = 52). This suggests that VT-S has a median TTD similar to that of FX-S, and that FX-F has a shorter median TTD than VT-F (P = 0.104). In all, the TTDs of the two BC systems were comparable for both aerobic (P = 0.131) and anaerobic bottles (P = 0.104).

Clinical Blood Culture Specimens The organisms detected in the two BC systems are shown in Table 2. There were 15 (15/148, 10%) positive bottles in VT-S, 12 (12/148, 8%) in FX-S, 5 (5/148, 3%) in VT-F, and 7 (7/148, 5%) in FX-F. There was no significant difference in the detection of organisms in the aerobic bottles (10 vs. 8%, P = 0.508) or the anaerobic bottles (3 vs. 5%, P = 0.625) of the two BC systems. False positive results were recorded in 2 VT-S and 3 VT-F bottles, which flagged positive although the Gram stain yielded nothing and there was no growth in sub-cultures on relevant agar plates. In the group A study, the FX BC system detected more isolates in both the aerobic (11.1 vs. 9.7%, P = 0.312) and anaerobic (8.3 vs. 6.9%, P = 0.375) bottles compared to the other BC system. In sub-study B, VersaTrek 528 aerobic bottles had a higher positive rate than those of the other system (10.5 vs. 5.2%, P = 0.063). There was only one positive anaerobic bottle in the FX BC system, and none in the VersaTrek 528 BC system (Table 3). In terms of the positive rates of sub-studies A and B, the VersaTrek 528 and FX BC systems were comparable.

DISCUSSION For decades, BC has been considered the "gold standard" for detecting bacteremia (Almuhayawi et al., 2015). Many laboratories use a set of aerobic and anaerobic bottles per specimen in routine blood culturing (Mirrett et al., 2004; Coorevits and Van den Abeele, 2015). Clinical microbiology laboratory directors and supervisors must frequently face the problem of selecting the optimal BC system for use in their laboratory (Mirrett et al., 2003). The aim of this study was to compare the performance of the VersaTrek 528 and FX 400 BC systems. In the simulated BC study, we analyzed 78 commonly encountered clinical bacterial isolates. After 5 days of incubation, VersaTrek 528 aerobic BC bottles had a higher detection rate than FX 400 aerobic BC bottles (P = 0.423), and FX 400 anaerobic BC bottles had a higher detection rate than VersaTrek 528 anaerobic BC bottles (P = 0.267). However, the differences were not statistically significant. This suggests that, despite differences between the BC systems, they generally have similar performance in detecting microorganisms. Furthermore, there were no significant differences in the detection of Gram-positive bacteria, Gram-negative bacteria, or fungi between the two BC systems. The TTDs of the two BC systems were comparable for both aerobic (P = 0.131) and anaerobic bottles (P = 0.104). Rapid detection of growth in BCs is critically important, as it guides the early initiation of appropriate antibacterial therapy. However, the number of isolates studied for certain bacterial species was too small to draw definite conclusions, although some trends were observed.
Overall, there was no significant difference in the performance of the two BC systems in simulated BCs. For the clinical BC specimens, we divided the specimens into groups A and B in order to analyze the performance of the two BC systems based on the blood volume used. Since the number of microorganisms circulating in the bloodstream may be comparatively small, the blood volume used strongly affects BC sensitivity and TTDs. Indeed, the blood volume inoculated in a BC is the most important factor influencing the detection of bloodstream microorganisms, as bacterial or fungal density in the bloodstream is very low in the majority of patients with bloodstream infections (Lamy et al., 2016). Overall, the possibility of detecting bloodstream infections depends on the concentration of bacteria and fungi in the blood and the volume cultured (Arpi et al., 1989). However, obtaining a sufficient volume of blood (10 ml) to fill a set of BC bottles from a single venipuncture may be difficult, especially in the elderly, the young, and patients with shock. In this study, there was no significant difference between the VersaTrek 528 and FX 400 BC systems in either sub-study A or sub-study B. This suggests that the VersaTrek 528 BC system, which used an inoculation volume of 5 or 10 ml of blood, performed equally well in the detection of organisms compared to the BACTEC FX 400 BC system, which used an inoculation volume of 10 ml of blood. No significant difference was observed in sub-study A (P = 0.375) or sub-study B (P = 0.063). These findings suggest that the performance of the VersaTrek system with inoculation of 5 or 10 ml of a patient's blood is comparable to the FX 400 system with inoculation of 10 ml of a patient's blood. Due to the limited number of positive BC bottles in the study, we did not compare the TTDs of these bottles in the clinical sample study group. Five false positive bottles were detected in the VersaTrek 528 BC system, but none in the FX 400 BC system. High leukocyte counts, over-filled bottles, and/or errors in incubation are major causes of false positive BCs (Reimer et al., 1997). However, there is a scarcity of data on the possible causes of BC false positives. The causes of false positive bottles in the VersaTrek 528 BC system still need to be explored. There were three bottles with contaminants in the VersaTrek 528 BC system, despite the progress in skin antisepsis. These included one each of S. felis mixed with S. cohnii, C. tuberculostearicum, and S. hominis. These organisms are known colonizers of the skin and were detected only in one bottle of a BC set. This study has some limitations. The first is that, unlike the in vivo environment of a septic patient (who may have received antibiotics), we did not add antimicrobial agents in the simulated BC study. Secondly, the number of positive bottles in the clinical BC study group was small, and thus we did not compare the TTDs of the positive bottles. Finally, due to the small number of fungal isolates obtained during the course of the study, we cannot draw conclusions regarding the performance of the two BC systems for fungi.

CONCLUSION There was no significant difference between the two BC systems in the detection of organisms. The VersaTrek system, with inoculation of 5 or 10 ml of a patient's blood, was comparable to the BACTEC FX 400 system with inoculation of 10 ml of a patient's blood.
Further clinical studies are warranted to investigate the feasibility of inoculating 5 ml of patient's blood instead of 10 ml into VersaTrek 528 BC bottles in clinical practice, and to explore how to reduce the rate of false positive bottles and contaminants. AUTHOR CONTRIBUTIONS QY and Y-CX conceived and designed the work. MZ, XX, JD, HS, LZ, and QY performed the survey. PY and TK analyzed the data and wrote the manuscript. XM, LW, HZ, and WC collected the clinical samples. All authors read and approved the final manuscript.
2018-08-28T06:48:31.045Z
2018-08-28T00:00:00.000
{ "year": 2018, "sha1": "5178dece955c67dc52a4d2328f2536a32a527226", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3389/fmicb.2018.02027", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8e9b723914e625a980c0528535ae5ae960d4955a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
249665234
pes2o/s2orc
v3-fos-license
Associations of serum high-sensitivity C-reactive protein and prealbumin with coronary vessels stenosis determined by coronary angiography and heart failure in patients with myocardial infarction Background To explore the associations of serum high-sensitivity C-reactive protein (hs-CRP) and prealbumin (PAB) with the number of diseased coronary vessels, degree of stenosis and heart failure in patients with myocardial infarction (MI). Methods A total of 39 MI patients treated in the Cardiology department were selected as the observation group, and another 41 patients with normal results of coronary angiography during the same period were selected as the control group. The general data of the patients were recorded in detail, the content of serum hs-CRP and PAB in the peripheral blood was detected, and the number of diseased coronary vessels and the degree of stenosis were determined via coronary angiography. Results Compared with the control group, blood pressure and heart rate were significantly higher, the content of indexes related to the severity of MI was significantly increased, the content of hs-CRP was significantly increased, and the content of PAB was significantly decreased in the observation group. Hs-CRP was positively correlated with the number of diseased coronary vessels, degree of stenosis and heart failure in patients, while PAB was negatively correlated with the same factors. The survival rate of MI patients with a high content of hs-CRP was markedly lower than that of patients with a low content of hs-CRP. Conclusions Serum hs-CRP and PAB are closely associated with the number of diseased coronary vessels, degree of stenosis and heart failure in MI patients.
Introduction Acute myocardial infarction (AMI) is a heart disease whose pathogenesis involves persistent myocardial ischemia and hypoxia caused by the interruption of, or a decline in, coronary blood flow due to various factors on a background of coronary artery disease, ultimately leading to myocardial cell necrosis, usually accompanied by clinical complications such as cardiac insufficiency and heart failure (1, 2). With the intensification of population aging in China in recent years, the morbidity rate of AMI has increased year by year, making AMI one of the common diseases causing death and affecting quality of life (3). It has been a major concern for clinicians and researchers to judge the number of diseased coronary vessels, the degree of stenosis and the incidence of heart failure through clinical indexes, and thus to realize early prevention and treatment. High-sensitivity C-reactive protein (hs-CRP) is a sensitive marker for inflammation and atherosclerosis in the body. As a cytokine, it is involved in the formation of vascular plaques and the aggregation and adhesion of leukocytes, and it is closely related to vascular endothelial injury (4) and to the occurrence and development of AMI (5). Prealbumin (PAB), synthesized and released by hepatocytes, is an acute phase reactive protein which can effectively reduce the damage of toxic metabolites to the body, and it has a close correlation with the severity of AMI (6). Sun et al. (7) found that PAB can be significantly consumed in the case of heart failure, so that the content of PAB in the peripheral blood of patients greatly declines. In this study, the associations of serum hs-CRP and PAB with the number of diseased coronary vessels, degree of stenosis and heart failure in AMI patients were analyzed, so as to provide a theoretical basis for the early diagnosis and treatment of AMI patients in the clinic. Objects of study AMI patients treated in our hospital were selected; they met the diagnostic criteria for AMI of the American College of Cardiology (ACC), the European Society of Cardiology (ESC) and the American Heart Association (AHA): >50% stenosis of more than 1 vessel definitely confirmed by coronary angiography. All patients enrolled underwent echocardiography and coronary angiography to confirm the diagnosis. Exclusion criteria: 1) patients with chronic wasting diseases, such as malignant tumors, 2) those with severe dysfunction of the liver or kidney, 3) those with recent acute infections or hematological diseases, 4) those with autoimmune diseases, or 5) those with a history of myocardial diseases. In the observation group (n = 39), there were 21 males and 18 females aged 48-82 years.
Another 41 patients without coronary artery disease according to coronary angiography during the same period were selected as the control group, including 20 males and 21 females aged 46-79 years. The patients in both groups signed the informed consent and agreed to be enrolled in the study. The experimental scheme in this study was approved by the Ethics Committee of Xianju County People's Hospital. General data of patients The general data of the patients were recorded in detail, including age, gender, height, weight, blood pressure and heart rate at admission, smoking history, medical history and medication history. Judgment of severity of coronary artery disease Coronary angiography was performed for all patients by the same physician in the Cardiovascular Intervention Department using the Judkins method. Each blood vessel was subjected to projection at more than 3 positions, and the severity of disease and the number of diseased coronary vessels were evaluated independently. According to the angiography results, stenosis of over 50% was considered positive. The eight main vascular segments selected included the proximal right coronary artery, the middle right coronary artery, the proximal circumflex branch, the middle circumflex branch, the first diagonal branch, the middle anterior descending branch, the proximal anterior descending branch and the left main coronary artery. The coronary artery diseases were classified into single-vessel disease, double-vessel disease and multi-vessel disease. Score of coronary artery stenosis The degree of coronary artery stenosis was evaluated using the Gensini scoring system, based on the coronary angiography results. Laboratory examination The venous blood was drawn immediately after admission, placed for 30 min and centrifuged at 3,000 rpm for 15 min. The supernatant was harvested and stored in an ultra-low temperature refrigerator for later use. The content of serum hs-CRP and PAB was detected by professional technicians using an automatic electrochemiluminescence immunoassay analyzer (E601, Roche, USA) and a full-automatic biochemical analyzer, respectively, strictly in accordance with the instructions. From the same samples, the content of AMI-related indexes was also detected, including myoglobin (MYO), creatine kinase-MB (CK-MB), N-terminal pro-brain natriuretic peptide (NT-proBNP) and cardiac troponin I (cTnI). Follow-up The patients were followed up during hospitalization and via telephone for 2 years. The incidence of heart failure was recorded, and the patients with heart failure were subjected to Killip cardiac function grading (grade I-IV). Based on the content of hs-CRP, the patients in the observation group were divided into a high-concentration group and a low-concentration group. The survival time of the patients was recorded, and survival curves were plotted. Detailed records of follow-up, examinations and hospitalization were kept. Statistical analysis The data in this study were expressed as mean ± standard deviation. Statistical Product and Service Solutions (SPSS) 22.0 software (SPSS Inc., Chicago, IL, USA) was used for the data analysis.
The χ² test was performed for the intergroup analysis of categorical data, and one-way analysis of variance was used for the comparison among groups. A homogeneity of variance test was performed: Bonferroni's method was adopted for pairwise comparison in the case of homogeneity of variance, while Welch's method was adopted in the case of heterogeneity of variance. The survival status of patients was evaluated using Kaplan-Meier analysis, and the associations between indexes were assessed using Pearson correlation analysis. P < 0.05 suggested that the difference was statistically significant. General data of patients The general data of patients in the two groups were recorded in detail after enrollment. As shown in Table I, there were no statistically significant differences in age, gender, body mass index (BMI) or smoking history between the observation group and the control group (P > 0.05). The blood pressure and heart rate in the observation group were significantly higher than those in the control group (P < 0.05). Content of indexes related to severity of MI Fasting peripheral blood was drawn in the two groups to detect the content of MI-related indexes. As shown in Table II, the content of MYO, CK-MB, NT-proBNP and cTnI in the peripheral blood was significantly higher in the observation group than in the control group (P < 0.01). Content of serum hs-CRP and PAB Fasting peripheral blood was drawn in the two groups to detect the content of serum hs-CRP and PAB. The results showed that the observation group had markedly increased content of hs-CRP (P < 0.01), but markedly decreased content of PAB in the peripheral blood, compared with the control group (P < 0.01) (Table III). Analysis of correlations of serum hs-CRP with number of diseased coronary vessels, degree of stenosis and heart failure According to the Pearson correlation analysis (Figure 1), the content of hs-CRP in the peripheral blood of AMI patients was positively correlated with the number of diseased coronary vessels (r = 0.1627, P < 0.01), degree of stenosis (r = 0.4621, P < 0.01) and heart failure (r = 0.2126, P < 0.01). Analysis of correlations of PAB with number of diseased coronary vessels, degree of stenosis and heart failure According to the Pearson correlation analysis (Figure 2), the content of PAB in the peripheral blood of AMI patients was negatively correlated with the number of diseased coronary vessels (r = -0.1554, P < 0.01), degree of stenosis (r = -0.1951, P < 0.01) and heart failure (r = -0.1122, P < 0.01). Survival analysis Based on the content of hs-CRP in the peripheral blood, the patients in the observation group were divided into a high-concentration group and a low-concentration group. These patients were followed up for 2 years, and survival curves were plotted. As shown in Figure 3, the survival rate was far lower in the high-concentration group than in the low-concentration group (P < 0.05). Discussion The pathogenesis of AMI is that secondary thrombosis occurs due to coronary artery erosion or unstable plaque rupture, further causing coronary artery occlusion and myocardial ischemic necrosis (8). hs-CRP is a classic inflammatory marker, a reactive protein secreted by the liver upon stimulation by inflammatory factors such as IL-6 and IL-1, which can effectively reflect the degree of coronary sclerosis and the inflammatory process in the body. In the clinic, hs-CRP and myocardial injury markers are jointly used to diagnose AMI (9, 10). Ljuca et al.
(11) found that cardiovascular events such as ischemic cardiomyopathy and AMI can significantly raise the content of hs-CRP in the peripheral blood of patients. Hs-CRP can be used to effectively assess the inflammatory response in vivo and the severity of AMI, and it is closely related to the morbidity and mortality rates of cardiovascular events (12). Besides, PAB is an acute phase reactive protein able not only to effectively evaluate the synthetic ability of the liver, but also to reflect the severity of the inflammatory response in patients with cardiovascular disease (13). Matsunaga et al. (14) found that the content of PAB in the peripheral blood of patients with unstable angina pectoris is far lower than that in normal people. In this study, the content of hs-CRP and PAB in the peripheral blood of MI patients was detected. The results showed that the content of hs-CRP in the peripheral blood of MI patients was markedly higher than that in normal people, while the content of PAB showed the opposite pattern. The content of hs-CRP was closely related to the severity of MI, and its increase in the peripheral blood of MI patients significantly raised the mortality rate. Fu et al. (15) showed that the elevation of hs-CRP increases the incidence of adverse cardiovascular events and can cause vascular stenosis or stent thrombosis, and even the death of patients. In a large-scale retrospective study by Kalyoncuoglu et al. (16) on patients who died of cardiovascular events, it was found that hs-CRP is closely related to the mortality rate of patients with cardiovascular disease. Chi et al. (17) argued that hs-CRP has a close correlation with the degree of inflammatory response in the MI region. In this study, the correlations of the content of serum hs-CRP with the number of diseased coronary vessels, degree of stenosis and heart failure in MI patients were analyzed. The results showed that hs-CRP was positively correlated with the number of diseased coronary vessels, degree of stenosis and heart failure. High-concentration hs-CRP can lead to vascular intima injury, vasoconstriction and unstable plaque shedding, ultimately accelerating the occurrence and development of MI (18). Shimizu et al. (19) found that hs-CRP is currently one of the best markers for predicting MI. Furthermore, this study confirmed that the content of PAB in the peripheral blood of MI patients was negatively correlated with the number of diseased coronary vessels, degree of stenosis and heart failure. Wang et al. (20) found that the content of PAB in the peripheral blood is of important significance for predicting the occurrence and development of adverse events in MI patients, and that PAB can serve as an independent predictive factor for MI complicated with heart failure. Moreover, Wang et al. (21) found that the content of PAB can reflect the area of MI to a certain extent. Conclusions In conclusion, hs-CRP and PAB in the peripheral blood of MI patients are closely correlated with the number of diseased coronary vessels, degree of stenosis and heart failure, so they can be used as predictors of MI, providing a theoretical basis for the accurate clinical diagnosis of MI. Conflict of interest statement All the authors declare that they have no conflict of interest in this work.
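To make the two quantitative steps of this analysis concrete (the Gensini stenosis score and the Pearson correlation with serum hs-CRP), here is a minimal Python sketch. The severity points and segment multipliers follow the commonly cited Gensini scheme, but the paper does not list its exact weights, so both the weights and the per-patient values below are illustrative assumptions.

```python
# Hedged sketch: a Gensini-style score and its Pearson correlation
# with hs-CRP. Weights and patient values are illustrative only.
from scipy import stats

SEVERITY_POINTS = {25: 1, 50: 2, 75: 4, 90: 8, 99: 16, 100: 32}
SEGMENT_WEIGHTS = {  # assumed multipliers for the scored segments
    "left main": 5.0, "proximal LAD": 2.5, "mid LAD": 1.5,
    "first diagonal": 1.0, "proximal circumflex": 2.5,
    "mid circumflex": 1.0, "proximal RCA": 1.0, "mid RCA": 1.0,
}

def gensini_score(lesions):
    """lesions: iterable of (segment, percent stenosis) pairs."""
    total = 0.0
    for segment, stenosis in lesions:
        # round the stenosis up to the nearest scored severity band
        band = min(s for s in SEVERITY_POINTS if s >= stenosis)
        total += SEVERITY_POINTS[band] * SEGMENT_WEIGHTS[segment]
    return total

# Hypothetical per-patient hs-CRP (mg/L) and lesion lists.
hs_crp = [2.1, 5.4, 8.9, 3.3, 12.0, 7.5]
scores = [gensini_score([("mid RCA", 50)]),
          gensini_score([("proximal LAD", 75)]),
          gensini_score([("proximal LAD", 90), ("mid RCA", 75)]),
          gensini_score([("mid circumflex", 50)]),
          gensini_score([("left main", 75), ("proximal RCA", 90)]),
          gensini_score([("proximal circumflex", 75)])]

r, p = stats.pearsonr(hs_crp, scores)
print(f"Pearson r = {r:.4f}, P = {p:.4f}")
```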
2022-06-15T15:28:23.121Z
2022-06-12T00:00:00.000
{ "year": 2023, "sha1": "f415416125aafde6573143af001d7d338cc28785", "oa_license": "CCBY", "oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/1452-8258/2023/1452-82582301009Z.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3b190a5d80d201cf6ae8e13c1562f82d05b92902", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
218502336
pes2o/s2orc
v3-fos-license
Ideal electrical transport using technology-ready graphene Producing and manipulating graphene on a fab-compatible scale, while maintaining its remarkable carrier mobility, is key to finalizing its technological application. We show that a large-scale approach (CVD growth on Cu followed by polymer-mediated semi-dry transfer) yields single-layer graphene crystals indistinguishable, in terms of electronic transport, from micro-mechanically exfoliated flakes. hBN is used to encapsulate the graphene crystals, without taking part in their detachment from the growth catalyst, and to study their intrinsic properties in field-effect devices. At room temperature, the electron-phonon coupling sets the mobility to $\sim1.3 \times10^5$ cm$^2$/Vs at $\sim10^{11}$ cm$^{-2}$ concentration. At cryogenic temperatures, the mobility ($>6\times10^5$ cm$^2$/Vs at $\sim10^{11}$ cm$^{-2}$) is limited by the devices' physical edges, and charge fluctuations $<7\times10^9$ cm$^{-2}$ are detected. Under perpendicular magnetic fields, we observe early onset of Landau quantization ($B\sim50$ mT) and signatures of electronic correlation, including the fractional quantum Hall effect. This approach lends itself to the synthesis of large-scale arrays intended to populate pre-patterned photonic circuits in a back-end-of-line approach [20]. We separate the crystals from the catalyst by electrochemical delamination in NaOH aqueous solution [22], while supporting them via a polymeric membrane, allowing easy handling and deterministic placing in dry conditions over arbitrary substrates [20,23], SiO2/Si in this case (see Figure 1a). After cleaning in organic solvents, we obtain graphene crystals with a highly spatially-uniform Raman response, comparable to that of exfoliated flakes on the same substrate (negligible D peak, full width at half maximum of the 2D peak FWHM(2D) ~ 25-30 cm⁻¹, see black curve in Figure 1b), indicating excellent material quality (see Supporting Information file (SI), Figure S1, for more details) [24,25]. Our working hypothesis at this stage is the following: the electrical transport properties of devices based on these crystals are solely limited by the SiO2/Si substrate, rather than being affected by growth and transfer. To verify this decisive point, we proceed with encapsulation in hBN flakes (Figure 1a) using the pick-up sequence described by Purdie et al. [26]. Importantly, we do not make any selection among the transferred CVD-G crystals, randomly choosing the CVD-G/hBN contact area that undergoes the pick-up. An optical microscopy image of a typical hBN/CVD-G/hBN stack is shown in Figure 1c. As routinely observed in heterostructures of atomically thin crystals, a self-cleansing mechanism results in large blisters where the contaminants aggregate [27], separating flat areas where atomically sharp interfaces ensure the best electronic environment [28]. We conveniently identify such areas by scanning Raman spectroscopy, obtaining spectra such as the one shown in Figure 1b (dark cyan curve) and false-color maps such as the one in Figure 1d. The main parameter we monitor is FWHM(2D) [25] (see SI, Figure S2 for an analysis of other relevant peaks' parameters), which averages 16-17 cm⁻¹ over the regions that we chose for fabrication of edge-contacted back-gated Hall bars (see SI for details on the processing).
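FWHM(2D) is typically obtained by fitting a single Lorentzian to the 2D band, the standard lineshape for single-layer graphene. The sketch below illustrates such a fit on a synthetic spectrum; the peak position, width and noise level are assumptions chosen to resemble the values quoted above.

```python
# Sketch of FWHM(2D) extraction: single-Lorentzian fit to a
# synthetic Raman 2D band (illustration only).
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, fwhm, amp, offset):
    return amp * (fwhm / 2) ** 2 / ((x - x0) ** 2 + (fwhm / 2) ** 2) + offset

shift = np.linspace(2600, 2780, 400)              # Raman shift, cm^-1
rng = np.random.default_rng(0)
counts = lorentzian(shift, 2690, 17.0, 1000.0, 50.0) \
         + rng.normal(0.0, 10.0, shift.size)      # peak + noise

popt, _ = curve_fit(lorentzian, shift, counts, p0=(2690, 25, 800, 0))
print(f"Pos(2D) = {popt[0]:.1f} cm^-1, FWHM(2D) = {popt[1]:.1f} cm^-1")
```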
Figure 1e shows the statistical distribution of FWHM(2D) measured over the active channels of three such devices (D1-3, see SI Figure S3 for optical microscopy and atomic force microscopy images), proving high spatial uniformity and no relevant differences in their response to Raman scattering. In Figure 2a we show the resistivity (ρ) of D1-3 as a function of the back-gate voltage (Vbg, applied to the underlying p-doped Si substrate) measured at room temperature and in vacuum (p ~ 10⁻⁵ mbar; details on the measurement setup are given in the SI). The three devices show a narrow resistivity peak corresponding to charge neutrality, positioned at Vbg ≤ 0.5 V, indicating minimal residual hole doping, with a maximum ρ0 = 1.10-1.25 kΩ. Away from the neutrality region, ρ reduces to as low as 65 Ω (measured at Vbg = -30 V in D1). Although Vbg can be applied to D3 only over a limited range (±2 V), due to an exponentially increasing leakage current, the narrow resistivity curve allows performing most of the relevant quantitative analysis also for this sample. In Figure 2b we show a series of double-Log plots of the room-T conductivity σ = 1/ρ as a function of the charge carrier density n = Cbg(Vbg − V⁰bg)/e (where e is the electronic charge, V⁰bg is the gate voltage at the charge neutrality point, and the back-gate capacitance per unit area Cbg is determined by low-field Hall effect measurements, see Figure S4). We observe a linear Log(σ) vs Log(n) dependence, followed by a saturation when n approaches ~10¹⁰ cm⁻². By intersecting a fit to the linear part with a horizontal line set at the minimum conductivity σ0 = 1/ρ0, we estimate the charge carrier fluctuations n* for D1-3 to be within 3.2-4 × 10¹⁰ cm⁻². This range corresponds to the expected concentration of thermally excited carriers at room T [29], implying that any disorder-induced inhomogeneity stands below this intrinsic broadening. For the sake of comparison, Ref. [19] reported n* of this order only at cryogenic temperatures. In Figure 2c we show the mobility calculated according to the Drude model, µD = σ/(ne), as a function of n for D1-3 (the regions |n| < n* are excluded, since they correspond to a regime of coexisting electrons and holes [30]). As typically observed in high-quality hBN-encapsulated graphene [11], the curves show a plateau in the vicinity of 10¹¹ cm⁻², where we observe a device-independent µD ~ 1.2-1.3 × 10⁵ cm²/Vs; for comparison, note that Ref. [19] reported values lower by approximately a factor of two at equal n and T. At higher carrier density, µD decreases due to electron-phonon scattering, as modeled by Hwang and Das Sarma [5], whose theoretical curve is plotted as a dashed line in Figure 2c and represents a widely accepted upper bound for ideal environmentally-isolated single-layer graphene. In this sense, our data closely resemble the "textbook" ones reported by Wang et al. [11], which were obtained with exfoliated flakes. The recent findings of µD exceeding this limit in WSe2-covered CVD-G [31] obviously cannot be compared to our results due to the different dielectric material employed; nevertheless, the reference hBN-encapsulated CVD-G devices reported there [31] show inferior µD with respect to D1-2 over the whole n range considered.
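A worked sketch of the two quantities used above, the gate-induced carrier density n = Cbg(Vbg − V⁰bg)/e and the Drude mobility µD = σ/(ne). The gate voltage and sheet resistivity are hypothetical, chosen to land near the room-T plateau; Cbg is the Hall-calibrated value from the SI.

```python
# Worked example: carrier density from the back-gate and Drude mobility.
# Illustrative inputs; only C_bg is taken from the text.
e = 1.602e-19        # C
C_bg = 0.97e-8       # F/cm^2, Hall-calibrated back-gate capacitance (SI)
V_bg, V0 = 2.0, 0.1  # V: hypothetical gate voltage and neutrality point

n = C_bg * (V_bg - V0) / e      # carriers per cm^2  -> ~1.15e11
rho = 480.0                     # ohm per square, illustrative
mu_D = (1.0 / rho) / (n * e)    # cm^2 / Vs, ~1.1e5 (near the plateau)
print(f"n = {n:.2e} cm^-2, mu_D = {mu_D:.2e} cm^2/Vs")
```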
Additionally, in the SI (Figure S5) we show that σ(n) for D1-2 is well described by the relation σ⁻¹ = (neµL + σ0)⁻¹ + ρs, where µL is a density-independent mobility (given by long-range scattering) and ρs is a constant resistivity offset (due to short-range scattering) [7]. We obtain (independently of the device) µL = 1.5 × 10⁵ cm²/Vs (1.2 × 10⁵ cm²/Vs) for electrons (holes), roughly matching the µD plateau values, and ρs = 46 Ω (36 Ω) for electrons (holes), which corresponds to the expected magnitude of the resistivity due to electron-phonon coupling in ideal graphene [6]. At cryogenic temperature, the three devices show slightly different characteristics, which reflect slight differences in the electrostatic disorder. To quantify this variability, we again employ double-Log σ(n) plots (Figure 2e) and estimate n* = 6.6 × 10⁹ to 1.9 × 10¹⁰ cm⁻² for the three devices. To the best of our knowledge, n* values in the 10⁹ cm⁻² range (obtained for D2-3), indicating extremely low potential fluctuations, have not been reported previously for CVD-G. Moreover, the device structure employed here is quite simple and does not include single-crystalline graphite gates, which would further reduce n* by screening of remote disorder [32]. In Figure 2f we plot µD for D1-3 as a function of n (solid lines, excluding the regions |n| < n*), together with µD = 4eW/(h√(πn)) (dashed lines, where h is the Planck constant), the expected carrier-dependent mobility for ballistic transport over a distance W [33], which we set equal to the devices' widths (2.5 μm, 2 μm and 3 μm for D1, D2 and D3, respectively). This functional dependence captures very well the behavior of the samples at large n, indicating that the devices' finite dimensions represent the primary limitation to the carriers' motion. In the low-density range, we observe a slight electron-hole asymmetry, with the highest mobility reached at |n| ~ 10¹¹ cm⁻², where µD starts to approach the size-limited curves (see Figure 2f inset). The peak values for D1-3 (averaged over a finite n interval to account for fluctuations in the resistance signals) are in the range 4.1-6.6 × 10⁵ cm²/Vs (2.1-3.6 × 10⁵ cm²/Vs) for electrons (holes), and identify our devices as the highest-performing CVD-G-based devices to date. In Figure 3 we analyze our findings by studying the correlation between mobility and charge fluctuations. We do so by considering both µD(~10¹¹ cm⁻²) (triangles) and the field-effect mobility defined as µFE = (dσ/dn)/e (circles, where the slope dσ/dn is obtained over the linear regions visible in Figures 2b and 2e), both for electrons and holes (filled and empty symbols). µD and µFE, values of mobility estimated via two different methods in a similar carrier density range, show reasonable agreement over the whole plot, corroborating the discussion above, which is based on the Drude mobility (µD). The data corresponding to measurements at 300 K collapse in a very narrow region, pointing at a universal behavior, i.e. determined solely by thermal broadening and insensitive to the sample details. When the devices are cooled to 4.2 K, the data show a more marked scattering, with D2-3 clearly positioned at higher µ and lower n* with respect to D1, reflecting the lower level of disorder. The overall behavior is well described by the relation µ⁻¹ ∝ n* by Couto et al. [34] (dashed line in Figure 3), as generally accepted for high-quality graphene on substrates.
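Both models above can be evaluated in a few lines. The sketch below fits the long/short-range relation to synthetic σ(n) data and evaluates the width-limited ballistic mobility for a 2.5 µm bar; all inputs are illustrative, and the ballistic expression reproduces the ~4 × 10⁵ cm²/Vs scale quoted for the peak mobilities.

```python
# Sketch of the two models: sigma^-1 = (n e mu_L + sigma_0)^-1 + rho_s
# and the width-limited ballistic mobility mu = 4 e W / (h sqrt(pi n)).
import numpy as np
from scipy.optimize import curve_fit

e, h = 1.602e-19, 6.626e-34

def sigma_model(n_cm2, mu_L, rho_s, sigma0):
    return 1.0 / (1.0 / (n_cm2 * e * mu_L + sigma0) + rho_s)

n = np.linspace(2e10, 3e12, 50)                  # cm^-2
sigma = sigma_model(n, 1.5e5, 46.0, 8e-4)        # synthetic, from the model
popt, _ = curve_fit(sigma_model, n, sigma, p0=(1e5, 30.0, 1e-3))
print(f"mu_L = {popt[0]:.2e} cm^2/Vs, rho_s = {popt[1]:.1f} ohm")

W, n0 = 2.5e-4, 1e11                             # cm, cm^-2 (D1-like bar)
mu_ball = 4 * e * W / (h * np.sqrt(np.pi * n0))  # cm^2/Vs, ~4.3e5
print(f"ballistic-limit mobility: {mu_ball:.2e} cm^2/Vs")
```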
To further support the observation of ultra-high carrier mobility in CVD-G, we measure the transport properties of D3 at T = 0.3 K, in the presence of a perpendicular magnetic field B. In Figure 4a we show a false-color map of the longitudinal conductivity σxx as a function of Vbg and B (up to 200 mT), where a typical fan of Landau levels (LLs) can be appreciated. The condition to observe this phenomenology is governed by the competition between the cyclotron gap separating the LLs, Δc ~ 400 √B(T) (√|N| − √|N−1|) K (where N is the LL index and B is in tesla), and the disorder-induced level broadening Γ = ħ/(2τq), where τq is the so-called quantum scattering time, which quantifies the carriers' scattering in the presence of B. When a large enough field Bonset is applied, Δc equals Γ and the conductivity begins to oscillate, thus composing the fan-shaped diagram. In D3 we observe Bonset as low as ~50 mT for the first oscillations at filling factor ν = nh/(eB) = ±2, while it increases to ~100 mT for larger fillings (see light-red-to-white colored areas). In Figure 4b we show τq as a function of n, extracted from the onset field of the oscillations (see SI, Figure S6 for details on the determination of Bonset). Close to charge neutrality, we observe τq < 0.1 ps due to residual disorder in the n* region, with a marked growth to ~0.15 ps at higher density. To the best of our knowledge, the largest τq reported for graphene is 0.3 ps by Zeng et al. [35], who made use of exfoliated graphene, hBN encapsulation, and top and bottom graphite gates. The ×2 factor over our values (which, interestingly, corresponds to the difference in n* between D3 and their sample) can be mostly ascribed to the screening effect of the single-crystalline gates. Restricting our comparison to CVD-G only, Ref. [17] reported Bonset = 400 mT at T = 1.6 K, while Ref. [19] showed resolved LLs at 1.8 T and 9 K. In Figure 4c we show that the low-field oscillations lead to a fully developed quantum Hall effect already at B = 200 mT, with zeroes in σxx accompanied by a quantized Hall conductivity σxy following the half-integer sequence of single-layer graphene [3,4]. These observations prove ultra-low LL broadening and suggest the possibility of accessing correlation-driven phenomena making use of CVD-G. In Figure 4d we show additional magnetotransport data on D3 (up to 12 T), where, starting from ~1 T, interaction-induced broken-symmetry states [36] are observed at ν = -3, -1 and 0. At charge neutrality (ν = 0) the sample becomes fully insulating for B > 3 T (Figure 4e), as expected for an interaction-induced spin-valley antiferromagnet [37]. Along filling factor ν = -1/3 (gray dashed line in Figure 4d), we observe a zero-resistance region for B ≥ 8.5 T, indicating a fractional quantum Hall (FQH) state [38,39]. In addition to vanishing longitudinal resistance, quantum Hall states result in plateaus at |ν| × e²/h in the two-terminal conductance G = ISD/VSD and at ν × e²/h in σxy. Our measurements show plateau-like features, where G ~ 1/3 × e²/h (Figure 4f), while σxy stands far from the expected value (Figure 4f, inset). In the SI we consider possible origins for this discrepancy and discuss how, in addition to higher magnetic field sources, more specialized device structures are likely needed for a thorough investigation of FQH in CVD-G. Nevertheless, FQH states require ultra-clean two-dimensional electronic systems [40], among which hBN-encapsulated CVD-G is to be included.
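The τq estimate can be illustrated numerically: at the onset field the cyclotron gap (using the text's ~400 √B K approximation) equals Γ = ħ/(2τq). The 50 mT and 100 mT onset fields below are taken from the text; assigning them to the N = 1 and N = 3 gaps is an assumption for illustration.

```python
# Worked example of the quantum-lifetime estimate: at B_onset,
# Delta_c ~ 400*sqrt(B)*(sqrt|N| - sqrt|N-1|) K equals hbar/(2*tau_q).
hbar = 1.055e-34   # J s
k_B = 1.381e-23    # J / K

def tau_q(B_onset_T, N=1):
    delta_c_K = 400.0 * B_onset_T**0.5 * (abs(N)**0.5 - abs(N - 1)**0.5)
    return hbar / (2.0 * k_B * delta_c_K)   # seconds

print(f"tau_q at 50 mT (N=1):  {tau_q(0.050, 1) * 1e12:.3f} ps")  # ~0.04
print(f"tau_q at 100 mT (N=3): {tau_q(0.100, 3) * 1e12:.3f} ps")  # ~0.10
```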
In conclusion, we presented unprecedented electrical transport performance for CVD-G. We synthesize and transfer single-layer graphene crystals using technology-ready approaches, and subsequently make use of hBN encapsulation to investigate their intrinsic electronic response. Our devices mimic the transport properties of those based on micro-mechanically exfoliated flakes, including room-T mobility exceeding 10⁵ cm²/Vs, onset of Landau quantization at ultra-low field, and signatures of FQH. While preparing this manuscript, we became aware that CVD-G detached from Cu via hBN-mediated dry pick-up shows, under high magnetic fields, similar evidence of FQH [41]. In addition to the ideal CVD-G quality, our work highlights a major weakness in the status of technological application of graphene, i.e. the lack of a large-scale analogue of hBN single crystals. Although large-area few-layer hBN can be synthesized by CVD on metals, such material does not provide adequate environmental screening, resulting in poor graphene mobility compared to devices employing exfoliated hBN flakes [42]. Regarding the use of CVD-G in fundamental research topics, this would require several improvements over the samples presented in this work. Apart from modifications in the device structure, increasing the size of the processable (bubble-free) regions within the hBN/CVD-G/hBN heterostructures is a clear priority. Engineering clean interfaces over large areas in CVD-G-based hetero-stacks is also of great technological relevance and might benefit from assembly in vacuum conditions [43] and post-assembly thermal and/or nano-mechanical treatment [29]. FIGURES (partial captions recovered from the figure pages). Figure 2: the dashed lines correspond to ballistic transport limited by the widths of the Hall bars [33]; the inset shows separate enlarged views of the main panel for hole and electron doping, with Log scale on the x axis, highlighting the peak region of µD; the shaded rectangle indicates the averaging interval (0.75-1.25 × 10¹¹ cm⁻²) used to calculate the values of µD reported in the text and in Figure 3. Figure 3: the circles are field-effect mobility (the error bars are from the linear fits to σ(n)); the filled (empty) symbols are for negatively (positively) charged carriers; the dashed line is a model from Ref. [34], describing an inverse proportionality between µ and n*. Supporting Information for "Ideal electrical transport using technology-ready graphene". CVD growth We synthesize the graphene single crystals on electropolished Cu by chemical vapor deposition (CVD) in a commercial reactor (Aixtron 4" BM-Pro), set at p = 25 mbar and T = 1060 °C. The Cu foil is annealed in Ar flow for 10 minutes at T = 1060 °C. The growth takes place for 15 minutes in 90% Ar, 10% H2 and 0.1% CH4. A quartz enclosure controls the gas flow on the sample [21], limiting the nucleation density. Raman spectroscopy of CVD-G and hBN/CVD-G/hBN We use scanning Raman spectroscopy to characterize the samples based on CVD-grown graphene (CVD-G). We employ a Renishaw InVia confocal spectrometer equipped with a 100× objective, with laser light at 532 nm wavelength, at ~1 mW laser power. The Si peak at 520 cm⁻¹ is used to calibrate the spectra. Figure S1. Raman correlation plots for CVD-G after transfer to SiO2/Si. In Figure S1 we show four plots correlating the main Raman parameters, measured on a CVD-G crystal from the same growth batch as the ones used for the devices discussed in the main text, after transfer to SiO2/Si (200×200 μm² area).
Pos(G) averages at 1584.2 cm⁻¹, indicating minimal uniaxial (biaxial) strain of ~0.05% (0.02%), calculated considering the influence of the finite doping level (see below) [S1]. The average FWHM(2D) is 27.7 cm⁻¹, consistent with high-quality single-layer graphene on this substrate [S2], which is corroborated by the field-effect electrical transport measurements (main text Figure 2). Device fabrication We process the hBN/CVD-G/hBN samples by e-beam lithography, reactive ion etching and thermal evaporation of metals. We first pattern the Hall bar mesa and etch the samples in CF4/O2. A second PMMA mask, followed by metal evaporation and lift-off, is used to define the electrical contacts (Cr/Au 5/70 nm), which connect to the CVD-G via the exposed edges of the heterostructure [11]. The devices are glued on dual-in-line chip carriers using Ag conductive paste and wire-bonded with Al wires. Gate-dependent Hall effect and estimate of the gate capacitance To determine the back-gate capacitance per unit area (Cbg) of the devices, we perform Hall effect measurements as a function of the back-gate voltage in the presence of |B| = 0.5 T, oriented perpendicular to the sample plane. We measure Rxy(Vbg) for positive and negative field orientations, and antisymmetrize the data according to Rxy = (Rxy(+B) − Rxy(−B))/2, to eliminate longitudinal components due to slight contact misalignment, obtaining curves such as the one shown in Figure S4, left panel. Since the carrier concentration is given by n = (B/e) × (1/Rxy) = Cbg(Vbg − V⁰bg)/e, Cbg can be obtained from a linear fit to 1/Rxy as a function of (Vbg − V⁰bg), as shown in Figure S4, right panel. For D2, we estimate Cbg = 0.97 × 10⁻⁸ F/cm². Room temperature conductivity for D1-2 Figure S5. Room temperature conductivity as a function of carrier density for D1-2 (gray and red solid lines, respectively). The black dashed lines are fits to the data using the relation σ⁻¹ = (neµL + σ0)⁻¹ + ρs [7]. The fit is performed separately for holes and electrons and excludes the saturation region close to n*. The fitting parameters µL and ρs for the two devices are within 3% of the values given in the main text. Estimate of the Landau quantization onset In order to extract the quantum scattering time (data in Figure 4b), we quantify Bonset for each oscillation of the low-field LL fan (Figure 4a). Here we use the ν = -6 oscillation as an example. We first inspect zoomed-in parts of the conductivity false-color map to estimate an onset region (see Figure S6, left). This provides us with an interval of densities over which this specific oscillation starts to be observable, from -1.86 to -2.16 × 10¹⁰ cm⁻² in this case. For each density measured in such an interval, we look at the longitudinal resistance as a function of the magnetic field (Figure S6, right, top panel) and identify the oscillation minimum corresponding to the filling factor of interest (-6 in this case). To quantitatively address the onset, we consider the first derivative of the resistance (bottom panel, calculated after smoothing the resistance signal) and identify the onset field as the point of maximum negative slope in dRxx/dB before the oscillation minimum. The onset field values are then averaged over the density interval to obtain Bonset, and τq is calculated. Imperfect quantization in the FQH regime As reported in the main text, we do not measure a precisely quantized value of σxy at ν = -1/3.
Since field-symmetrized data at ±12 T do not provide a substantial improvement, we exclude contact misalignment as the origin of this discrepancy. More likely, the lack of quantization is due to two factors related to the simple Hall bar geometry. The first is the roughness of the etch-defined edges, which, despite common knowledge on topologically protected phases, can strongly influence the edge states' transport: edge-free geometries [35, S5] or electrostatically-defined channels [S6] can mitigate this issue. Additionally, and possibly more importantly, perfect equilibration of the edge states at the metallic leads is required in order to observe quantization of σxy. Optimal equilibration is difficult to achieve in devices such as ours, where a single global back-gate controls the carrier density both in the channel and in the contact regions. Maher et al. [S7] showed that a local bottom-gate geometry allows tuning the sample to low filling factors, while keeping highly doped and efficient contacts via the Si back-gate. This strategy results in precise quantization in the fractional quantum Hall regime, which is absent otherwise. Implementing these advancements in the device fabrication should facilitate a complete establishment of FQH in CVD-G, of which our current data provide preliminary evidence.
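As a rough illustration of the two SI procedures above (the Hall-based gate-capacitance calibration and the derivative-based onset-field criterion), the following sketch runs on synthetic data; apart from Cbg = 0.97 × 10⁻⁸ F/cm², taken from the text, all numbers are assumptions.

```python
# Sketches of the two SI procedures, on synthetic data.
# (1) Cbg from the Hall slope: 1/Rxy = (Cbg/B)(Vbg - V0).
# (2) B_onset from the most negative dRxx/dB before the minimum.
import numpy as np
from scipy.signal import savgol_filter

e = 1.602e-19  # C

# --- (1) Cbg extraction ------------------------------------------------
B = 0.5                          # T
Cbg_true = 0.97e-8               # F/cm^2, value reported for D2
Vbg = np.linspace(0.5, 3.0, 20)  # V, away from charge neutrality
V0 = 0.1
n_m2 = Cbg_true * (Vbg - V0) / e * 1e4   # carrier density in m^-2
Rxy = B / (e * n_m2)                     # Hall resistance, ohm
slope = np.polyfit(Vbg - V0, 1.0 / Rxy, 1)[0]
print(f"Cbg = {slope * B * 1e-4:.2e} F/cm^2")   # recovers 0.97e-8

# --- (2) B_onset from dRxx/dB ------------------------------------------
Bax = np.linspace(0.01, 0.20, 400)
# Synthetic Shubnikov-de Haas-like trace: damped oscillation in 1/B.
Rxx = 500 + 200 * np.exp(-0.05 / Bax) * np.cos(2 * np.pi * 0.02 / Bax)
Rxx_s = savgol_filter(Rxx, window_length=21, polyorder=3)
dR = np.gradient(Rxx_s, Bax)
i_min = np.argmin(Rxx_s)             # the oscillation minimum
i_onset = np.argmin(dR[:i_min])      # steepest descent before it
print(f"B_onset ~ {Bax[i_onset] * 1e3:.0f} mT")
```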
2020-05-06T01:01:04.045Z
2020-05-05T00:00:00.000
{ "year": 2020, "sha1": "359cdce5b38ffa39a05bf34e298554b2cbb9fa7d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "359cdce5b38ffa39a05bf34e298554b2cbb9fa7d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
4106958
pes2o/s2orc
v3-fos-license
Prevalence, severity and early outcomes of hypoxic ischemic encephalopathy among newborns at a tertiary hospital, in northern Tanzania Background Hypoxic Ischemic Encephalopathy (HIE) remains a problem of great concern worldwide, especially in developing countries. The occurrence of a neurological syndrome can be an indicator of insult to the brain. We aimed to determine the prevalence, HIE proportions, neurological signs and early outcomes of newborns that developed birth asphyxia at KCMC Tanzania. Methods A prospective study was conducted at KCMC from November 2014 to April 2015 among newborns with birth asphyxia. The Sarnat and Sarnat score was used to assess newborns immediately after birth to classify HIE, and they were later followed daily for 7 days or until discharge. Results Of the 1752 deliveries during the study period, 11.5% (n = 201) had birth asphyxia. Of the 201 newborns, 187 had HIE. Of these 187 with HIE, 39.0% had moderate HIE and 10.2% had severe HIE according to the Sarnat and Sarnat classification. The neurological signs observed during the study period were weak/absent reflexes (46.0%), hypotonia (43.3%) and lethargy (42.2%). Mortality was 9.1% among the 187 newborns with HIE. Mortality was higher among newborns with severe HIE (84.2%, 16/19) compared to those with moderate HIE (1.4%, 1/73). On the 7th day after delivery, 17.1% (32/187) of the newborns did not show any change from the initial score at delivery. Conclusion The prevalence of birth asphyxia is high in our setting, and nearly half of the newborns (49%) end up with moderate/severe HIE. Good obstetric care and immediate resuscitation of newborns are vital in reducing the occurrence of HIE and improving the general outcome of newborns. Background Birth asphyxia remains a major cause of morbidity and mortality in neonates. Hypoxic ischemic encephalopathy affects the tissues of the body and can lead to permanent brain damage; it results from lack of oxygen before, during or after birth [1]. The Countdown to 2015 report estimated that neonatal deaths accounted for 45% of the 5.9 million child deaths that occurred in 2015 globally. Birth asphyxia is responsible for a large number of neonatal deaths, second only to preterm births [2]. Perinatal asphyxia is thus a serious problem for child survival globally, more so in developing countries. Apart from increased mortality, perinatal asphyxia results in serious neurological consequences ranging from cerebral palsy and mental retardation to epilepsy [3]. Reaching the 2030 sustainable development goal of reducing preventable neonatal mortality to less than 12 deaths per 1000 live births requires research and evidence-based interventions that target the neonatal period [4]. Deaths due to perinatal asphyxia cause shock and pain to the mother and the family at large. However, they can sometimes be avoided by close monitoring during the birth process, with several assessments which include umbilical pH, blood gas at 1 h post delivery, Apgar scores and neurological changes ranging from twitching and hypotonia to seizures [3]. An Apgar score at 5 min provides useful prognostic data before other evaluations are available. Low Apgar scores at 1, 5 and 10 min have been found to be markers of a possible increased risk of death or chronic motor disability [3]. Neonatal encephalopathy preceded by perinatal hypoxic ischemic insult is a main contributor to global child mortality and morbidity.
Brain injury in infants is a process that evolves over hours to days, providing an opportunity for neuro-protective interventions. Advances in neuroimaging, techniques for monitoring the brain, and tissue biomarkers have improved the ability to diagnose, monitor and care for newborn infants. However, challenges remain in the availability of these imaging modalities in low-income settings like Tanzania [5]. The gap between developed and developing countries with regard to neonatal mortality remains wide. A child born in a developing country is 14 times more likely to die during the first 28 days of life than a baby born in a developed country, with sub-Saharan Africa and South East Asia being most affected [6]. This could be attributed to the lack of investigative and monitoring modalities. The aim of this study was to determine the prevalence, describe the severity and assess the early neonatal outcomes of hypoxic ischemic encephalopathy among newborns with birth asphyxia (Apgar score < 7 at 5 min) at KCMC referral hospital, born between November 2014 and April 2015. Study design and area This was a prospective study which enrolled newborns immediately after delivery and followed them daily for a period of 7 days after delivery. The study was conducted at the labor ward, and newborns were followed up at the neonatal ward (P3), at KCMC referral hospital situated in northern Tanzania. KCMC is a teaching and referral hospital in Moshi urban district. As a zonal referral center, KCMC receives normal and complicated cases from the local community in the Kilimanjaro region, from nearby regions in northern Tanzania, and from nearby districts in Kenya. The neonatal ward at KCMC hospital is a modest six-room ward. Of these, 3 rooms are for acute cases, with approximately 60 cots, whilst the other 3 rooms have 15 beds, mainly for recovering newborns awaiting discharge. The neonatal ward admits a considerable number of neonates per year, with most of those delivered being from within the facility. Like most units in SSA countries, the unit does not have high-tech advanced diagnostic facilities. Babies with asphyxia, as assessed by the Sarnat and Sarnat staging, are usually kept in the cool area of the ward where heaters are not switched on. Management offered is usually observation, oxygen therapy depending on the severity of hypoxia, and prevention of sepsis. Blood pH, mechanical ventilation, electroencephalogram (EEG), brain imaging and therapeutic cooling/hypothermia are not available in this setting. Study population, sampling and enrollment procedure The study population was all newborns seen at KCMC between November 2014 and April 2015 who had an Apgar score < 7 at the 5th minute. Newborns with birth asphyxia but with obvious congenital malformations were excluded from the study. The formula for precision by Leslie Kish was used to calculate the sample size [7]. The minimum sample size required was 327, based on the prevalence of HIE of 30.9% observed in Dar es Salaam [8], a confidence interval of 95% and an error set at 5%. At the labor ward, a standardized data extraction sheet was used to collect information from the patients' files and records of the birth registry. The information collected covered maternal demographic information, gestational age at delivery based on the last menstrual period, duration of labor, mode of delivery, birth complications, birth weight, Apgar score and the occurrence of signs of HIE.
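The sample-size calculation above follows directly from the Kish formula for a proportion, n = z²p(1 − p)/e²; the snippet below reproduces the reported minimum up to rounding of z.

```python
# Worked example of the Leslie Kish sample-size formula:
# n = z^2 * p * (1 - p) / e^2, with p = 0.309, 95% CI and 5% error.
z, p, err = 1.96, 0.309, 0.05
n = z**2 * p * (1 - p) / err**2
print(round(n))   # ~328, matching the reported minimum of 327 up to rounding
```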
The Sarnat and Sarnat classification (Table 1) was used to record neurological signs and classify the severity of HIE of the newborns with birth asphyxia immediately after birth, and daily up to the 7th day after birth or until discharge from the neonatal ward. Data processing and analysis The data collected were entered and analyzed using the Statistical Package for the Social Sciences (SPSS) version 20. Categorical data were summarized using proportions, while means and medians with their measures of dispersion were used to summarize continuous data. Categorization of severity of HIE Newborns who were born with birth asphyxia were classified according to the Sarnat and Sarnat scoring criteria for HIE. The tool has 8 items which are assessed. Neonates scoring 1-10 were classified as having mild HIE, 11-14 as having moderate HIE, while those scoring 15-22 were classified as having severe HIE. The highest score obtained on any of the 7 days of follow-up was used to assess the severity of birth asphyxia. Ethical approval Approval to conduct the study was obtained from the KCMUCo Research and Ethics Committee. Permission to conduct the study was also obtained from the Executive Director of KCMC Hospital and the heads of the departments of Pediatrics and of Obstetrics and Gynecology. Consent was obtained from the mothers. All newborns were managed in the neonatal ward following the standardized protocol for the unit. Prevalence of birth asphyxia A total of 1752 newborns were delivered at the labor ward during the study period (November 2014 to April 2015). Of these newborns, 201 (11.5%) had birth asphyxia, defined as an Apgar score of less than 7 at 5 min after birth. Of the 201 newborns with asphyxia, 187 had hypoxic ischemic encephalopathy (HIE), giving a prevalence of 10.7% (187/1752). These 187 neonates with HIE were followed up daily for 7 days, or until discharge if it happened before 7 days. Characteristics of the participants The median gestational age at delivery of the 187 newborns was 38 weeks (range 27-43). Their median birth weight was 2900 g (range 1100-5000). More than half (52.9%) of the participants were female, and most were delivered by cesarean section (70.1%), see Table 2. Of the 131 cesarean sections, 96% were emergencies. Severity of HIE According to the Sarnat and Sarnat classification, the majority of the 187 newborns with HIE had mild HIE (50.8%, n = 95), and 10.2% (n = 19) had severe HIE, Table 3. Common neurological signs observed Figure 1 shows the common clinical signs that were observed in the newborns with HIE. Weak/absent reflexes (46.0%), hypotonia (43.3%) and lethargy (42.2%) were the most commonly observed signs; among the least common were seizures (8.6%). Neonates with moderate HIE and premature newborns were still hospitalized by the 7th day, with no response or change in HIE status compared to the others, Table 4. Discussion In this study, 11.5% of the neonates had birth asphyxia and 10.7% were found to have HIE. The prevalence of HIE observed is lower than the findings of a study done in Tanzania at Muhimbili National Hospital in 2007, where the prevalence was found to be 30.9% [8]. It is, however, higher than in studies done in high-income countries: in Spain the prevalence of HIE was found to be 2.42 per 1000 infants between 2000 and 2008 [9]. Birth asphyxia is among the leading causes of death in the neonatal period, and in Tanzania it contributes to 23% of neonatal deaths. Apart from mortality, neonates who develop HIE have a risk of serious long-term neuro-motor sequelae, like cerebral palsy and epilepsy, among the survivors.
Hence, monitoring of mothers in labor and of newborns with HIE should be strengthened in our setting. Most newborns who had HIE in this study had the mild form (50.8%). This is similar to findings from South India, where the proportion of newborns with mild HIE was 56% [10], and from another study conducted at the Rotunda hospital in Ireland, where of the 237 newborns assessed, 65.4% had the mild form of HIE [11]. Of note to practitioners, the majority (98%) of newborns with mild HIE in this study improved and were discharged, similar to observations at Muhimbili National Hospital in Tanzania and in Pakistan and India, respectively, where a total of 92.3% of the newborns were discharged to their mothers [5,10,12]. This indicates that with appropriate care, newborns with HIE can improve sufficiently to be discharged early. During the study period, the most commonly occurring neurological signs were weak/absent reflexes, hypotonia and lethargy. This compares closely with the findings from Liaquat University Hospital in Pakistan, where among the most observed signs were depressed neonatal reflexes, lethargy and pupillary abnormalities [12]. However, in Besat hospital in Iran, seizures (9.1%) were a common presenting sign, unlike in this study where only 8.6% of the participants had seizures, this being among the least frequently occurring signs [13]. The presence of these signs should alert health care providers at all levels to monitor newborns closely or to make timely referrals from lower-level health facilities. Overall mortality observed among newborns with HIE was 9.1%. This finding is in line with observations of previous studies: at Mulago Hospital in Uganda 12.9% [14], in Cameroon 10% [6], at a tertiary hospital in Johannesburg, South Africa 14.3% [3], at Liaquat teaching hospital in Pakistan 15% [12] and at Ayub Teaching hospital in Pakistan 16% [1]. Mortality was high among LBW newborns and those with severe HIE. This means strategies to reduce low birth weight should be reinforced. Furthermore, improving the quality of labor monitoring cannot be overemphasized, in order to prevent intrapartum-related asphyxia. Nearly 4 in 10 newborns with moderate HIE had not responded by the 7th day in the ward, as had 41% of LBW and 46% of preterm newborns. By the seventh day of follow-up, overall 17.1% (32) of the newborns had shown no response compared to the initial assessment. The risk of long-term sequelae is high in these 3 groups, and this shows the need for long-term follow-up of children with HIE. Limitations Inclusion of preterm babies could have influenced some of the findings, especially the neurological signs and mortality. Clinical assessment alone was used to categorize the severity of HIE, which does not give a complete picture of the effect of hypoxia on the brain. Future studies should complement clinical assessment with EEG or CT scans for complete monitoring and evaluation of newborns with HIE. Conclusions The overall prevalence of hypoxic ischemic encephalopathy at KCMC tertiary hospital was found to be 10.7%. Most newborns presented with the mild form of HIE (50.8%) and were discharged to their mothers during the study period. Mortality during the first week of life was 9.1%, highest among neonates with the severe form of HIE and with low birth weight. By the 7th day, approximately 4 out of 10 neonates with moderate HIE, or who were preterm or LBW, did not show any improvement in response to the treatment offered.
We recommend the widespread use of the Sarnat and Sarnat classification chart for monitoring newborns with asphyxia in low-income settings where sophisticated monitoring is not possible. Essential practices to improve the monitoring of labor (before and during delivery) are needed, as well as greater emphasis on appropriate care of newborns with HIE. Limitations Use of the Sarnat and Sarnat score, as compared to other scores that have been clinically validated, may have led to either overestimation or underestimation of the prevalence.
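Since the study's grading hinges on the score bands defined in the methods, a minimal sketch of that categorization is given below; the function simply encodes the 1-10 / 11-14 / 15-22 bands stated above.

```python
# Minimal sketch of the Sarnat and Sarnat categorization used in the
# study, mapping a total score (8 assessed items) to an HIE grade.
def hie_grade(score: int) -> str:
    if 1 <= score <= 10:
        return "mild"
    if 11 <= score <= 14:
        return "moderate"
    if 15 <= score <= 22:
        return "severe"
    raise ValueError("score outside the 1-22 range of the tool")

print(hie_grade(12))  # -> moderate
```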
2017-06-28T04:38:42.191Z
2017-05-25T00:00:00.000
{ "year": 2017, "sha1": "6722dd02dcc2cbcf4a1d16879deddd989e17a8a5", "oa_license": "CCBY", "oa_url": "https://bmcpediatr.biomedcentral.com/track/pdf/10.1186/s12887-017-0876-y", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6722dd02dcc2cbcf4a1d16879deddd989e17a8a5", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
248973696
pes2o/s2orc
v3-fos-license
A Contrastive Rhetoric Study of Persuasion in TED Talks Narratives Since 1984 thousands of stories have been told on stage on the platform known as TEDx or TED Talks. These are inspiring stories covering diverse areas of life, meant to persuade the audience toward better well-being. The present paper investigates the powerful persuasive features present in twenty randomly selected TED Talks narratives: ten American English and ten Egyptian Arabic narratives. The paper employs Cockcroft and Cockcroft's model of persuasion (2013) with its tripartite division into Aristotle's Ethos, Pathos and Logos. The contrastive analysis is done within Marc Alexander's (2009) adapted version of Mann and Thompson's Rhetorical Structure Theory (1988), which best suits the data under investigation. The paper adds more items under the presentational and subject matter relations introduced by Mann and Thompson's RST, so that more types of utterances are easily identified and categorized. It also compares and contrasts the techniques used in both languages to examine the types employed for the persuasion of the two different types of audience. Objectives of the Study: The study aims at comparing and contrasting ten English TED Talk narratives to ten Arabic narratives (the narratives of each language totaling approximately 130 minutes) to see the similarities and differences in the structure and persuasive techniques of each Talk. The research attempts to test whether the diverse issues tackled by multi-cultural speakers entail the employment of different persuasive techniques. A thorough analysis of the narratives is done on multiple levels: beginning from the coherence of the narratives, going through the choice of words, and ending with the persuasive techniques employed. Review of Literature: Chang (2015) examined the rhetorical structure of talks from TED conferences to explore the possibility of their being incorporated into the instruction of oral presentation in English-language classrooms. The analysis identified seven major move types (and their respective component steps) and established a genre prototype based on move frequencies, lengths, associations, and patterns of occurrence. Sallomi and Nayel (2017) presented a paper addressing the persuasive techniques used in both English and Arabic religious sermons. The study aimed at identifying the persuasive techniques adopted in the selected sermons from both languages, showing how these techniques are devoted to persuading the audience. After examining the corpus, the researchers found that though most persuasive techniques are present in both sets of sermons, some points of difference remain between the two. Iuliia Rychkova (2020) explored the role of storytelling in the most-viewed TED Talks, on various topics, performed at conferences for non-experts. The study aimed to identify common narrative structural patterns and functions in the sampled talks. The qualitative interpretation of story structure was based on Labov's (1972) diamond-shape model, while Propp's (1928) narratemes were used to investigate the common plot development patterns in the sampled TED Talks. The aim of the study was to identify the most effective way to produce a persuasive discourse and, hence, sway the audience's opinion.
Nahla Nadeem (2021) aimed to provide a conceptualization of how narratives function in TED talks. She used Bamberg's positioning theory as a theoretical framework to build a communicative model of TED Talk narratives. Using a multi-modal discourse analysis approach, the model was applied to the narratives used in Guy Winch's TED Talk in 2015. The model provided an analytical tool for investigating the dynamic interaction and semiotic signaling involved in the communicative performance of TED Talk narratives. While the previous studies examined TED Talks as to the structure of the narrative, applying Bamberg's positioning theory or Labov's or Propp's models, the present study offers a contrastive study between English and Arabic narratives using a different theory and model. Marc Alexander's adaptation (2009) of Mann and Thompson's Rhetorical Structure Theory (1987) is used as the umbrella theory; then Cockcroft and Cockcroft's Model of Persuasion (2013) is employed, with its tripartite division, aiming at analyzing the persuasive techniques used in English and Arabic narratives.

Theoretical Preliminaries:
Rhetorical Structure Theory (RST): Rhetorical Structure Theory (RST for short) was originally developed by William Mann and Sandra Thompson in 1987 as a pragmatic framework aiming at analyzing the underlying structures of written texts. Their framework aims at finding out how coherent the units constructing a text are, regardless of its type: they work on different types and sizes of texts, like personal letters, advertisements, articles, travel brochures and even recipes (p. 80).

They identify the most common type of text relation as the "nucleus-satellite" relation (p. 80). This same idea is reiterated by Marc Alexander (2009): "the relations, units and direction of effect are all decided by the analyst" (p. 15). "Nucleus" means that unit or "span" of the text, which may or may not be an independent clause, that is crucial to the speaker/writer's objective and is not subject to "deletion" or "substitution", whereas the "satellite" is only there as an ancillary to the "nucleus". Mann and Thompson (1988) also speak of "schemas". In simple words, schemas are the types of functional relations that hold between the nucleus and its satellite(s). They identify a number of schemas: Solutionhood (where the nucleus is the question and the satellite is the solution), Motivation and Enablement, Elaboration, Circumstance, Background, Evidence and Justification, Relations of Cause (Non-/Volitional Cause, Non-/Volitional Result, and Purpose), Antithesis and Concession, Condition, Interpretation and Evaluation, Restatement and Summary, Sequence, and Method. However, they point earlier (1987) to what they call the "Joint schema", which is different from all the other schemas in that it is a relation between two nuclei, used for example in listings (p. 94).
Anna Mauranen (1993) was the first to distinguish between generic and rhetorical moves. By generic, she means the multi-nuclear and the subject matter relations, whereas rhetorical means the presentational relations. Echoing this, a listing is put on the RST website for further clarification, where Taboada and Mann (2005) group the relations according to their end aim: for instance, presentational relations are meant to "increase some inclination in the reader, such as the desire to act or the degree of positive regard for, belief in, or acceptance of the nucleus" (para. 3). As for the subject-matter relations, they only aim at helping the reader capture the relationship between rhetorical thrusts, without any positive action. Finally, the multinuclear relations are those existing between two equal "spans", and not between a nucleus and a satellite. These include contrast, joint, list, sequence, and con-/disjunction. Presentational relations include antithesis, background, concession, enablement, evidence, justify, motivation, preparation, restatement, and summary. Subject matter relations include circumstance, condition, elaboration, evaluation, interpretation, means, non-volitional cause, non-volitional result, otherwise, purpose, solutionhood, unless (a strange term, yet put as such in their taxonomy), volitional cause, and volitional result.

Later, in 2006, Taboada and Mann published an article on RST reiterating more or less the same basic ideas in Mann and Thompson's theory. They state that RST "[…] explains coherence by postulating a hierarchal, connected structure of texts, in which every part of a text has a role, a function to play, with respect to other parts in the text" (p. 425). Consequently, RST "captures the underlying structure of texts" (p. 429). They consider a unit as any independent clause plus its subordinates. Nonetheless, this has one shortcoming: fine details within the text can easily be glossed over. In addition to the types of schemas postulated earlier by Mann and Thompson, they add six more schemas, making 30 schemas in total. These are preparation, restatement, unconditional, means, unless and joint. Furthermore, Mann stated that it is not compulsory to use trees as the only representation of discourse structure.

As aforementioned, many linguists tackle RST, adding or modifying some features; however, in 2009, Marc Alexander made a significant adaptation of the RST model, applying it to one of Agatha Christie's mysteries. He argues: "The rhetorical structure of persuasive narratives has not been investigated to the same extent as other styles of rhetorical analysis, such as those in politics, classical studies or education" (p. 13). Alexander found that applying Mann and Thompson's RST in its original form to long persuasive monologues like detective stories turns out to be very difficult, because of the long, complex relations between units. He argues that RST is "insensitive to text size" (p. 100). He also believes that "rhetoric [in its original sense] is often used to mean persuasive techniques found in non-literary texts" (p. 14). That same idea is stated by Chafe (1996), who believes that "a tree diagram falls short of capturing the gradual development of ideas through time under the influence of both cognitive and social goals and constraints" (pp. 55-56). For this reason, Alexander thought of doing away with the tree idea and substituting it with the tabular form, which in turn allows a much easier grasp of relations among schemas. Alexander's (2009) contribution to RST can be seen in a number of points, the first of which is that he gave names to the "ties" and not to the moves. His adaptation allows "the rhetorical moves of the discourse itself to dictate the hierarchical structure of the text" (p. 17). He also built on Mann's postulate that it is not a must to use trees. He prefers tables with a one-column structure, calling the analyzed parts "rhetorical thrusts", be they phrases or clauses, as far as they serve a function in the ties found between parts within the text. In his article, he employs the thirty-one relations postulated by Mann and Thompson; he also adds others, so that some of the functions can be seen clearly. The added parts are "claim", "series", "theory", "simile", "situation", "apparent acceptance", "acceptance query", "refutation from evidence", "concrete example", and "conclusion from previous". However, he does not mention where they belong: to the presentational or to the subject matter relations. Later, in the findings of this paper, these new nomenclatures will be set in their places, so that any researcher can easily categorize the functions they meet in further research.

In addition, Alexander (2009) coins a new term, "TASK", by which he means a "preparation move"; a move is not an independent clause, as his predecessors said, but rather any group of words that has a meaning and a function. Calling it a "thrust", he only adds that it should have a "persuasive function". In his analysis of the detective story Murder on the Orient Express, he designed a tabular form for every sub-episode in the story, giving it a title. For further clarification, he uses large initial letters and black borders, as opposed to small capitals and grey borders for the sub-moves. He also precedes the satellite thrusts with one, two, or three full stops, depending on the kind of subordination they provide for the main nucleus. His aim is to make the table understandable for the reader as far as the relationship among thrusts is concerned, without the need for further reading after the table.

Cockcroft and Cockcroft's Model of Persuasion: Robert and Susan Cockcroft (1992, 2005) based their model of persuasion on that designed by Aristotle (1926). They even use the same terms for the structural principles he coined: Ethos, Pathos and Logos, three sides of one triangle working simultaneously and not linearly. By Ethos, they refer to the speaker's personality and stance. Garver (1994) summarizes the speaker's qualities based on Aristotle's words: "Trust is built up progressively by impressions of someone's moral strength (arete), benevolence (eunoia), and […] 'constructive competence' or the ability to offer shrewd, practical but principled advice (phronesis)" (pp. 132-8). For a rhetorician to affect an audience, he has to affect them on the two levels of psychology and values (Cockcroft & Cockcroft, 2005, p. 17). In other words, the audience are usually affected by the speaker's individuality: who he is, what values he stands for, how he understands his audience and hence how he addresses them.
While age and gender are two important sociolinguistic variables that affect the audience's receipt of the persuasive message, the persuader's stance, a vital part of the persuasive process, is dynamic as well, in the sense that it can be open or closed, rigid or flexible, structured or disorganized. An audience may refuse a persuader because she is, for instance, a female, or because there is a generation gap between them. Likewise, they might build resistance against a persuader if what he stands for is against their values.

Understanding the audience is a key step in achieving the required effect. The persuader has to know how to be flexible or humorous when necessary. It is this "warmth of thought […], energy and exuberance of personality which […] will assist the persuader, finding the expression via changing mood and tone" (Cockcroft & Cockcroft, 2005, p. 35). It takes both "creativity" and "talent" on the part of the persuader to understand and persuade his audience. Burke (1969) argues that a persuader can realize his target by knowing how to speak his audience's language by "speech, tonality, order, image, attitude, idea, [in short, when he identifies his ways with the target audience]" (p. 55).

By Pathos, Aristotle means appealing to the audience's emotions. Cockcroft & Cockcroft (2005) add the term "engagement" to this principle to mean "orient[ing] emotional appeals precisely towards audience and topic, and to found them on sources of feeling accessible to speaker and audience […]" (p. 17). They also add that employing powerful imagery creates empathy; for a persuader to achieve his goals, he has to make the audience feel both sympathy and empathy towards the topic he is tackling. To achieve this, the audience must visualize the emotions he is raising, so the persuader can resort to techniques like graphic vividness, emotive abstract words, repetitions, metaphors or any other tools, depending on what the persuader thinks will move the audience's emotions. Moreover, Cockcroft & Cockcroft speak of "freeze-framing" in what they term "the laser analogy". They simply state that, in the same way that energy is built up in a laser tube through the alignment of mirrors, emotions are built up by the persuader, intensified, and then transformed.

Logos, the third tripartite side, includes "the process of identifying the issues at the heart of the debate; the range of diverse arguments in the discourse; the structure of thought these arguments compose; and the sequencing, coherence and logical values of these arguments" (Cockcroft & Cockcroft, p. 18). Logos is employed not only to appeal to the audience's minds, but also to their emotions. That is why logos is an important aspect of the persuading process; it is in fact at the heart of persuasion. Logos is divided into invention and judgement. "By invention [Cockcroft & Cockcroft] mean a method of thinking up arguments on any given topic, and by judgement [they] mean the evaluation of these arguments as they bear on the issue at hand" (p. 81). The present paper is only concerned with the first of these parts, as the second is concerned with judging, by reference to the audience, to what extent the argument succeeded in persuading them.
Logos includes ten models of persuasion: the definition model, the root meaning model, the cause and effect model, the similarity model, and the oppositional model; then there are the degree model, the model of testimony, the part/whole model and, finally, the associational model. This latter includes four main varieties: subject/adjunct, lifestyle/status, place/function, and time/activity (Cockcroft & Cockcroft, pp. 85-106).

Cockcroft and Cockcroft offer a persuasive repertoire that helps researchers in their analysis of texts. They speak of sound patterning and of lexical and syntactic choices. Sound patterning, for them, can "create and enhance meaning" (p. 165). On the phonetic level, alliteration, assonance, consonance, dissonance, onomatopoeia, and rhyme are examples. Alliteration is the repetition of the first consonant; assonance, the repetition of a medial vowel; and consonance, the repetition of medial and final consonants. As for onomatopoeia, it is when the sound refers to the meaning; finally, rhyme is the repetition of the same sounds in the same line.

Data Analysis: After applying RST analysis to twenty TED Talk narratives, the following items were found missing in the table proposed by Mann and Thompson and not added by Alexander in his adaptation. The added items are either explanatory of those already mentioned, or basic types not originally included. They are added in italics to the original table. The items added in the table were met during analysis; the researcher placed each according to how it contributes to the understanding of the relation between each and every piece of discourse. For instance, rhetorical queries (I adopt Alexander's term) are employed to increase the inclination of the audience, a basic function in presentational relations, and not only to make them further understand the utterance in question. In addition, I tried to place each as close in function as possible to the other related utterances, like restatements, which already belong to the presentational relations. As for the imaginary or virtual monologues or dialogues, these are used to help the audience visualize the situation more vividly, so I inserted them under the subject-matter relations, which aim at audience recognition of the relation in question, only without making them take any kind of action.

Following Marc Alexander's adaptation of RST, the present paper examined each English TED Talk separately, first dividing it into episodes or parts, then putting each episode in an analysis table like that of Alexander's, to analyze how its parts relate to one another. In the forthcoming tables, presentational and subject-matter relations are assigned according to the link that holds between the "rhetorical thrusts". I follow Alexander's method in using bold for main thrusts and full stops to denote the level of subordination, which makes it extremely easy for readers to follow the rhetorical link between moves only by looking at the tabular form.

An example table follows to show the method of analysis. It is taken from a Talk entitled "I Grew Up in a Cult: It Was Heaven and Hell", by Lilia Tarawa. The following episode is an example of the hell she talked about when she was attending school.
Table 2: The Classroom Episode

In the table above, there are two main claims (nuclei), each followed by a number of subordinate thrusts (satellites). From the table, the reader can understand the relation between the main claim and the other subordinating sentences: for instance, the narrator claims that Fervent, a leading figure in her tribe, punishes his son violently by pulling out his belt. A fearful thrust then ensues when they are directed, as a class, to watch the incident; as a result, Lilia willfully refused to respond, and a further result was that she stopped respecting Fervent for good. Another analysis table is given as a sample from the Arabic narratives. The table below is taken from a talk entitled "The Magic of Chasing Dreams" by Hesham ElGamal. The episode selected is one in which he likens human beings to icebergs:

Table 3

In a similar vein, the Arabic narrative is divided into main claims: this time, three main claims are detected. A case in point is when ElGamal likens human beings to icebergs. He then elaborates on his claim, first by restating the metaphor, and second by mentioning the details of an iceberg, what it looks like and how humans are the same, with a clear use of prepositional phrases referred to as circumstance.

As to the types of rhetorical relations employed, a significant similarity was noticed between the English and Arabic narratives. The dissection of the narratives according to Alexander's adaptation of RST shows that both English and Arabic narratives employ a hefty amount of subject-matter relations in comparison to presentational relations. A quantitative analysis showed that, in the English data, 60% of the rhetoric used consisted of subject-matter relations, whereas 40% was presentational. In a similar vein, subject-matter relations in the Arabic narratives amounted to 64%, whereas the presentational formed only 36%. TED Talk speakers aim more at making their audience understand the relations in question and get persuaded, rather than at directing them to take action on the spot.

Both sets of narratives show a number of common prevailing techniques in terms of the persuasive triangle of Ethos, Pathos and Logos proposed by Cockcroft and Cockcroft (2005). The twenty English and Arabic narratives tackle different topics about surviving hardships, accepting others, and moving from failure to success through overcoming challenges. Not all of the speakers are specialists in their fields; however, they are all successful people. They all rely on narrating part of a personal dilemma that they managed to overcome and learn from on their way to success. Their figures and topics encourage their audience to listen, understand and act accordingly. Thus, they all succeed in achieving persuasion by involving themselves as human beings in stories that bring them close to their audience. As a result, they succeed, as far as Ethos is concerned, in appealing to the listeners.
As for Pathos, narrators appealed to the audience's emotions through a number of prevalent strategies, like metaphorical images, emotive abstract words, listings, irony and paradoxes. The following chart shows the different occurrences of each strategy in the English narratives. The chart shows that the employment of emotive positive and negative words is the main strategy that speakers rely on to affect their audience's emotions; they represent 60% of the total strategies used. This is followed by graphic vividness and metaphors, which represent 28%; heapings-up form 8%; and finally irony and paradoxes are the least used, each making up only 2%.

In the ten English talks, speakers rely on moving the audience's emotions by building emotional tension through the usage of a myriad of emotive abstract positive and negative words: "incredible, challenging, ashamed, wounded inside, traumatic, painful, horrible, rewarding, exciting, fantastic, effective, terrifying and beautiful" are cases in point.

Another emotion-moving technique is the use of graphic vividness and metaphors. Images like "they look like dead parrots", "let me take you on a journey", "the way we think eats away at our mental health", "can you slice through the psychological scar tissue of your programming?", or "my perception later turned into a formula" are examples of how the narrators depend on drawing a virtual image before their audience to move their emotions.

Moreover, listings or heapings-up contribute to this emotional build-up. This is an example from one of the talks, where the speaker describes a moment of panic: "I'm freaking out. Sirens are blaring. I am laying on a stretcher. I am trembling. My arms are tingling. The pain is crushing me." Another example is seen in: "The way we name ourselves is a reflection of who we are, our declarations, family histories, the things we believe, the morals we abide by, our homes, cultures, transformations, …"

Irony is very limited in usage, but no less effective. An example of irony is when a speaker, talking about her earlier life in Italy, mocks how emotional her folks are: "It's like an opera, you take the garbage out, they got to kiss everybody cos you might not come back." In addition, sarcasm is targeted at people who complain about the traffic; they are described as "riding with a committee in their heads." Finally, paradoxes are also employed and have a great emotional effect on the audience; a speaker talking about how people have become lately says: "we're wealthier, but unhappier; more prosperous, but more depressed; we have faster and faster transportation, but faster and faster to complain about it."

Likewise, Arabic texts exhibit the same strategies addressing emotions, i.e.
emotive abstract words, graphic vividness, heapings-up and irony. Yet hyperbole is also used, together with instant repetitions. The chart below indicates their frequency in the Arabic narratives. The chart shows that, like in the English narratives, emotive abstract words are the most frequently used to appeal to emotions; positive and negative words like "مرتاح/صادق/ضحكة/حلوة/رومانسية/إعجاب/نعمة/خايف/بالوى/اكتئاب/بيضعف" (relieved, honest, laugh, beautiful, romantic, admiration, bliss, afraid, disasters, depression, weaken) are abundant in the narratives; they form 64% of the strategies used in Pathos. This is followed in frequency by graphic vividness, which forms 25%.

As for the logical models used, different types appear; however, the associational model is the most frequent type of logical persuasive technique used, followed by the cause-effect model. Examples of the various types of association can be seen in lifestyle/status, like "When I wake up in the morning, I crack open a can of Redbull, then drink several more cans throughout the day"; "We have become human doings, we have more people on antidepressants"; and "She and her family go on all exciting adventures together on the weekends." Other examples belong to the subject/adjunct type, like "she has a rewarding career"; "That's very scary"; and "I'm a normal boring person." "So, by 16 I sat glued to fitness competition on television" and "It's October 10, I'm lying on a stretcher at the back of an ambulance" are instances of the third type of associational model, known as time/activity. Moreover, narrators depend on logical cause and effect, volitional or non-volitional, to address the minds of their audience. Cases in point are: "I haven't gotten that much rest in a long time, and now my body's breaking down."; "What shocked me wasn't their poverty, but their happiness"; "The malleability of a person's story must be self-determined, because no one can speak the names of billions in one breath"; and "I want to share the tools I created to survive because remaining silent, I become part of the problem."

Parallelism, marked branching and rhetorical questions are three significant strategies used to appeal to the audience's logos. The chart below shows their frequency in the English narratives: parallelism represents 32%, left-branching 33% and rhetorical questions 35%, a distribution which means that the three techniques are used approximately equally in the English narratives.
Parallel structures depend on repeating a certain sentence form to engage the audience's minds and affect their emotions. Instances of parallelism can be noticed in: "did you love to dance? / did you love to draw?"; "I was already doing what I loved / I was already fulfilled / I was already happy / I was already living my purpose"; "you were interacting / you were sitting there / you were talking to them"; "like a Mohammed turned Mo / or a Lisa Pizza turned Iman"; and "since then, I've researched it, I've worked on it, I've thought about it." Sometimes the query is put in a hypothetical dialogue between participants to make the audience visualize the situation as if it were really happening in front of them. Real conversations bring life to narrations. For instance, in one of the narratives, the speaker imagines a conversation between a person and a life coach, in which the person questions the life coach.

Presenting causes and their resulting effects, especially when comparing two attitudes, has a great effect on persuading the audience of the message the speaker aims at conveying. In addition, the subject/adjunct associational model is also prevalent in both languages; associating an attitude, person or object with positive or negative adjuncts has a profound effect on the recognition of the audience and their persuasion. Other associational models are employed, like, for instance, lifestyle/status and time/activity, and they are used to compare and contrast different attitudes of the same person before and after change, or between two persons living two contrasting lifestyles. In the same vein, similarity and oppositional models appear in both languages to compare and contrast people or objects.

Graphic vividness and metaphors are employed on a wide scale in both types of narratives. Nonetheless, the type of images employed differs from one culture to another. They contribute to making the audience visualize the intended message. It is worth noting here that in English narratives, listings or heapings-up also contribute to this visualization and emotional build-up, whereas in Arabic narratives no listings are used. On the other hand, parallel structures are extensively used in both English and Arabic. Left-branching is also evident in the English narratives, whereas in Arabic it is rarely used. Repetitions in both types of narratives are employed randomly; no special types appear.
Like the English narratives, the Arabic narratives also make common and significant use of rhetorical queries. Egyptian TEDx narrators use a large number of rhetorical queries:

43. How come my circumstances are not an obstacle? My whole life and struggle are not an obstacle? (يعنى ايه ظروفى مش عقبة؟ حياتى و الكفاح بتاعى مش عقبة؟)

In other narratives, the rhetorical queries are meant to be part of a monologue; in a dialogue with the self, the narrator tells the audience how he wondered:

44. Shall I succeed? Shall people like me? Shall I be rich? (يا ترى هنجح؟ يا ترى الناس هتحبنى؟ يا ترى هكسب فلوس؟)

Queries are not only imaginary; sometimes they are used to narrate real-life events to the audience:

45. He told me: why are you thanking me? I said: Weren't you the one who helped publish my book? He said: Son, I don't know you or your book. (لقيته بيقولى: انت بتشكرنى على ايه؟ قولتله: هو مش حضرتك نشرت كتابى ووديته لدار نشر؟ قالى: يابنى انا معرفكش و معرفش كتابك اسمه ايه)

46. As if a chip is taken from one part of my brain and inserted in another part, and everything would just go smoothly. (كأن فى فيشة بتتفك من حتة فى مخى و تركب فى حتة تانية و الدنيا تمشى بكل سالسة)

Parallelism is evident as well in the Arabic Talks. Repetition of the same sentence structure is abundantly employed by speakers. Parallel sentences are easy for the audience to understand and memorize, an effect that a speaker would want to achieve. The following are examples of such repetitions.

Conclusion: This paper attempted to answer a number of research questions concerning the analysis of English and Arabic TED Talk narratives. Using Alexander's adaptation of Mann and Thompson's Rhetorical Structure Theory and Cockcroft and Cockcroft's Model of Persuasion, the researcher managed to provide answers to all the questions. RST, especially Marc Alexander's adapted version, is a trusted method for analyzing long narratives of fifteen pages: through the use of the tabular form, episodes of narration are easily pinpointed and categorized. Moreover, assigning bold type and full stops makes the relations very easy for the reader to understand. It is also worth mentioning in this respect that the analysis of the Arabic narratives is as easy as that of the English ones. The analyzed data, amounting to twenty pages each, were easily understood as cohesive texts through Alexander's tabular form. The Arabic narratives are largely the same as the English. The two prevailing models used are the cause/effect, followed by the associational. As for the persuasive techniques, English and Arabic narratives show more similarities than differences, in spite of the fact that these are two completely different languages and their speakers and audiences come from two different cultural backgrounds. The pathos strategies employed in the English and Arabic narratives are emotive abstract words, graphic vividness, heapings-up and irony. The Arabic narratives furthermore use instant repetitions and hyperbole. Regarding the logos strategies, Arabic and English both employ three main techniques, namely marked syntactic branching, parallelism and rhetorical queries. Arabic narrators mainly use the usual Arabic sentence structure and almost no fronting. In addition, Arabic narrators depend on quasi-dialogues (taken from real Egyptian culture, or virtual conversations) more heavily than English narrators.

The English and Arabic narratives, randomly selected, look almost alike in the way the narrators address their audience. No long introductions are used in the narratives; in most cases, a very short background is provided, and then the core objective is introduced. Moreover, regardless of the narrator's background or profession, personal experiences are shared with the audience. Narrators in both languages, definitely having prowess in the topics they cover though not always specialized, employ various and plentiful subordinating moves, giving more weight to subject-matter relations over presentational ones.
2022-05-23T15:05:40.613Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "1505f4b70cbd211098e93d6185e2c7c076e931c8", "oa_license": "CCBYNC", "oa_url": "https://jssa.journals.ekb.eg/article_235560_ef4b7c131d679de0c14d22f1efcfee1e.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "94826e9b61a8879ee86a6fbaef596658915e6876", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [] }
121448161
pes2o/s2orc
v3-fos-license
Quantum simulation of a triatomic chemical reaction with ultracold atoms on a waveguide

We study the scaling and coordinate transformation needed to physically simulate quantum three-body collinear chemical reactions of the type A+BC $\rightarrow$ AB+C by the motion of single ultracold atoms or of a weakly interacting Bose-Einstein condensate on an $L$-shaped waveguide. As an example we show that the parameters to model the reaction F+H$_2$ $\to$ HF+H with lithium are within reach of current technology. This mapping also provides an inverse scattering tool to find an unknown potential, and a way to transfer the knowledge on molecular reaction dynamics to the design of beam splitters for cold atoms with control of the channel outcome and vibrational excitation.

Introduction. Ultracold atoms and ions are relatively easy to isolate, prepare, manipulate and detect by means of highly controllable operations that preserve their quantum coherence on the time scale of the processes of interest. They have thus become natural candidates for performing physical, rather than numerical, simulations in which the effective Hamiltonians governing their dynamics can be made equal to the Hamiltonians of very different, simulated, quantum systems. These simulations are thus based on a formal analogy and may predict the behavior of the simulated system under conditions hard to realize and/or calculate in the original one. The simulating system may also be interesting in its own right, beyond the parameters relevant for the simulation, and lead to genuinely new phenomena and applications [1]. This opens exciting perspectives for many-body physics [2], and also for few-body systems.

In this letter we show that this approach can be applied to molecular dynamics and chemical reactivity by studying the analogy between reactive collinear three-body chemical reactions and the motion of a single cold atom, or possibly a weakly interacting condensate, on a potential surface designed by a magnetic or optical waveguide. We put the emphasis on the chemical reaction, but the same procedure may shed light also on non-reactive collisions. What we propose, and what is facilitated by ultracold atoms, is basically a quantum dynamical version of the rolling-ball analogy of chemical reactions, with the ball ensemble substituted by a condensate or an ultracold-atom wavepacket, and the mechanical model potential by a magnetic or optical waveguide. Quantum effects are important for state-to-state (rather than averaged) results as well as for reactions involving a light-atom transfer such as hydrogen.
Most chemical reactions occur with steric requirements, i.e., a preferred direction of attack. In many "abstraction" reactions involving halogen and alkali atoms, the collinear configuration of the reaction path corresponds to the lowest potential barrier and to the preferred orientation within a narrow cone of acceptance [3]. Moreover, collinear reactions may be induced by orienting cold polar molecules with strong electric fields via the second-order Stark effect [3]. They are also a standard workbench for testing new calculational methods, examining the range of validity of several approximate theories, and exploring parameter variations over a wide range of values, which is difficult to implement with full 3D calculations. Accurate quantum calculations involve two mathematical coordinates and are still time consuming and especially troublesome when heavy atoms or high energies are involved. The results of interest are usually the branching ratios among the channels or the distribution of the produced molecules among the possible vibrational states.

Simulation Setting. The collinear chemical reaction A+BC → AB+C corresponds to the collision of an atom A and a non-rotating diatomic molecule BC, with the three atoms aligned. We assume that the Born-Oppenheimer approximation holds and separate the fast electronic and the slow nuclear motions. In terms of nuclear masses, positions and momenta in a laboratory frame, the nuclear motion is governed by the quantum-mechanical Hamiltonian

H = P_A²/(2m_A) + P_B²/(2m_B) + P_C²/(2m_C) + V,   (1)

where V is the effective interaction between the three nuclei. The first step is the transformation from the "chemical reaction" variables to the atomic "simulation variables". A second important task is to show that the required parameters for the cold-atom experiment are available with current technology.

Mass-weighted coordinate system. Let us introduce the center of mass (CM) coordinate R_CM = (m_A q_A + m_B q_B + m_C q_C)/(m_A + m_B + m_C) and the relative coordinates

q_1 = q_B − q_A,   q_2 = q_C − q_B,   (2)

together with scaled simulation coordinates Q_1 and Q_2, built linearly from q_1 and q_2 with mass factors a_j and scaling parameters m and l that we can choose freely. The corresponding momentum operators are P_CM = P_A + P_B + P_C and the momenta P_1, P_2 conjugate to Q_1 and Q_2,   (3)

and in the new variables the kinetic energy T takes the diagonal form T = P_CM²/(2M) + (P_1² + P_2²)/(2m), with M the total mass; requiring this diagonalization fixes the mass factors a_j and hence the connection between the simulation variables (Q_1, Q_2) and the reaction variables (q_1, q_2).   (4)

In the following we ignore the trivial center of mass motion and assume that the potential depends only on the relative differences between the particle positions. Then the time-dependent Schrödinger equation associated with the Hamiltonian (1) in the new variables is

iħ ∂ψ/∂τ = [ −(ħ²/2m)(∂²/∂Q_1² + ∂²/∂Q_2²) + V_Q(Q_1, Q_2) ] ψ,   (5)

where we have set τ = t/l² and

V_Q(Q_1, Q_2) = l² V_q(q_1, q_2).   (6)

Equation (5) is the important result for the simulation, and describes the 2D quantum motion of a quantum particle of mass m on the potential V_Q.

Potential energy surface. We now specify the potential surface V_q for the interaction between the three particles of the reaction. This might be an ab initio or, more generally, a semiempirical potential. Here we assume the semiempirical London-Eyring-Polanyi-Sato (LEPS) surface [4-6], which combines Coulomb integrals 𝒬_i and exchange integrals J_i built from Morse and anti-Morse functions of the three internuclear distances,

V_q(q_1, q_2) = Σ_i 𝒬_i − { ½ [ (J_1 − J_2)² + (J_2 − J_3)² + (J_3 − J_1)² ] }^{1/2},   (7)

where 𝒬_i and J_i depend on q_i through D_i, β_i, q_{i0} and the Sato parameter ∆, and q_3 = q_1 + q_2. D_i, β_i and q_{i0} are the dissociation energy, the Morse parameter and the equilibrium distance of the i-th diatomic molecule that we can construct from the three atoms. The adjustable parameter ∆ is optimized for each reaction. In the asymptotic regions, before and after the reaction happens, one of the atoms is far from the others and the potential energy is that of a diatomic molecule [4]. In the LEPS surface, this is given by the Morse function

V_j(q_j) = D_j [ e^{−2β_j (q_j − q_{j0})} − 2 e^{−β_j (q_j − q_{j0})} ],   (8)

where j = 1 for the products' channel with the diatomic molecule AB, or j = 2 for the reactants' channel with the diatomic molecule
BC. Near the equilibrium distance q_{j0}, this potential can be harmonically approximated by

V_j(q_j) ≈ −D_j + ½ K_j (q_j − q_{j0})²,   (9)

where K_j = 2 D_j β_j² is the force constant. Applying Eqs. (4) to the potential in the asymptotic regime, where V_q(q_1, q_2) ≈ V_j(q_j), we obtain, taking into account Eq. (6), that the energy surface in the asymptotic regions of the products' and reactants' channels is a harmonic valley of depth V_j = D_j l² transverse to a channel coordinate χ_j(Q_1, Q_2),   (10)

where the linear functions χ_j are fixed by the mass factors in Eqs. (4), one combination for the products' channel (j = 1) and another for the reactants' channel (j = 2).   (11)

The function χ_1 is a rotation in the (Q_1, Q_2) plane, so the potential Eq. (10) is, for the products, simply a rotated harmonic oscillator in the (Q_1, Q_2) plane. In terms of the oscillation frequencies ν_j of the diatomic molecules, the frequencies ν̃_j of the harmonic oscillators in Eq. (10) are

ν̃_j = l² ν_j = (l² β_j / 2π) (2 D_j / μ_j)^{1/2},   (12)

where μ_1 = μ_AB and μ_2 = μ_BC are the reduced masses of the diatomic molecules. The value of l can be fixed from these last equations, so that the potential parameters of the simulation can be made realistic.

Initial atomic velocity. To set the initial velocity of the cold atom v_{Q_1} in the reactants' channel, we first estimate the velocities involved in the chemical reaction. If the reaction happens at temperature T, the rms mean velocities along a given direction for the atom A and the diatomic molecule BC are, respectively, (k_B T/m_A)^{1/2} and [k_B T/(m_B + m_C)]^{1/2}, where k_B is the Boltzmann constant. We may then assume a head-on relative velocity

v_q = (k_B T/m_A)^{1/2} + [k_B T/(m_B + m_C)]^{1/2},   (13)

which, following from Eqs. (3), corresponds to the atom velocity v_{Q_1} in the simulation.

Example and numerical values. As an explicit example we consider the reaction F+H_2 → FH+H, where F→A, H→B, and H→C, so that m_A = m_F and m_B = m_C = m_H. To simulate the reaction we propose ⁷Li atoms. One advantage of ⁷Li is that the interatomic repulsive interactions are extremely tunable with a Feshbach resonance. The zero crossing of the s-wave scattering length is the shallowest known, so that only modest field stability is needed to achieve a non-interacting gas [7]. We thus have m = 1.1526 × 10⁻²⁶ kg and set l = 6.55 × 10⁻⁶. Defining the valley depths V_j = D_j l² (j = 1, 2) according to Eq. (6), the parameters in the asymptotic region of the reactants' channel are ν̃_2 = 5.66 kHz and V_2 = 2.4 µK, whereas in the asymptotic region of the products' channel, ν̃_1 = 5.34 kHz and V_1 = 3 µK. The choice of a light atom such as lithium is also dictated by the requirement of achievable transverse frequencies in the reactants' and products' channels with standard techniques (see below). To illustrate the scaling of distances and velocities, note that a displacement of 1 Å of the atom F along the reactants' channel corresponds to a displacement of 7.8 µm of the lithium atom, according to Eqs. (2). If the reaction occurs at room temperature, T = 298 K, Eq. (13) sets for the lithium atom a velocity v_{Q_1} = 5 mm/s along the asymptotic region of the reactants' channel. The control of matter waves at such low velocities is within reach [8]. In Fig. 1 we plot the potential energy of the chemical reaction H_2+F → HF+H given by Eq. (7), to see the transformation from the chemical reaction parameters {q_1, q_2} into the "laboratory" simulation waveguide on which the ⁷Li atom moves. Note the advanced saddle and the deeper products' valley, responsible for the exoergicity and for the vibrational excitation of the resulting HF molecule.
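To make the surface and the quoted numbers easy to reproduce, the following Python sketch evaluates a LEPS surface of the type of Eq. (7) and checks the scalings V_j = D_j l² and ν̃_j = l² ν_j against the values quoted above. It is illustrative only: the Sato convention used for the Coulomb and exchange integrals is a common textbook choice, and the molecular constants for HF and H₂ are rounded literature values, not necessarily those used for Fig. 1.

```python
import numpy as np

# Rounded literature constants for the diatomics (assumed, for illustration):
# dissociation energies D (eV), Morse parameters beta (1/Angstrom),
# equilibrium distances q0 (Angstrom), and an illustrative Sato parameter.
D    = {"HF": 6.12, "HH": 4.75}
BETA = {"HF": 2.22, "HH": 1.94}
Q0   = {"HF": 0.917, "HH": 0.741}
DELTA = 0.17  # adjustable; optimized per reaction in the text

def coulomb_exchange(q, key, s=DELTA):
    """Coulomb (Qc) and exchange (J) integrals, common Sato convention."""
    x = np.exp(-BETA[key] * (q - Q0[key]))
    pref = D[key] / (4.0 * (1.0 + s))
    Qc = pref * ((3.0 + s) * x**2 - (2.0 + 6.0 * s) * x)
    J  = pref * ((1.0 + 3.0 * s) * x**2 - (6.0 + 2.0 * s) * x)
    return Qc, J

def leps(q1, q2):
    """V_q(q1, q2) in eV for F+H2: q1 = F-H, q2 = H-H, q3 = q1 + q2."""
    terms = [coulomb_exchange(q1, "HF"),
             coulomb_exchange(q2, "HH"),
             coulomb_exchange(q1 + q2, "HF")]
    Qsum = sum(t[0] for t in terms)
    J1, J2, J3 = (t[1] for t in terms)
    return Qsum - np.sqrt(0.5 * ((J1 - J2)**2 + (J2 - J3)**2 + (J3 - J1)**2))

# Cross-check of the simulation-frame scalings quoted in the text.
u, kB, eV, A = 1.6605e-27, 1.3807e-23, 1.6022e-19, 1e-10
l2 = (6.55e-6)**2
mu = {"HF": (1.0 * 19.0 / 20.0) * u, "HH": 0.5 * u}  # reduced masses
for key, label in (("HF", "products"), ("HH", "reactants")):
    nu = BETA[key] / A / (2 * np.pi) * np.sqrt(2 * D[key] * eV / mu[key])
    print(f"{label}: depth ~ {D[key]*eV*l2/kB*1e6:.1f} uK, "
          f"mapped frequency ~ {nu*l2/1e3:.2f} kHz")
# Prints roughly 3.0 uK / 5.35 kHz (HF) and 2.4 uK / 5.67 kHz (H2),
# matching the quoted V_1, V_2 and mapped channel frequencies.
```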
The experimental realization with ultracold atoms involves (I) the preparation of a propagating matter wave in a guide and (II) the realization of a guide with the appropriate shape. A Bose-Einstein condensate, rather than the repetition of the experiment with single atoms, provides the ideal setting, since the fate of the whole quantum wave packet can be measured in one single experiment. The propagation of a Bose-Einstein condensate in straight magnetic or optical guides has already been demonstrated experimentally [9,10]. More recently, the production of guided atom lasers has shown that a large degree of control of the matter-wave parameters, such as the mean velocity (5-30 mm/s), the transverse mode occupations, the internal state, or the linear atomic density, can be achieved [11-14]. Using different outcoupling mechanisms, the matter wave can be prepared in the transverse ground state [12,13]. In these latter schemes, the diluteness of the matter wave suppresses the role of interactions, providing a well-suited system for the quantum scattering experiments of interest without the need for Feshbach resonance tuning. The second aspect deals with the potential modeling needed to design simple reactive chemical reactions. Different strategies can be envisioned: (i) with wires sculptured on atom chips by a focused atom beam technique [15,16]; (ii) with adiabatic radio-frequency potentials [17,18]; (iii) with high-resolution time-averaged optical potentials "painted" by a tightly focused, rapidly moving laser beam on a 2D canvas formed by a static light sheet [19]. A canvas of 60 µm diameter and a radial condensate thickness of less than 1 µm, as realized in [19], are enough for the spatial range and resolution needed for the simulation; see Fig. 1b. Moreover, the potential depth can be controlled by velocity or intensity modulation, and no decrease in the number of condensate atoms is observed after 2.5 s, again more than enough for implementing a process of the order of milliseconds. The reaction could be detected with in situ high-resolution imaging, whereas the coherent vibrational excitation is measurable after a few ms of time of flight. High flexibility in the guide design is also provided by properly combining these various techniques and/or using time-dependent optical or magnetic potentials [20,21]. The simplest realization would involve a crossed red-detuned dipole-beam configuration in combination with a well-positioned repulsive potential wall realized by a sheet of blue-detuned laser light [22]. Alternatively, one could study the motion of an ion in a well-designed guide. Ultracold ions have already been transported in complex structures [23,24], but their propagation in guides has not been investigated so far.
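As a rough illustration of strategy (iii), the sketch below computes the time-averaged dipole potential produced by sweeping a red-detuned Gaussian beam along an L-shaped path on a canvas of the size quoted above. The beam waist, depth and grid are made-up numbers for illustration, not parameters from [19].

```python
import numpy as np

def painted_potential(X, Y, path, U0=1.0, waist=3e-6):
    """Time-averaged potential of a swept red-detuned Gaussian beam.

    For sweep rates fast compared to the atomic motion, the atom sees
    the average of the instantaneous Gaussian well over the painting path.
    """
    V = np.zeros_like(X)
    for x0, y0 in path:
        V += -U0 * np.exp(-2.0 * ((X - x0)**2 + (Y - y0)**2) / waist**2)
    return V / len(path)

# L-shaped path: two straight arms joined at the origin (illustrative numbers)
arm1 = [(x, 0.0) for x in np.linspace(0.0, 40e-6, 200)]
arm2 = [(0.0, y) for y in np.linspace(0.0, 40e-6, 200)]
grid = np.linspace(-10e-6, 50e-6, 300)
X, Y = np.meshgrid(grid, grid)
V = painted_potential(X, Y, arm1 + arm2)
```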
Discussion and Outlook. We have worked out the mapping between a quantum-mechanical collinear triatomic chemical reaction and the motion of ultracold atoms on a tilted, L-shaped waveguide. As an example we have shown that the parameters for simulating the reaction F + H_2 → FH + H using ⁷Li can be implemented with currently available technology. This approach is thus complementary to other proposals for simulating chemical reactions [25], which are more ab initio and do not need any previous calculation of the potential surface or application of the Born-Oppenheimer approximation, but require a quantum computation with hundreds of coherently manipulated qubits, currently out of reach for a reaction like the one discussed. The present approach is less fundamental, since it assumes a potential surface and the Born-Oppenheimer approximation to hold, but it is also easier to implement. As an inverse scattering tool, the capability to manipulate the potential parameters may be used to fit experimental results of the chemical reaction and find the right potential.

By a straightforward generalization, we could also simulate collinear four-atom reactions by an ultracold atom in a three-dimensional potential. As a further application of the mapping, the vast knowledge and experience accumulated on chemical reaction dynamics, in particular for triatomic systems in the collinear configuration, is now ready to be transferred to the design of crossed laser beams or waveguide bends with different properties. They could be used, for example, as control devices for asymmetrical beam splitting into the channels or for controlling the transverse vibrational excitation. An example of this is the recent design of an atom diode or one-way barrier [22].

FIG. 1: (Color online) (a) Contour map of the potential energy surface (7) for H_2 + F → HF + H. (b) Contour map of the potential for the ⁷Li atom that simulates the chemical reaction. In both cases the energy is in units of the zero-point energy of the reactants' valley, and the surface is truncated well below zero energy (the asymptotic value when all atoms are far apart) to better visualize the saddle and the reaction path.
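A simple numerical counterpart of the proposed experiment, useful as a cross-check of a chosen guide design, is split-operator propagation of a 2D wavepacket on the mapped potential V_Q. The sketch below is generic: the grid size, box, packet width and the use of l² times the LEPS map for V_Q are assumptions for illustration; only the ⁷Li mass and the 5 mm/s launch velocity come from the text.

```python
import numpy as np

hbar, m = 1.0546e-34, 1.1526e-26   # hbar and the 7Li mass quoted in the text
N, L = 256, 60e-6                  # grid points and box size (assumed)
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
KX, KY = np.meshgrid(k, k)
T = hbar**2 * (KX**2 + KY**2) / (2 * m)   # kinetic energy in k-space

def propagate(psi, VQ, dt, steps):
    """Second-order (Strang) split-step Fourier evolution under T + VQ."""
    expV = np.exp(-0.5j * VQ * dt / hbar)
    expT = np.exp(-1.0j * T * dt / hbar)
    for _ in range(steps):
        psi = expV * np.fft.ifft2(expT * np.fft.fft2(expV * psi))
    return psi

# Gaussian wavepacket launched at 5 mm/s along the reactants' channel (x axis)
v, sigma = 5e-3, 3e-6
psi0 = np.exp(-((X + 20e-6)**2 + Y**2) / (4 * sigma**2) + 1j * m * v * X / hbar)
psi0 = psi0 / np.sqrt((np.abs(psi0)**2).sum())
# psi = propagate(psi0, VQ, dt=1e-6, steps=2000)  # ~2 ms, VQ from the mapping
```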
2011-02-23T11:58:45.000Z
2011-02-23T00:00:00.000
{ "year": 2011, "sha1": "de3f45868483aea7ab8c5d5b738ac502ecb33d40", "oa_license": null, "oa_url": "https://e-archivo.uc3m.es/bitstream/10016/32370/1/simulation_JPB_2011_ps.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "de3f45868483aea7ab8c5d5b738ac502ecb33d40", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
258954555
pes2o/s2orc
v3-fos-license
New PEO-IAA-Inspired Anti-Auxins: Synthesis, Biological Activity, and Possible Application in Hemp (Cannabis sativa L.) Micropropagation

Auxins play an important role in plant physiology and are involved in numerous aspects of plant development, such as cell division, elongation and differentiation, fruit development, and the phototropic response. In addition, through their antagonistic interaction with cytokinins, auxins play a key role in the regulation of root growth and apical dominance. Thanks to this capacity to determine plant architecture, natural and synthetic auxins have been successfully employed to obtain more economically advantageous plants. The crosstalk between auxins and cytokinins determines plant development and is thus of particular importance in the field of plant micropropagation, where the ratios between these two phytohormones need to be tightly controlled to achieve proper rooting and shoot generation. The previously reported anti-auxin PEO-IAA, which blocks auxin signalling by binding to the TIR1 receptor and inhibiting the expression of auxin-responsive genes, has been successfully used to facilitate hemp micropropagation. Herein, we report a set of new PEO-IAA-inspired anti-auxins capable of antagonizing auxin responses in vivo. The capacity of these compounds to bind to the TIR1 receptor was confirmed in vitro by SPR analysis. Using DESI-MSI analysis, we evaluated the uptake and distribution of the compounds at the whole-plant level. Finally, we characterized the effect of the compounds on the organogenesis of hemp explants, where they were shown to improve beneficial morphological traits, such as balanced growth of all the produced shoots and enhanced bud proliferation.

Introduction
Phytohormones are naturally occurring compounds capable of modulating plant developmental, physiological, and metabolic processes, even at low concentrations (Fonseca et al. 2014; Hemelíková et al. 2021). The application of phytohormones to manipulate plant development started in the 1930s, when ethylene and the related compound acetylene were used to alter flowering and fruit formation in pineapple (Bartholomew 2014). Since then, the exogenous application of phytohormones to plants has become a staple in agricultural and horticultural practices (Rademacher 2015). A better understanding of plant hormones and of the structural requirements essential for their biological activity has allowed the creation of synthetic analogues, which have found use not only in agriculture, as growth promotors and herbicides, but also in plant science as tools to study different biological processes (Rigal et al. 2014; Jiang and Asami 2018).
Auxins, amongst which indole-3-acetic acid (IAA) is the most abundant, were the first class of phytohormones to be discovered and had been postulated to regulate plant growth a century before their chemical identity was revealed (Enders and Strader 2015). Auxins are key regulators of many aspects of plant development, including cell division, elongation and differentiation, fruit development, and organ photo- and gravitropism (Enders and Strader 2015). Canonical auxin signalling is dependent on nuclear Transport Inhibitor Response 1/Auxin Signalling F-box protein (TIR1/AFB) auxin receptors, which are capable of both binding the auxin and acting as F-box ubiquitin ligases mediating the ubiquitination and protein degradation of Aux/IAA transcriptional repressors. These repressors interact with and modulate the activity of several Auxin Response Factors (ARFs), the latter being able to recognize auxin response elements (AREs) in the promoter regions of auxin-controlled genes with variable specificity and affinity (Gallei et al. 2020). Additionally, very rapid cellular non-transcriptional responses to auxin, such as the triggering of changes in plasma membrane potential, have been known for decades (Dubey et al. 2021). Recently, processes that were believed to be regulated through the canonical TIR1/AFB pathway, such as the regulation of root growth, have begun to be reconsidered, as such responses are too fast to involve transcription and protein expression, suggesting that an unknown non-transcriptional branch of TIR1/AFB signalling exists (Friml et al. 2022). Selective auxin agonists, such as RubNeddins (RNs) (Vain et al. 2019), and antagonists, such as 4-(2,4-dimethylphenyl)-2-(1H-indol-3-yl)-4-oxobutanoic acid (auxinole) and 2-(1H-indol-3-yl)-4-oxo-4-phenylbutanoic acid (PEO-IAA) (Hayashi et al. 2012), can be used to study and regulate various plant growth and development processes. These anti-auxins have been suggested to bind to TIR1, block the formation of the TIR1-IAA-Aux/IAA complex, and thus inhibit the expression of auxin-responsive genes (Hayashi et al. 2012).

Thanks to their capacity to determine plant architecture, auxins, anti-auxins, and cytokinins, on their own or in combination, have been successfully employed to yield plants with delayed senescence and improved grain yield, drought resistance, seed set, flowering, etc. (Shi et al. 2014; Tamaki et al. 2015; Koprna et al. 2016; Liang et al. 2020; Klos et al. 2022). Moreover, through their antagonistic interaction, auxins and cytokinins play an essential role in the regulation of root and shoot growth (Aloni et al. 2006; Umehara et al. 2008; Kurepa and Smalle 2022), which is of particular importance in the field of plant micropropagation, where the ratio between these two groups of phytohormones needs to be tightly controlled in order to achieve proper shoot growth and rooting (Holmes et al. 2021).

Hemp (Cannabis sativa L.), a traditional multi-purpose crop which over the centuries has found applications in many areas, such as the pharmaceutical, textile, paper and construction industries, animal feeding, and biofuel production (Crini et al. 2020), is one of many species that could benefit from advancements in micropropagation techniques. Even though large-scale hemp cultivation has traditionally been done through seed cultivation, using heavily mechanized agricultural practices similar to other grain crops (Monthony et al.
2021b), for pharmaceutical uses clonal methods of plant propagation tend to be favoured, as they allow the production of genetically and phenotypically uniform, pathogen- and disease-free plants with consistent growth rates (Crini et al. 2020; Monthony et al. 2021b). Unfortunately, hemp clonal propagation in vitro has proven to be particularly challenging, due to the strong apical dominance (Smýkalová et al. 2019; Dreger and Szalata 2021) and the tendency to form callus (Movahedi et al. 2016), which is associated with bud organogenic recalcitrance (Monthony et al. 2021a). Achieving direct regeneration in hemp is problematic (Galán-Ávila et al. 2020), often resulting in a reduced regenerative responsiveness in vitro. This is usually attributed to the significant genetic variability within each variety, which is further exacerbated by diversity in ploidy status and occasional polyploidisation events (Mansouri and Bagheri 2017; Crawford et al. 2021; Balant et al. 2022), as well as to the variability in the representation of female and male plants; even though most individuals are dioecious, monoecious cultivars also exist (Balant et al. 2022).

During the last couple of years, several synthetic auxin and cytokinin derivatives, such as meta-topolin (mT) (Lata et al. 2016), thidiazuron (TDZ) (Lata et al. 2009; Piunno et al. 2019; Dreger and Szalata 2021), 6-benzylaminopurine (BAP) and 1-naphthalene acetic acid (NAA) (Burgel et al. 2020), have been tested to better control the growth of new tissues, highlighting the potential use of novel synthetic phytohormone derivatives in hemp clonal propagation. Moreover, we have previously demonstrated that the weak anti-auxin PEO-IAA, applied in combination with the cytokinin N-benzyl-9-(tetrahydro-2H-pyran-2-yl)adenine (BAP9THP), is able to efficiently suppress the apical dominance of newly forming shoots in hemp. Such co-treatment resulted in a balanced multiple-shoot culture which could be reliably rooted (Smýkalová et al. 2019). Therefore, the use of molecules with an anti-auxin character to suppress auxin activity and manipulate the cytokinin-to-auxin ratio in explants, thus facilitating bud organogenesis or embryogenesis, appears promising.

In this study, we aimed to further expand the library of available anti-auxins that could be used both for improving plant micropropagation and as tools in fundamental plant research. Thus, we synthesized several 4-([1,1'-biphenyl]-4-yl)-2-(1H-indol-3-yl)-4-oxobutanoic acid derivatives, evaluated their anti-auxin activity in various auxin bioassays in vivo, studied their effect on root and hypocotyl growth in the model plant Arabidopsis thaliana, and tested their possible use in hemp explant clonal propagation.

Reagents and General Synthetic Methods
The reagents and solvents were purchased from commercial suppliers and used without further purification. The microwave-irradiation-assisted reactions were performed in a CEM Discover SP microwave reactor; the reactions were carried out in 10 mL glass vials that were sealed with silicone/PTFE caps. Reaction progress was monitored by thin-layer chromatography (TLC) on aluminium plates coated with silica gel 60 F254 (Merck, USA), and the components were visualized with UV light (254 and 365 nm) and staining solutions (vanillin or potassium permanganate). The purification of the products was performed by column chromatography on silica gel (40-63 micron Davisil LC60 A, Grace Davison, UK).
¹H (500 MHz) and ¹³C (125 MHz) NMR spectra were recorded in DMSO-d₆ or acetone-d₆ as solvents at room temperature on a Jeol ECA-500 spectrometer equipped with a 5 mm Royal probe. Complete assignment of the ¹H and ¹³C NMR resonances was achieved using a combination of standard NMR spectroscopic techniques, including heteronuclear single quantum coherence (HSQC) and heteronuclear multiple bond correlation (HMBC) experiments. An asterisk (*) indicates tentative assignment of solvent-overlapping signals based on a ¹H,¹³C-HMQC experiment. High-resolution mass spectrometry (HRMS) spectra of the test compounds were recorded with a micrOTOF-Q III Bruker spectrometer in electrospray ionization mode. The LC-MS analyses were performed on an ACQUITY UPLC® H-Class system combined with a UPLC® PDA detector and a single-quadrupole mass spectrometer QDa™ (Waters, Manchester, UK), as previously described (Bieleszová et al. 2019).

SPR Analysis
Surface plasmon resonance (SPR) experiments were done in accordance with previously described protocols (Lee et al. 2014). TIR1 was expressed in insect cell culture using a recombinant baculovirus. The construct contained sequences for three affinity tags, namely 6×His, green fluorescent protein (GFP), and FLAG. Protein purified using the His tag was used for the SPR assays by passing it over a streptavidin chip loaded with biotinylated IAA7 degron peptide in the presence of IAA and the test compounds.

The SPR buffer was Hepes-buffered saline with 10 mM Hepes, 3 mM EDTA, 150 mM NaCl and 0.05% Tween-20. Compounds were premixed with the protein prior to testing, to a final concentration of 50 µM. Binding experiments were run at a flow rate of 30 µl min⁻¹, using 2 min of injection time and 4 min of dissociation time. Data from a control channel (a mutated IAA7 peptide) and from a buffer-only run supplemented with DMSO (final 1%) were subtracted from each sensorgram, following the standard double-reference subtraction protocol.
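For clarity, the double-reference correction applied to the sensorgrams can be summarized as a simple array operation. The sketch below is illustrative only; the function name is hypothetical and not from the paper, and real sensorgram traces would be exported from the instrument software.

```python
import numpy as np

def double_reference(sample, control, blank_sample, blank_control):
    """Double-reference subtraction for SPR sensorgrams.

    All arguments are response-vs-time arrays on a common time base:
    - sample:        compound + protein over the IAA7 degron channel
    - control:       the same injection over the mutated-IAA7 control channel
    - blank_sample:  buffer/DMSO-only run over the degron channel
    - blank_control: buffer/DMSO-only run over the control channel
    """
    return (np.asarray(sample) - np.asarray(control)) - \
           (np.asarray(blank_sample) - np.asarray(blank_control))
```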
DESI-MSI and DESI-MS/MSI Analyses

Arabidopsis thaliana wild-type ecotype Col-0 seeds were sterilized with 70% EtOH with 0.1% Tween-20 solution for 10 min (2×) and rinsed with 96% EtOH for 10 min. After 2 days of stratification (4 °C in the dark), seeds germinated on sterile ½ MS medium (2.2 g/L Murashige and Skoog medium, 1% sucrose and 0.7% agar, all from Duchefa Biochemie, the Netherlands; 0.5 g/L MES PUFFERAN from Carl Roth GmbH, Germany; pH 5.6) in long-day light conditions (22 °C/20 °C, 16 h light/8 h dark, 100 μmol m⁻² s⁻¹). Ten-day-old seedlings were transferred to horizontally divided heterogeneous media containing ½ MS (1.5% agar) medium supplemented with 0.05% dimethyl sulfoxide (DMSO) and 0.25% acetonitrile (ACN) (top half of the plate) or 0.05% DMSO, 0.25% ACN, and 10 µM of the tested compounds (bottom half of the plate). Plates were covered with aluminium foil and kept in a growth chamber with long-day light conditions (22 °C/20 °C, 16 h light/8 h dark, 100 μmol m⁻² s⁻¹) in a vertical position for 24 h. Around 50-70 mm-long plants were freshly collected together with untreated control samples for Desorption Electrospray Ionization-Mass Spectrometry Imaging (DESI-MSI) and Liquid Chromatography-Tandem Mass Spectrometry (LC-MS/MS) analyses. The whole plant was rapidly washed in ultrapure water for 10 s to remove surface medium, mounted on Superfrost glass slides (Thermo Fisher Scientific, Waltham, MA, USA) using non-conductive double-sided tape (Plano GmbH, Wetzlar, Germany), and stored in a −80 °C freezer. Sample slides were rapidly dried in a vacuum desiccator (Merck), scanned, and then subjected to DESI-MSI acquisition. The acquired spectra were recalibrated using the exact mass of palmitic acid (m/z 255.2324) and processed into the imzML format in HDImaging (Waters). Subsequent analysis was performed using msiQuant (Uppsala, Sweden), where the data were processed for low-intensity removal, total ion count (TIC) normalization, peak alignment, and ion intensity map establishment. Deprotonated ions, representing compounds taken up into the plants, were binned within a 0.0001 Da mass range of their theoretical masses after calibration. The target compounds BP-IAA, 2MBP-IAA, 3MBP-IAA, 4MBP-IAA, PEO-IAA, and auxinole were assigned the deprotonated masses m/z 368.1292, 398.1398, 398.1398, 398.1398, 292.0979, and 320.1292, respectively, and were used to establish the ion intensity maps and the subsequent statistical analysis.

To validate the results for the targeted compounds detected in the DESI-MSI analysis, in situ MS/MS analysis was performed using 2-3 mm of primary root tips from treated plants. Precursor ions of BP-IAA (m/z 368.1292), its methoxy derivatives (m/z 398.1398), PEO-IAA (m/z 292.0979), and auxinole (m/z 320.1292) were scanned at 60 μm spatial resolution and fragmented using 5 eV collision energy. Additionally, peaks with the molecular mass assigned to the indole ring, ions after the loss of an indole ring, and decarboxylated ions of BP-IAA, its methoxy derivatives, PEO-IAA, and auxinole were also identified and assigned.
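Two of the processing steps named above, TIC normalization and binning within 0.0001 Da of the theoretical masses, can be sketched compactly. The actual workflow used HDImaging and msiQuant, so the following Python fragment is only a schematic re-expression with hypothetical function names.

```python
import numpy as np

# theoretical deprotonated masses ([M-H]-) quoted in the text
TARGET_MZ = {"BP-IAA": 368.1292, "MBP-IAA": 398.1398,
             "PEO-IAA": 292.0979, "auxinole": 320.1292}

def tic_normalize(intensities):
    """Total-ion-count normalization of a single pixel's spectrum."""
    tic = intensities.sum()
    return intensities / tic if tic > 0 else intensities

def bin_target_intensity(mz, intensities, target, tol=0.0001):
    """Sum the (normalized) intensity of peaks within +/- tol Da of a
    theoretical target mass, i.e. the binning used for the ion maps."""
    return intensities[np.abs(mz - target) <= tol].sum()

# applying both functions per pixel over an imaging data set would
# then build one ion intensity map per compound in TARGET_MZ
```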
DR5::GUS Assay

Arabidopsis thaliana seeds expressing pDR5::GUS (Ulmasov et al. 1997) in a Col-0 background were sterilized with 70% EtOH with 0.1% Tween-20 solution for 10 min (2×) and rinsed with 96% EtOH for 10 min. After 2 days of stratification (4 °C in the dark), seeds germinated on sterile ½ MS medium (2.2 g/L Murashige and Skoog medium, 1% sucrose, and 0.7% agar, all from Duchefa Biochemie, the Netherlands; 0.5 g/L MES PUFFERAN from Carl Roth GmbH, Germany; pH 5.6) in long-day light conditions (22 °C/20 °C, 16 h light/8 h dark, 100 μmol m⁻² s⁻¹). Five-day-old seedlings were incubated at room temperature in 24-well plates containing 1 mL of ½ MS liquid media supplemented with auxin derivatives at a final concentration of 20 μM, with 0.5% DMSO as a mock and 2 μM IAA as a positive control. The compounds were applied for a 5 h treatment. Additionally, seedlings of the Arabidopsis thaliana transgenic pDR5::GUS reporter line were treated with auxin derivatives at defined concentrations (1, 5 μM) in the presence of 2 μM IAA for 5 h. Seedlings were then incubated in the presence of 500 μL of GUS staining solution at 37 °C in the dark for 35 min. To stop the staining reaction, seedlings were transferred to 500 μL of 70% ethanol and kept overnight. Clearing of the roots was done with HCG-2 solution (120 g chloral hydrate, 90 mL water, 30 mL glycerol) (Ma et al. 2020). GUS expression was evaluated using an inverted light microscope (Olympus IX51) with transmission light mode and phase contrast.

GUS staining solution

Na phosphate buffer, pH 7.0: 4.7 g of NaH₂PO₄·H₂O and 9.6 g of Na₂HPO₄·2H₂O from Merck were dissolved in 500 mL of distilled water to give a 0.2 M stock solution. 50 mL of Na phosphate buffer was supplemented with 0.08 g K₃[Fe(CN)₆] from Merck, 0.12 g K₄[Fe(CN)₆] from Lachema n.p., the Czech Republic, 50 µL 0.1% Triton from Koch-Light Laboratories, England, and 50 mg of X-Gluc from AppliChem GmbH, Germany, dissolved in 500 μL of DMSO.

35S::DII-VENUS Assay

Arabidopsis thaliana seeds expressing p35S::DII-VENUS (Brunoud et al. 2012) in a Col-0 background were sterilized with 70% EtOH with 0.1% Tween-20 solution for 10 min (2×) and rinsed with 96% EtOH for 10 min. After 2 days of stratification (4 °C in the dark), seeds germinated on sterile ½ MS medium (2.2 g/L Murashige and Skoog medium, 1% sucrose, and 0.7% agar, all from Duchefa Biochemie, the Netherlands; 0.5 g/L MES PUFFERAN from Carl Roth GmbH, Germany; pH 5.6) in long-day light conditions (22 °C/20 °C, 16 h light/8 h dark, 100 μmol m⁻² s⁻¹). Five-day-old seedlings were treated in liquid ½ MS medium in the presence of BP-IAA compounds, auxinole, or PEO-IAA (5 µM in 0.5% DMSO), or DMSO as a control, for 1 h. The seedlings were then transferred to a glass slide with a drop of untreated medium, and confocal images were taken using a Zeiss LSM 900 confocal microscope (Carl Zeiss, Germany) with a 10× objective and a resolution of 1024 × 1024 px. The VENUS fluorescent protein was excited at 488 nm. At least 30 plants belonging to three independent biological replicates were measured.
Hypocotyl Elongation and Cytoskeletal Organization Assay

Arabidopsis thaliana MBD::GFP seeds (Marc et al. 1998) were sown on full MS (pH 5.7, 0.8% agar) and grown vertically in a growth chamber (22 °C/20 °C, 16 h light/8 h dark, 60 μmol m⁻² s⁻¹ light intensity). Five days after sowing, seedlings were transferred to new media containing 5, 10, or 20 μM of either BP-IAA or auxinole, with and without co-treatment with 0.5 μM NAA (final DMSO concentration of 0.9%). All samples were examined 1 and 3 days after treatment using an Axio Imager Z.1 platform equipped with an LSM700 module (Carl Zeiss, Germany) and a 40× oil objective, as previously described (Skalák et al. 2019). The light source included an argon-neon laser with a wavelength of 488 nm for GFP fluorescence and 639 nm for chlorophyll auto-fluorescence, to avoid interference between the two fluorescence channels. Cell length and microtubule density (estimated as mean grey intensity) were calculated using ImageJ software. The microtubule orientation and the anisotropy of the microtubule array were also calculated in ImageJ, using the FibrilTool macro (Boudaoud et al. 2014). All parameters were calculated on at least 8 plants per treatment (at least 200 total cells per treatment), belonging to three independent biological replicates.

Micropropagation of Hemp (Cannabis sativa L.) from Nodal Segments

Monoecious hemp (Cannabis sativa L.) seeds, variety USO-31 (origin Ukraine), were obtained from the Czech Hemp Gene Bank (Agritec Ltd., the Czech Republic). Seeds were surface sterilized and germinated as described previously (Smýkalová et al. 2019), and nodal segments (i.e. the first node below the apex, containing two meristems for two future shoots) were used as the explant type. For the experiment, 10 µM BAP9THP and 10 µM of the anti-auxin-activity-possessing substances (BP-IAA, its methoxy derivatives, PEO-IAA, and auxinole) were added to the medium described previously (Smýkalová et al. 2019), which was supplemented with macro- and microelements (Murashige and Skoog 1962) and vitamins (Gamborg et al. 1968), 100 mg/L myo-inositol, 40 mg/L adenine hemisulfate, 30 g/L sucrose, 5 g/L activated carbon, 5.5 g/L agar (Difco Bacto), at pH 5.8-6. Explants were cultured in a growth room at 21 ± 2 °C, light intensity 156 µmol m⁻² s⁻¹, 16 h photoperiod, and 60% relative humidity. Nineteen to twenty-six explants per treatment, belonging to three (in the case of BP-IAA) or four (in the case of all other anti-auxins) biological replicates, were used. A selection of morphological parameters was recorded as either present or absent upon visual examination. Positively evaluated parameters for nodal segments were: balanced growth of both shoots, proliferation of both buds, proliferation of more buds (i.e. more shoots), and proliferation of lateral buds at the base. Negatively evaluated parameters for nodal segments were: formation of callus, long shoots, dominance of one shoot, and proliferation of one bud. The average number of shoots per explant was recorded. Data on positively and negatively evaluated parameters were processed with R 4.2.1 in the RStudio 2023.03.0 Build 386 environment (R Core Team 2022). Scripts were written using the packages stats 4.2.1, readxl 1.4.2, rstatix 0.7.2, ggplot2 3.4.2, and gridExtra 2.3. The chi-square test and the respective P-values were computed by Monte Carlo simulation. Pairwise Fisher's test was used as a post hoc test with Bonferroni correction of the calculated P-values. The significance level was set to α = 0.05.
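The post hoc testing described above was scripted in R; the following Python/scipy sketch mirrors the same logic (pairwise Fisher's exact tests with Bonferroni correction) under the assumption of simple (present, absent) counts per treatment. The counts shown are hypothetical.

```python
import numpy as np
from itertools import combinations
from scipy.stats import fisher_exact

def pairwise_fisher_bonferroni(counts, labels):
    """Pairwise Fisher's exact tests on per-treatment (present, absent)
    counts of one morphological trait, with Bonferroni correction,
    mirroring the post hoc test described above."""
    pairs = list(combinations(range(len(labels)), 2))
    adjusted = {}
    for i, j in pairs:
        _, p = fisher_exact(np.array([counts[i], counts[j]]))
        adjusted[(labels[i], labels[j])] = min(1.0, p * len(pairs))  # Bonferroni
    return adjusted

# hypothetical (present, absent) counts for one trait
counts = [(12, 8), (15, 5), (9, 11), (14, 6)]
labels = ["BP-IAA", "2MBP-IAA", "3MBP-IAA", "4MBP-IAA"]
for pair, p in pairwise_fisher_bonferroni(counts, labels).items():
    print(pair, round(p, 3))
```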
Determination of Biological Activity In Vitro by SPR Assay

First, using surface plasmon resonance (SPR) analysis (Lee et al. 2014), BP-IAA and its methoxy derivatives were tested for binding to the auxin receptor TRANSPORT INHIBITOR RESPONSE 1 (TIR1) (Dharmasiri et al. 2005; Kepinski and Leyser 2005) and the AUXIN/INDOLE-3-ACETIC ACID7 (IAA7) co-receptor complex (Villalobos et al. 2012). As anticipated, derivatization of the PEO-IAA core with aromatic substituents resulted in the generation of compounds with anti-auxin activity. When mixed with purified TIR1, neither the BP-IAA compounds nor auxinole were able to support TIR1 co-receptor assembly with IAA7 on the SPR chip, even at 50 μM concentration (Supplementary Fig. S1). On the other hand, when co-treated with 5 μM IAA, BP-IAA compounds effectively inhibited TIR1 co-receptor assembly with the IAA7 degron (Fig. 3A) by competing with IAA for its binding site, thus reducing the signal in a dose-dependent manner (Fig. 3B for BP-IAA).

Validation of Uptake in Planta

The uptake of lipophilic compounds into roots is considered to be fast; however, their transport to the upper plant parts is slow (Schriever and Lamshoeft 2020). Due to the bulky aromatic structures of the BP-IAA compounds, confirmation of their uptake was needed prior to the in vivo experimentation. Desorption Electrospray Ionization-Mass Spectrometry Imaging (DESI-MSI) is emerging as a powerful tool for in situ identification and visualization of small molecules (Zhang et al. 2021). Therefore, ten-day-old Arabidopsis thaliana plants were transferred to ½ MS media containing BP-IAA compounds, and their presence in different plant tissues was visualized by desorption electrospray ionization-mass spectrometry (DESI-MS) and compared to the reference compounds PEO-IAA and auxinole (Fig. 4). In general, all six targeted compounds demonstrated high abundance in the primary root of treated plants, compared to the absence of peaks in control plants (Fig. 4). The ions of BP-IAA and its methoxy derivatives were only detected in the mature root growing in the treated part of the medium, whereas the hypocotyl, root cap, elongation zone, and upper root in the non-treated part of the medium did not contain any of the aforementioned ions. In the case of the least lipophilic compound, PEO-IAA (Supplementary Table S1), a high signal intensity was observed not only in the mature regions but in almost all imaged tissues, from the root cap to the hypocotyl and epicotyl. The ions of auxinole were widely present in the primary root attached to the treated medium, with robust signal intensities (Fig. 4).

To validate the results for the targeted compounds detected in the DESI-MSI, in situ MS/MS analysis was performed using 2-3 mm of primary root tips from treated plants. Notably, the peak assigned to the indole ring was identified in all fragmentation spectra and displayed very similar distributions to the precursor masses of the targeted compounds in their ion intensity maps (Fig. 5). Peaks of decarboxylated ions were identified and assigned for all the compounds, whilst peaks representing ions after the loss of the indole ring were identified for all compounds except PEO-IAA. In summary, the peaks predicted as fragmentation ions of the targeted compounds matched previous records in the standard fragmentation spectra, and their spatial distributions were compatible with the ion intensity maps established from the precursor masses, which proved the presence of the targeted compounds in the treated plants.
Determination of Biological Activity in Arabidopsis Roots

The anti-auxin activity of BP-IAA compounds predicted by the SPR analysis was confirmed in vivo employing the Arabidopsis auxin-responsive reporter line pDR5::GUS. This line possesses a β-glucuronidase (GUS) reporter gene fused to the artificial canonical DR5 auxin-responsive promoter. The regulation of GUS expression responds to auxin levels, thus allowing the visualization of auxin maxima (Bai and DeMason 2008). Similarly to auxinole and PEO-IAA, BP-IAA compounds did not induce pDR5::GUS expression in Arabidopsis primary roots. On the other hand, like auxinole, all BP-IAA compounds were able to overcome 2 µM IAA-induced GUS expression at 5 µM concentration, whilst the effect of PEO-IAA was milder (Fig. 6).

Additionally, the activity of BP-IAA compounds was further evaluated using the DII-VENUS line, which expresses a fluorescently labelled Aux/IAA auxin interaction DII domain (Brunoud et al. 2012). The DII domain is ubiquitinated and induces degradation of the protein in an auxin dose-dependent manner. The fluorescent signal is rapidly degraded in response to exogenously applied auxin, whilst it increases upon treatment with the TIR1-IAA-Aux/IAA complex formation inhibitor auxinole (Hazak et al. 2014). In a similar manner to auxinole and PEO-IAA, BP-IAA compounds induced the accumulation of the DII-VENUS reporter protein by repressing the endogenous IAA activity, resulting in an increase in the fluorescent signal in the elongation zone of the Arabidopsis primary root (Fig. 7, Supplementary Fig. S2).

Fig. 2 Structures of PEO-IAA, auxinole, BP-IAA, and its methoxy derivatives

Arabidopsis roots react to the addition of auxin by extremely rapid root growth inhibition, which is restored once the auxin source is removed (Fendrych et al. 2018). Additionally, auxin-induced root growth inhibition can be reverted by co-treatment with the auxin antagonist auxinole (Hayashi et al. 2012) or several other anti-auxin-activity-possessing molecules (Bieleszová et al. 2019). On the other hand, other known anti-auxins, such as PEO-IAA, BH-IAA, or PCIB, are not able to even partially revert auxin-induced root growth inhibition (Oono et al. 2003; Hayashi et al. 2012). As anticipated, at high concentrations BP-IAA derivatives slightly reduced primary root growth of Arabidopsis plants (Fig. 8A), which is a typical feature of anti-auxins (Oono et al. 2003; Hayashi et al. 2012). However, despite the observed anti-auxin effect in the SPR assay, and analogously to PEO-IAA and PCIB, BP-IAA compounds were not able to revert 0.5 µM IAA-induced root growth inhibition to the same extent as auxinole (Fig. 8B). This could suggest that BP-IAA compounds are probably not capable of antagonizing TIR1-independent auxin responses, such as the auxin-binding protein 1-transmembrane kinase 1 (ABP1-TMK1)-dependent auxin signalling pathway, which has been shown to modulate the activation of plasma membrane H⁺-ATPases, cell wall acidification, and cell expansion, amongst other processes (Lin et al. 2021; Friml et al. 2022).

Effect of BP-IAA on Hypocotyl Elongation and Cytoskeletal Organization

In the hypocotyl, auxins have been suggested to regulate cell expansion (Collett et al. 2000) and modulate microtubule orientation (Chen et al. 2014). It is generally assumed that there is a correlation between microtubule orientation and cell expansion, with transversal microtubule arrays usually being found in elongating cells (Baskin 2001; Chen et al. 2014), even though the causality between the two parameters is still being discussed.
The effect of the newly developed anti-auxins on microtubule orientation was analysed using Arabidopsis thaliana expressing a GFP-tagged binding domain of the microtubule-associated protein 4 (MBD::GFP) (Marc et al. 1998). Five-day-old MBD::GFP plants treated with 0.5 µM NAA underwent a significant microtubule reorientation favouring longitudinally oriented fibres (60-90°) at the expense of transversal ones (0-30°), leading to a higher average microtubule angle (Table 1, Fig. 9, Supplementary Fig. S3). In agreement with the work of Chen et al. (Chen et al. 2014), Arabidopsis plants showed a decreased hypocotyl cell elongation rate following NAA treatment (Table 1). Treatment with auxinole had a similar effect to NAA on microtubule orientation which, interestingly, was not dose dependent. Furthermore, treatment with 5 to 20 µM of auxinole also decreased cell elongation rates in a fashion similar to NAA. This effect falls in line with the findings of Collett et al. (Collett et al. 2000), which suggest that auxin levels are already optimal in seedlings and any deviation from that concentration, either an increase or a decrease, results in a decrease in cell elongation. On the contrary, treatment with BP-IAA had a less severe effect on cytoskeletal organization, but it affected it in a dose-dependent manner, with 5 µM BP-IAA having no significant effect on microtubule orientation and 20 µM BP-IAA showing an effect closer to that of NAA. This pattern was mirrored by the cell elongation rates, with 20 µM resulting in decreased rates, whilst supply of 5 µM BP-IAA showed no differences from controls (Table 1). Co-treatment with auxinole started to counteract the effect of NAA on microtubule orientation and cell elongation rate at 10 µM, but only managed to completely revert to the values found in controls at a concentration of 20 µM (Table 1). On the contrary, co-treatment with BP-IAA showed a greater effect at counteracting NAA, as a concentration of 5 µM was enough to revert the effect of NAA and bring the average microtubule angle back to the values found in controls (Table 1). Lastly, none of the treatments resulted in a decrease in the level of organization of the microtubule array (anisotropy) (Table 1).

When plants are subjected to certain stresses, such as salinity, low temperature, or hormonal and ROS imbalance, they usually undergo microtubule depolymerization and a consequent decrease in cytoskeleton density (Zhang et al. 2012; Fujita et al. 2013; Araniti et al. 2016). In this case, external NAA supply, in addition to inducing a microtubule reorientation, also decreased microtubule density (Table 1).
Similarly, when added independently, both auxinole and BP-IAA might have created stress, which decreased microtubule density, even though the effect was much more acute in the case of auxinole. However, the difference between auxinole and BP-IAA became accentuated when they were applied in co-treatment with NAA. In that context, no concentration of auxinole reverted the effect of NAA on microtubule density, whilst 5 µM of BP-IAA was enough to bring the microtubule density back to the values found in controls. This highlights the contrast between the effect of BP-IAA alone (where it shows a milder disruptive effect on cell growth and cytoskeletal organization than auxinole) and in combination with NAA (where it counteracts the effect of NAA at lower concentrations than auxinole), which deserves further investigation.

Fig. 6 The effect of BP-IAA derivatives on GUS expression in pDR5::GUS transgenic plants of Arabidopsis thaliana. Five-day-old seedlings were (A) kept untreated or treated with IAA (2 µM), (B) treated with BP-IAA compounds, auxinole or PEO-IAA (each at 20 µM) for 5 h alone, or (C) co-treated with IAA (2 µM) and with BP-IAA compounds, PEO-IAA, or auxinole (each at 1, 5 μM) for 5 h. Figures were chosen as representatives from three independent biological repetitions.

Fig. 7 The effect of BP-IAA compounds on DII-VENUS expression in the p35S::DII-VENUS reporter line. Five-day-old seedlings were incubated with the anti-auxins BP-IAA, its methoxy derivatives, PEO-IAA, or auxinole (each at 5 μM) for 1 h. Fluorescent confocal images were chosen as representatives from three independent biological repetitions.

Fig. 8 The effect of BP-IAA derivatives on Arabidopsis thaliana (Col-0) primary root growth. The primary root length was quantified in five-day-old seedlings grown on BP-IAA compounds (each at 1, 5, 10, 20 μM) (A) in the absence or (B) presence of IAA (0.5 μM) and normalized to mock. IAA (0.5 μM), PEO-IAA, and auxinole (each at 1, 5, 10, 20 μM) were used as controls. Statistical analyses were performed using the t test, values are means ± S.E., and n > 30 from three independent replicates. White circles (○) indicate statistically significant differences (P < 0.01) compared to mock, whilst black circles (•) indicate statistically significant differences (P < 0.01) between the effect of BP-IAA compounds, PEO-IAA, and auxinole compared to the 0.5 μM IAA treatment.

Application of BP-IAA Compounds in Hemp Micropropagation

In our previous work, we demonstrated that the weak anti-auxin PEO-IAA can efficiently suppress apical dominance of newly forming shoots in hemp. Co-treatment with BAP9THP and PEO-IAA gave a balanced multiple shoot culture for the formation of shoots which could be reliably rooted (Smýkalová et al. 2019). Having established PEO-IAA as a valuable component of hemp propagation medium, we further analysed whether micropropagation of hemp could be further improved using BP-IAA compounds as the anti-auxin of choice. The experiments were conducted on nodal segments of the variety USO-31, using the newly prepared anti-auxins at 10 µM concentration in combination with 10 µM BAP9THP as the cytokinin of choice. The previously described methodological procedure was used (Smýkalová et al. 2019), and a selection of positive and negative parameters was visually evaluated for each node and recorded as either present or absent (Fig. 10). In addition, the average number of shoots per explant was calculated.

The comparison of positive and negative morphological traits of explants after the application of anti-auxins in a medium intended for the induction of multiple shoot cultures is presented in Table 2, Supplementary Fig. S4 and Supplementary Table S2. After cutting off the shoot apex, the nodal segment contains two meristems for the proliferation of two shoots. Due to high auxin concentrations in hemp nodal segments, apical dominance of one of the two shoots and callus formation are commonly observed in MSC (Smýkalová et al. 2019; Dreger and Szalata 2021).
In the present study, co-treatment with BAP9THP and BP-IAA derivatives suppressed the apical dominance, resulting in uniform development of both buds from one node and the reduction or prevention of callus formation on the basal part. Of the four tested novel compounds, BP-IAA and 4MBP-IAA yielded the highest number of shoots per explant, which was comparable to that of auxinole and PEO-IAA. Moreover, in the case of BP-IAA, as well as the known anti-auxins auxinole and PEO-IAA, a slight increase in the percentage of explants that displayed proliferation of more buds was observed, which can enable easier multiplication of meristems. However, it should be noted that none of the BP-IAA compounds promoted the proliferation of lateral buds at the base that was observed with auxinole and PEO-IAA.

The application of 10 μM BP-IAA compounds yielded a low percentage of explants with long shoots or with dominance of only one shoot. The use of BP-IAA was particularly positive regarding dominance of one shoot, and it showed better results than PEO-IAA. On the contrary, PEO-IAA was the best option for keeping a low percentage of plants with only one developed bud. Lastly, the number of plants which developed callus was relatively high in the case of all tested compounds, with 2MBP-IAA being the most potent at preventing callus formation.

Conclusion

In this work, we prepared a set of novel indolic compounds which are absorbed by the Arabidopsis thaliana primary root and showed strong anti-auxin activity in the SPR assay. Further testing of biological activity in vivo using the Arabidopsis pDR5::GUS and p35S::DII-VENUS lines proved that BP-IAA and its methoxy derivatives overcome the effect of exogenous and endogenous auxin, respectively. BP-IAA was also shown to counteract the effect of exogenous auxins on Arabidopsis hypocotyl elongation, without any strong negative effect on its own. Lastly, we tested the use of BP-IAA and its methoxy derivatives in hemp micropropagation as a supplement for the establishment of multiple shoot cultures, where they improved positive morphological traits, such as the balanced growth of all the produced shoots and the proliferation of more than one bud, without negatively affecting the explants.

Fig. 3 SPR analysis of the antagonistic effect of BP-IAA and its methoxy derivatives on the auxin-induced interaction between TIR1 protein and the IAA7 degron peptide. The sensorgrams show association for 120 s followed by dissociation in buffer for 240 s.

Fig. 5 DESI-MS/MSI analysis of targeted compounds (BP-IAA, its methoxy derivatives, PEO-IAA, and auxinole): precursor ions, major fragmentation ions, and their distribution patterns acquired from treated Arabidopsis primary root tips.
Fig. 9 The effect of BP-IAA derivatives on microtubule orientation in the hypocotyl of Arabidopsis thaliana MBD::GFP plants. Five-day-old seedlings were (A) kept untreated or treated with NAA (0.5 µM), (B) treated with BP-IAA or auxinole (each at 5, 10, 20 µM) for 1 day alone, or (C) co-treated with NAA (0.5 µM). Figures were chosen as representatives from three independent biological repetitions.

Table 1 Microtubule angle (degrees), microtubule density, and anisotropy of the microtubule array in five-day-old Arabidopsis thaliana MBD::GFP plants 1 day after treatment, and cell elongation rate (µm day⁻¹) between days 1 and 3 after treatment. Values show mean ± S.E.; n ≥ 8 plants. Different letters indicate statistically significant differences between treatments according to Tukey's test (P < 0.05).

Table 2 Average number of shoots per node (mean ± SE) and percentage of hemp explants showing positively (white to blue scale) and negatively (white to red scale) evaluated morphological traits after application of 10 μM of BAP9THP and 10 μM of anti-auxins. Different letters indicate statistically significant differences between treatments according to Tukey's test (P < 0.05) (Color figure online).
Quantum general relativity and Hawking radiation

In a previous paper we have set up the Wheeler-DeWitt equation which describes the quantum general relativistic collapse of a spherical dust cloud. In the present paper we specialize this equation to the case of matter perturbations around a black hole, and show that in the WKB approximation, the wave-functional describes an eternal black hole in equilibrium with a thermal bath at Hawking temperature.

Introduction

Quantum gravitational effects are expected to modify the nature of singularities that arise as the end state of the classical gravitational collapse of a compact object. A concrete analytical model of classical spherical collapse is the Lemaître-Tolman-Bondi (LTB) dust solution of Einstein equations, which shows that the singularity forming in the collapse could be either naked or covered, depending on the choice of initial conditions [1,2]. Treating this dust collapse model as a classical background, one can quantize a massless scalar field on this space-time using standard techniques [3,4,5]. When the classical collapse ends in a black hole, the quantization of the scalar field yields the emission of Hawking radiation from the black hole, as expected.

However, a strikingly different result is obtained when the scalar field is quantized on a classical background which ends in a naked singularity. It turns out that during the period of validity of the semiclassical approximation (curvatures should be less than Planck scale), the collapsing cloud emits only about one Planck unit of energy [6]. Moreover, because the back-reaction does not become important so long as gravity can be treated classically, it follows that the future evolution of the star is governed by quantum gravitational effects, and it is impossible to say, from the semi-classical approximation, whether the star radiates away its energy on a short time scale or settles down into a black hole state. This is completely different from the black hole case, where essentially the entire star evaporates via Hawking radiation during the semi-classical phase.

A full understanding of the gravitational collapse, both in the naked and in the covered case, requires the application of quantum gravity. Given our limited understanding of quantum gravity at present, perhaps the theory which is currently most suited for addressing dynamical quantum collapse and questions regarding its final state is canonical quantum general relativity. Although limited in its ultimate scope as a theory of quantum gravity, the canonical theory can meaningfully address the issue of singularities in minisuperspace and midisuperspace models, so long as one can tackle questions relating to operator ordering and regularization, and provide a notion of time evolution in the theory.

The minisuperspace quantization of a collapsing null dust shell was analyzed in [7], where it was shown how the classical singularity can be avoided in the quantum theory. In this model, the avoidance is a direct consequence of the unitary time evolution: since the wave-function vanishes at r = 0 for early times, it does so at any time. As a consequence, an ingoing quantum shell develops into a superposition of ingoing and outgoing shells. Such a scenario has interesting physical features that cannot be seen in a semiclassical approximation: no event horizon can form, and there is neither an information loss nor a naked singularity.
A midisuperspace program to study the canonical quantum dynamics of the LTB dust collapse has been developed during the last two years [8], following earlier pioneering work by Kastrup and Thiemann [9] and Kuchař [10] on the quantization of the Schwarzschild geometry. Here one sets up a canonical description of the collapse, using the dust proper time, the area radius and the mass function of the cloud as canonical configuration variables. The evolution is recorded by the dust proper time. One then develops the quantization via the momentum constraint and the Wheeler-DeWitt equation, to which a solution for a general mass function has been found using an ad hoc delta-function regularization used by DeWitt [11]. We show below that this regularization scheme is equivalent to the WKB approximation. (A similar analysis has also been carried out for null dust [12], although the constraints that are obtained in this case are linear and there is no need for regularization.)

The Schwarzschild black hole can be viewed as a LTB model with a constant mass function. We have applied [13] the program described above to this particular case and shown how the horizon area quantization and entropy of the eternal black hole can be understood in terms of quantized shells of matter that are trapped inside the horizon.

The purpose of the present paper is to show that the WKB solution is able also to describe an eternal black hole in equilibrium with a thermal bath at Hawking temperature, if the mass function is chosen appropriately. We argue that Hawking radiation may be understood as a combination of the WKB and the Born-Oppenheimer approximations on the full quantum wave-functional. It reinforces our belief in the overall consistency of the program and, in particular, suggests that our proposed choice of operator ordering and our definition of the inner product on the Hilbert space of wave-functionals capture features of the full theory. As we will see below, the definition of the inner product enters crucially in the calculation of the Hawking radiation. In Section 2, we briefly recall key results from our previous paper on quantum dust collapse [8], leading up to the WKB solution of the Wheeler-DeWitt equation. In Section 3, this solution is specialized to the case of matter around a black hole, and shown to describe Hawking radiation.

Canonical quantization of dust collapse

The spherical gravitational collapse of a dust cloud having energy density $\epsilon(\tau,\rho)$ in an asymptotically flat space-time is described in comoving coordinates $(\tau,\rho,\theta,\phi)$ by the LTB metric and the Einstein equations,

$$ds^2 = -d\tau^2 + \frac{\tilde{R}^2}{1+f(\rho)}\,d\rho^2 + R^2\,d\Omega^2, \qquad (1)$$

$$(R^*)^2 = f + \frac{F}{R}\,, \qquad 8\pi\epsilon = \frac{\tilde{F}}{R^2\tilde{R}}\,. \qquad (2)$$

Here, $F(\rho)$ is twice the mass to the interior of the coordinate $\rho$, and $R(\tau,\rho)$ is the area radius of the shell labeled $\rho$ at the dust proper time $\tau$. A tilde and an asterisk denote partial derivatives with respect to $\rho$ and $\tau$, respectively. (Throughout this paper, the gravitational constant is set equal to one.)

The canonical dynamics of the collapsing cloud is described by embedding the spherically symmetric ADM 4-metric in the LTB space-time (1), and by casting the action for the Einstein-dust system in canonical form. The phase space of non-rotating dust is described by the dust proper time, $\tau$, and its conjugate momentum, whereas the gravitational phase space consists of the configuration space variables $(R, L)$ and their conjugate momenta. Using a version of the canonical transformation developed by Kuchař [10], the configuration variable $L$ is replaced by a new variable $F$ (the mass function).
In terms of the new chart $(\tau, R, F, P_\tau, P_R, P_F)$, the momentum and the Hamiltonian constraints read as given in [8]; here $\mathcal{F} \equiv 1 - F/R$. The Hamiltonian constraint shows that on the effective configuration space $(\tau, R)$, the DeWitt super-metric is just ${\rm diag}(1, 1/\mathcal{F})$. This is a flat metric; therefore a redefinition of the area coordinate according to

$$dR_* = \frac{dR}{\sqrt{\pm\mathcal{F}}}\,, \qquad (6)$$

where the positive sign refers to the region exterior to the horizon ($R > F$) and the negative sign to the region interior to the horizon ($R < F$), brings the super-metric to manifestly flat form. In terms of the momentum $P_*$, conjugate to $R_*$, the Hamiltonian constraint takes the form (7).

Quantization is implemented by raising the momenta to operator status and requiring the physical state, $\Psi[\tau, R_*, F]$, to be annihilated by the constraints. In this way, the time evolution of $\Psi[\tau, R_*, F]$ is determined by the Hamiltonian constraint (8) (where the positive sign before the second term refers to the region outside the horizon, and the negative sign to the region inside), while invariance under spatial diffeomorphisms is implemented by the momentum constraint (9). In the region exterior to the horizon, Eq. (8) is no longer hyperbolic, in contrast to the Wheeler-DeWitt equation on the original configuration space. The reason lies in the canonical transformations performed, which lead to a new effective configuration space.

To complete the quantum theory, one must define an inner product on the Hilbert space of wave-functionals. In [8], we defined it in a natural way, Eq. (10), by exploiting the fact that the DeWitt super-metric is manifestly flat in the configuration space $(\tau, R_*)$. Note that this inner product is defined on a $\tau$ = constant hypersurface. We emphasize that this inner product is in general $\tau$-dependent. The reason is that the Wheeler-DeWitt equation preserves a Klein-Gordon type of inner product, not a Schrödinger type of product [11]. However, as has been shown in [14], the Schrödinger inner product is approximately conserved in the highest orders of a semiclassical approximation. Since we shall deal here with WKB states only (Sec. 3), quantum-gravitational correction terms to this conservation do not play any role here.

Equations (8)-(10) clearly imply a specific choice, albeit a natural one, of operator ordering. We will see below that this choice is sufficient to reproduce the Hawking effect. The momentum constraint is obeyed by any functional that is a spatial scalar and, in particular, by the functional (11), provided that $W$ has no explicit dependence on the radial label coordinate, $r$. Our choice, while not unique, is dictated by the knowledge that $F'$ is the proper energy density of the collapsing cloud. When (11) is substituted in the Wheeler-DeWitt equation (8), one finds, on using DeWitt's $\delta$-function regularization ($\delta(0) = 0 = \delta^{(n)}(0)\ \forall\, n \in \mathbb{N}$), that $W$ obeys Eq. (12). We emphasize again that this regularization prescription is at this stage completely ad hoc, and could only be justified from an understanding of the full theory. It is even imaginable that, analogously to string theory, Schwinger terms may arise in the commutation relations of the constraints that could forbid the implementation of the Wheeler-DeWitt equation [15]. Fortunately, however, for our present purpose of recovering Hawking radiation it is not necessary to resolve this issue. Equation (12) yields the solution (13) outside the horizon and (14) inside.
Origin of Hawking radiation

We begin by noting that the quantum constraint (12), which has been written using DeWitt's delta-function regularization, is the same equation as one would get by writing the Wheeler-DeWitt equation (8) in the highest-order WKB approximation. The reason is that this prescription effectively suppresses the (divergent) WKB prefactor. To show this, let us expand the wave-functional $\Psi$ of (8) (with $\hbar$ being re-inserted) in a power series in $\hbar$,

$$S(\tau, R, F) = S_0 + \hbar S_1 + \hbar^2 S_2 + \dots$$

Substituting this expansion in (8) and retaining only the leading-order, $\hbar$-independent, terms gives an equation for $S_0$. Comparison with (12) shows that the WKB solution is the same as one would get by doing the delta-function regularization in the original Wheeler-DeWitt equation (after the identification $W = iS_0$). (For a similar discussion of WKB states for the Schwarzschild black hole see [16], and for two-dimensional dilaton gravity see [17].)

We will now show that the wave-functional (11), along with the solutions (13) and (14), yields Hawking radiation when it is applied to a matter distribution that is appropriate to a massive black hole surrounded by dust whose total energy is small compared with the mass of the black hole. For this purpose, let us assume that the mass function $F(r)$ is of the form

$$F(r) = 2M\,\theta(r) + f(r)\,,$$

where $\theta(r)$ is the Heaviside step-function, and $f(r)$ (not to be confused with the $f(\rho)$ occurring in (1)) is differentiable, representing a dust distribution with $f(r)/2M \ll 1$. This mass function is interpreted as the presence of a Schwarzschild black hole of mass $M$ at the origin, and $f(r)$ is a dust matter perturbation on the black hole, which, as we now show, can be related to Hawking radiation. In a sense, it plays the role of the quantum field used in standard derivations of Hawking radiation.

Inserting this mass function in the wave-functional (11) gives (setting $\hbar = 1$ again) a product of two exponential factors. The first exponent on the right-hand side is the WKB wave-functional representing the black hole at the origin, as shown in [13]. The second factor, up to order $f(r)$, represents a matter distribution $\Psi_f$ that propagates in this background, if we take $F(r) \approx 2M$ in $W_f$. The solution $\Psi_f$ is known from (11), (13) and (14) above, one branch holding outside the horizon and the other inside.

We would like to rewrite the expressions for $W_f$ in terms of the Killing time $T$. For the Schwarzschild background being considered for the distribution $f(r)$, and for contracting clouds, we have the relation (25) between the proper time and the Killing time (see, e.g., [18]). Thus, in terms of $T$, there are two possibilities for $W_f^{\rm out}$. The wave-functional that corresponds to an infalling wave at $T \to -\infty$ and $R \to \infty$ is the one with $W_f^{\rm out}$ given by the second of the two, Eq. (27), on which we will therefore concentrate. Defining $Z = 4\sqrt{2MR}$, we find that as $R \to \infty$ this wave-functional approaches a limiting form, Eq. (28), which undergoes rapid oscillations except when $T \to -\infty$, that is, on $\mathcal{I}^-$. When $T \to \infty$, in order for the phase not to be large we see that $R \to 2M$, i.e. $Z \to 8M$; Eq. (29) gives the wave-functional in this limit. This is similar to what happens in the geometric optics approximation. The simple-looking phase on $\mathcal{I}^-$ has scattered through the geometry to turn into the complicated-looking phase on $\mathcal{I}^+$ near the horizon.

Equation (28) represents infalling waves. We can think of it as a product over plane waves, one at each label $r$, Eq. (30). This should represent a complete set of infalling modes at each label $r$, if we think of the $\omega(r)$ as the frequencies of the modes.
In other words, we allow all possible $\omega(r) = \Delta f(r)$. A complete set of outgoing modes on $\mathcal{I}^+$ would likewise be given by the functional (31). This is because the transformation from the dust proper time to the Killing time is now obtained by matching an expanding (rather than contracting) dust cloud to a Schwarzschild exterior, for which one gets, instead of (25), the corresponding relation with the opposite sign. It is then easily shown that as $R \to \infty$, the asymptotic form of the wave-functional is as in (31).

Now we ask the question: what is the projection of our solution (29) on the negative frequency modes of the outgoing basis on $\mathcal{I}^+$? For this purpose we must consider the inner product of states on a hypersurface of constant Schwarzschild time $T$. Thus we must transform from the Euclidean flat metric on the $(R_*, \tau)$ plane of Eq. (8) to the metric in the $(R, T)$ coordinates, using the relations (6) and (25). (Note that in (8) the positive sign holds outside the horizon, so that $g_{R_*R_*} = +1$.) The required inner product on a constant-$T$ hypersurface then follows. This projection represents the negative frequency modes present in the solution. We are interested in $|\langle\Psi^+_\omega|\Psi^+_f\rangle|^2$ because these are the analogs of the Bogoliubov coefficients, $|\beta(f,\omega)|^2$. Identifying $\beta(f,\omega)$ with this overlap, substituting $u = Z - 8M$ and integrating, we find the thermal form

$$|\beta(f,\omega)|^2 \propto \frac{1}{e^{8\pi M\omega} - 1}\,.$$

This is interpreted as the eternal black hole being in equilibrium with a thermal bath at the Hawking temperature $(8\pi M)^{-1}$. Our derivation provides a functional Schrödinger picture for dust Hawking radiation, consistent with the WKB wave-functional which solves the Wheeler-DeWitt equation.

Concluding remarks

In this paper we have obtained a derivation of Hawking radiation for dust matter, starting from the WKB wave-functional which satisfies the Wheeler-DeWitt equation for quantum spherical dust collapse. The fact that such a derivation could be found should be treated as support for the validity of the inner product defined in Equation (10). A functional description of Hawking radiation can also be given within a Born-Oppenheimer type of approximation to quantum gravity. Instead of our wave function $\Psi_f$, one has there a Gaussian quantum state for a quantum field. Evolving this state through the background of an object collapsing to a black hole, one finds that it encodes information about the Hawking radiation similar to our $\Psi_f$ [19].

The present analysis also suggests that an exact treatment of the Wheeler-DeWitt equation (8), which goes beyond the WKB approximation, should yield corrections to Hawking radiation and provide a better understanding of the end state of gravitational collapse. This could perhaps be done along the general lines presented in [14], in which corrections to the semiclassical limit have been calculated from the Wheeler-DeWitt equation. These issues are at present under investigation.
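To give the temperature result a numerical scale: $(8\pi M)^{-1}$ above is expressed in units with $G = c = \hbar = k_B = 1$; restoring the constants gives the familiar $T_H = \hbar c^3 / (8\pi G M k_B)$. The short Python check below performs this standard conversion; the choice of a solar mass is illustrative only.

```python
import math

HBAR = 1.054571817e-34   # J s
C = 2.99792458e8         # m / s
G = 6.67430e-11          # m^3 / (kg s^2)
K_B = 1.380649e-23       # J / K
M_SUN = 1.98892e30       # kg

def hawking_temperature(mass_kg: float) -> float:
    """SI form of the (8 pi M)^{-1} result: T_H = hbar c^3 / (8 pi G M k_B)."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

print(hawking_temperature(M_SUN))  # ~6.2e-8 K for a solar-mass black hole
```

For a solar-mass hole this gives roughly 6 × 10⁻⁸ K, far below the temperature of the cosmic microwave background.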
SEEKING A REFERENCE FRAME FOR CARTOGRAPHIC SONIFICATION

Sonification of geospatial data must situate data values in two (or three) dimensional space. The need to position data values in space distinguishes geospatial data from other multi-dimensional data sets. While cartographers have extensive experience preparing geospatial data for visual display, the use of sonification is less common. Beyond availability of tools or visual bias, an incomplete understanding of the implications of parameter mappings that cross conceptual data categories limits the application of sonification to geospatial data. To catalyze the use of audio in cartography, this paper explores existing examples of parameter mapping sonification through the framework of the geographic data cube. More widespread adoption of auditory displays would diversify map design techniques, enhance accessibility of geospatial data, and may also provide new perspective for application to non-geospatial data sets.

INTRODUCTION

Geospatial data are characterized by their position in space and time. Consideration of the spatio-temporal properties of data reveals patterns that could be missed when treating locations or time values as generic numeric variables in a multi-variate data set. Typical assumptions of independence between observations do not hold, due to spatial and temporal autocorrelation. Further, parameter mappings that overlook the organization of data along spatial or temporal dimensions compel the listener to mentally reconstruct that organization. The strong temporal qualities of audio have provided a natural parameter mapping for time series data (e.g., [1]); but the effective representation of (geo-)spatial data and the use of spatial audio remain open questions. Organizing data and sonification into the binary categories of "spatial" and "non-spatial" [2] takes an initial step toward addressing the peculiar needs of geospatial data, but further examination of the "spatial" category is warranted.

Specialized systems have emerged to handle geospatial data. Geographic information systems combine data structures and computer algorithms to support the unique needs of geospatial data storage, processing, and efficient query. Statistical methods that accommodate spatial and temporal autocorrelation have been designed (e.g., [3]). Evidence from psychology and neuroscience suggests that the human brain has specialized mechanisms for encoding and processing spatial data (e.g., [4]). And the design and use of maps have been studied as communication channels for geospatial data (e.g., [5]).

Thematic maps and reference (or navigation) maps constitute two common types of geographic map. Thematic maps depict the location of attribute values over geographic space. Among their varied purposes, thematic maps encourage the map reader to notice spatial patterns. For example, a map of Oregon that depicts population density shows the majority of the population living on the west side of the state (Figure 1). The map does not explicitly declare this pattern; the map reader interprets the information they perceive from the map. In contrast, navigational maps facilitate route learning and guide movement through the physical environment. Experience navigating through physical space inspires a metaphor for exploration of a data space [6], creating a connection between the two map types. But this paper challenges that connection and emphasizes sonification of geospatial data in thematic maps. Auditory display of geospatial data is not a new idea.
Although the majority of modern cartographic display techniques fall in the realm of graphic design, widespread and inexpensive sound synthesis in the early 1990s prompted cartographers to consider auditory displays. In parallel with expanded use of audio in more general human-computer interfaces, cartographers used sound to represent uncertainty in remote sensing data [8] and to highlight anomalies in data [9]. However, initial interest and optimism about the use of audio in geospatial data displays has waned. Recent availability of browser-based audio tools revived some interest in augmenting web-based interactive maps with auditory elements (see review in [10]). Still, proliferation of visual displays and challenges in the design of effective parameter mappings for sonification of spatial data have meant that the use of sonification in cartographic design has been low.

Figure 2: Conceptual categorization of data across location, time, and attribute (left) is an established framework for organizing geospatial data [7]. Each of these categories may be further subdivided across multiple dimensions (center). The number of dimensions depends on the specific data set under consideration. Displays of geospatial data create parameter mappings that stay within or translate across the axes of the geographic data cube (right). For example, depiction of data location to screen coordinates demonstrates a mapping within the location axis (A). And among many possible examples of translations: turn-by-turn directions translate spatial information into a temporal sequence (B), time series data represented as line graphs convert time into a spatial location on the x-axis (C), scatterplots translate attribute values into locations (D), and time series data may be represented with a color attribute in a static graphic (E).

To support more widespread adoption of sonification in cartography, this paper draws attention to parameter mappings that cross the conceptual category boundary between data location and the temporal dimension of an auditory display. The next section provides a brief overview of geospatial data, associated data transformation techniques, and example cartographic design guidelines. The third section considers existing sonification approaches to (geo-)spatial data, highlighting the role of space and time in those displays. A final section reflects on the current status of sonification in cartography, posing an open question about the implications of cross-category parameter mappings.

GEOSPATIAL DATA AND CARTOGRAPHY

Geographers and cartographers have a long history of organizing, transforming, and representing geospatial data. The discussion revolves around the location and time axes of the geographic data cube as a context in which to examine the application of parameter mapping sonification to geospatial data.

The Geographic Data Cube

An emphasis on location, or position within a two (or three) dimensional reference frame, distinguishes geospatial data from other multi-dimensional data sets. The geographic data cube [7], an established conceptual framework from geography, organizes data in three categories: attribute, time, and spatial location. The three axes represent three categories of data inherent and necessary to any geospatial data set (Figure 2, left). Omitting any category is detrimental to interpretation of the data set. For example, a temperature attribute value carries little information without the context of when and where the observation was recorded.
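To make the three-category organization concrete, the following sketch (in Python, with invented field names) encodes one observation per record; holding the time field constant while varying location and attribute yields exactly the kind of thematic-map slice described above.

```python
from dataclasses import dataclass

@dataclass
class GeoObservation:
    # the three geographic data cube categories; note that location
    # alone spans two (or three) orthogonal dimensions
    lon: float
    lat: float
    elevation: float   # optional third location dimension
    time: str          # e.g., an ISO 8601 timestamp or a year
    value: float       # thematic attribute, e.g. population density

# a thematic-map slice: time held constant, location and attribute vary
slice_2010 = [
    GeoObservation(-122.7, 45.4, 15.0, "2010", 1689.0),   # hypothetical values
    GeoObservation(-120.5, 43.6, 1265.0, "2010", 2.8),
]
```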
While all three categories are necessary, a category may be held constant for a given map product. As a case in point, the maps in Figure 1 depict a range of locations and attribute values, while holding time constant (the year 2010). The three categories each consist of one or more dimensions (Figure 2, center). Of particular interest in the case of geospatial data is the location axis, which has two or more dimensions. A location may be recorded as two-dimensional coordinates on the surface of the earth, and elevation may be added as a third location dimension. Notably, these dimensions of spatial location are orthogonal to one another, and values along these dimensions tend to be autocorrelated: "everything is related to everything else, but near things are more related than distant things" (Tobler's first law of geography, [11]).

Cartographic Processing

Cartographers have developed techniques to transform geospatial data in preparation for display in geographic maps. Two such techniques, projections and generalization, are described here (a minimal code sketch of a projection follows at the end of this subsection).

Projections, or systematic translations of spherical geographic coordinates (e.g., latitude and longitude) into two-dimensional planar (page or screen) coordinates, are standardized data transformations. Locations in the physical world have a one-to-one mapping with locations in the two-dimensional frame (although relationships between locations are inevitably, but predictably, distorted). Even though projection introduces error in the location data, the resulting two-dimensional model has been applied to good effect in helping people conceptually understand and reason about phenomena on the surface of the earth. Print and screen technologies have traditionally necessitated the dimension reduction from three to two dimensions for map production. Even though the physical world is three dimensional and 3D rendering technologies are emerging, the most common map displays (maps printed on paper, displayed on computer screens) are still constrained to a two dimensional spatial plane. Although this spatial dimension reduction affords a simplified special case of geospatial data, projection may not be required for auditory representations [2] and may be propagating a visual bias into auditory display design.

As a second example, generalization eliminates unnecessary clutter or emphasizes specific characteristics of the data [12], and may be applied to data from any of the three data cube categories. Cartographers have developed many generalization routines for geospatial data in visual displays, and analogous approaches have also been applied in sonification, e.g., by emphasizing "distinct" data values [6]. Generalization transforms data into a simpler or alternative form. The degree to which such tools are applied depends on several factors, including display technology and modality. Generalization helps focus attention on the message that the map was designed to convey, but introduces error and removes detail (compare, e.g., the two maps in Figure 1). In some sonification prototypes, heavy-handed generalization or simplification has made evaluation feasible, but researchers recognize that it is not realistic data (e.g., [13]) and is not functionally equivalent to its visual or tactile counterparts. While arbitrarily reducing data complexity is not a long-term solution, generalization may still play a role in the design of an effective parameter mapping.
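As a concrete instance of the projection step discussed in this subsection, here is a minimal equirectangular (plate carrée) projection in Python. It is chosen only for brevity, not because any map in this paper uses it; like all projections, it maps each lon/lat pair to one planar coordinate while distorting spatial relationships predictably away from the standard parallel.

```python
import math

def equirectangular(lon_deg, lat_deg, lat0_deg=0.0, radius_km=6371.0):
    """Project geographic coordinates (degrees) to planar x/y in km.
    Distances are exact along the standard parallel lat0_deg and are
    predictably distorted elsewhere."""
    lam, phi, phi0 = (math.radians(v) for v in (lon_deg, lat_deg, lat0_deg))
    x = radius_km * lam * math.cos(phi0)
    y = radius_km * phi
    return x, y

x, y = equirectangular(-120.5, 44.0, lat0_deg=44.0)  # a point in Oregon
```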
Symbolization of Geospatial Data

Map symbolization, and more generally graphic or sonification design processes, creates relationships between data values and display values. This section briefly describes the treatment of location and time data in typical cartographic design, and considers how these approaches do or do not apply to auditory displays. In some cases, a direct relationship exists between the data and an analogous display parameter, but a direct relationship is not strictly necessary, and mappings may cross the boundaries outlined by the categories of the geographic data cube (Figure 2, right).

Cartographic design commonly presents geospatial locations as corresponding locations within a graphic map display. The spatial arrangement of light receptors in the retina of the eye and the projection of that two dimensional organization into higher level processing areas of the brain [14] further support a direct mapping of the location of geospatial data to location within a visual display. Similarly, this "easy" choice for representing location is also observed in tactile map graphics: the position of symbols on the map corresponds to location in the real world. The relative ease with which humans visually perceive spatial relationships has also led to the use of location to depict non-spatial data. For example, attribute values are represented by a location in space in a scatterplot ("spatialization", [15]) or iconographic display [16]. While the translation across axes of the geographic data cube can be effective, there is still a correlation between the ease and usability of a display and the alignment between the dimensionality of the data and that of the display.

However, this approach does not directly generalize to auditory displays. Despite what may appear at a cursory glance to be an easy direct mapping of location attributes of the data to monaural or binaural spatial audio cues, on closer inspection several limitations become apparent. On the plus side, spatialized audio targets the human ability to localize sounds [17] and can "leverage the natural affordances of the space and the user's location within the sound field" [18]. When applied to sonification of non-spatial data, spatialized audio has been reported to facilitate segregation between data streams [6,19] and to provide orienting cues for the use of a haptic mouse in the absence of visual feedback [13]. But spatialized audio is relative to the listener, and relies on either distance cues or elevation cues to determine a position in a two dimensional space. The egocentric perspective that relies on distance cues may be sub-optimal for communicating relationships between data points (e.g., [20]), and the accuracy of perceived location varies across different axes of physical space (e.g., poor resolution in conveying elevation [21,22]). These nuances highlight open questions about the use of spatialized audio to depict geospatial data.

As an alternative to spatialized audio, cartography has explored depiction of one of the spatial dimensions on the time axis. Time has been used to depict both temporal and spatial data. Interactive maps have provided functionality to produce animations of geospatial data that change over time (see review in [23]). Animated visual displays are consistent with the general recommendation to use time to represent temporal data [24].
As cartographers and sonification designers explored stand-alone auditory displays of geospatial data, the time dimension was also co-opted for the display of location. In 2000, Saue [6] proposed the idea of spatial data "temporalization," which translates location data into the time dimension of an auditory display using a metaphor of walking through an environment. The depiction of location data over time has since been a common approach to sonification of geospatial data (either alone or with redundant location information from other sensory modalities). A drawback to this approach, however, is that the reduction of two-dimensional space to a linear sequence takes longer to perceive than a visual display of the same data, and data that are spatially proximal may be separated by extended time intervals. The listener faces the challenging task of remembering a long sequence of data values and mentally reconstructing two-dimensional space. And in the context of accessibility, the resulting display, which may take more than a minute to render depending on the size of the data set, lacks functional equivalence with its visual counterpart, which might take only a few seconds to perceive and mentally process. The next section expounds on the auditory display of geospatial data, reviewing a number of sonification examples.

SONIFICATION OF GEOSPATIAL DATA

As sound production and real-time audio rendering became possible in computer hardware, and later through software, sonification emerged as a feasible data display modality across many application domains [25,26]. Although researchers have recognized the challenge of presenting multiple variables in a single audio stream (e.g., [24]), there are also many success stories. Sonification has been used to depict large, complex, and multi-dimensional scientific data sets, including recordings of solar winds [27] and climate data [28]. Effective and efficient sonification of geospatial data, however, is still under investigation.

Amid ongoing advances in technology and expanding adoption of sonification across multiple disciplines, geographers, too, considered ways to incorporate audio into map design. Audio was viewed as a "largely untapped medium for the communication of cartographic data" [8] and a "means of expanding the representational repertoire of cartography and visualization" [9]. Geographic applications began pairing auditory representations of non-spatial data with visual map displays. For example, Fisher [8] represented uncertainty information associated with classified remote sensing data, and Krygier [29] augmented animated graphics with redundant information through natural sounds. Over the following decades, however, interest among cartographers waned. Although the introduction of audio capability in web browsers spurred interest in multimedia mapping (see review in [10]), initial difficulties adopting audio into cartographic design had already set the tone. Some geographers and cartographers came to doubt the potential of auditory displays to represent spatial data over non-trivial spatial extents [30], or grew skeptical of any non-visual display of geospatial data [31]. Rather than justifying the abandonment of sonification, however, this skepticism could simply indicate the lack of an appropriate design. And not all have abandoned auditory map display: "Rather, the use of sound forces us to rethink the very concept of the map as primarily a visual image of space that serves as a simple conveyer of information" [32].
The remainder of this section explores three groups of examples, organized by the role that audio plays in the display: audio-enhanced displays, multimodal displays, and stand-alone sonification.

In audio-enhanced displays, sonification of attribute data accompanies or enhances another display modality, and location information is conveyed through, e.g., vision [30,10]. Interaction with a mouse, touchpad, or stylus triggers playback of an audio recording or renders a data value in audio that is specific to the selected location. For example, in an interactive web map the map user triggers rendering of a parameterized note or playback of a pre-recorded audio clip by selecting a location with a mouse click or by tapping a touchscreen. Within this group of audio-enhanced displays, the role of sonification is limited to the display of isolated non-spatial data values. The audio component of the display cannot stand alone; the display relies on an alternate display modality to communicate spatial location, the aspect of the data that makes it (geo-)spatial. And the map reader must mentally integrate disparate sensory input streams to interpret the complete set of spatial and non-spatial data attributes.

In other multimodal displays, the auditory component of the display conveys partial or redundant information about spatial location. Location data are depicted in a two-dimensional plane through, e.g., proprioceptive feedback [33,34,35,36,37,38,39] or a haptic device [40,13]. While such displays have found some success communicating spatial data, evaluations have found that users have difficulty interpreting spatial patterns without a companion visual or tactile display [41,38] or without contextual clues about the specific layout [42].

The number of examples of stand-alone sonification of (geo-)spatial data is more limited. Stand-alone auditory displays encode sufficient information in the auditory stream to convey location information independent of other display modalities. Flowers, Buhman, and Turnage [43] used frequency and time as two axes for an auditory display of scatterplots, depictions of (non-geographic) data points within a two-dimensional space. Alty and Rigas introduced AudioGraph [42], which used pairs of notes to represent coordinate locations within a display: timbre indicated which axis was being represented, and frequency encoded location along that axis. A virtual cursor traced shapes in the display, playing a pair of notes at each vertex. Specifically for geospatial data, Zhao et al. implemented a stand-alone auditory display in iSonic [21] that traverses the two-dimensional geographic space following pre-established scan patterns. A virtual cursor moves through the display as auditory feedback indicates the data value at the cursor location. As the sounds play, the listener must mentally assemble the individual notes to recreate the two-dimensional arrangement of objects in the display. Reports from evaluation of the AudioGraph display indicate some success communicating spatial information, but also difficulty interpreting overall patterns [42]. Over several years of development, the iSonic interface seems to have moved away from the audio-only display in favor of an interactive display with a spatial input device [35]. Such findings are consistent with other evaluations of non-visual displays in which a heavy burden is placed on users' working memory [44].
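The pre-established scan patterns described above can be sketched briefly: the Python fragment below performs a row-major sweep that turns a small raster of attribute values into a time-ordered schedule of (onset, frequency) note events. The grid, note duration, and pitch range are illustrative assumptions, not parameters of iSonic or AudioGraph.

def temporalize(grid, note_dur=0.25, f_lo=220.0, f_hi=880.0):
    """Row-major scan: flatten a 2D grid into (onset_s, frequency_hz) events."""
    vals = [v for row in grid for v in row]
    vmin, vmax = min(vals), max(vals)
    events, t = [], 0.0
    for row in grid:
        for v in row:
            norm = (v - vmin) / (vmax - vmin) if vmax > vmin else 0.0
            events.append((t, f_lo + norm * (f_hi - f_lo)))
            t += note_dur
    return events

# A 3x3 toy choropleth produces nine notes over ~2.25 s; rendering time
# grows linearly with cell count, illustrating the listening-time and
# working-memory costs discussed above.
for onset, freq in temporalize([[1, 2, 3], [4, 5, 6], [7, 8, 9]]):
    print(f"t={onset:.2f}s f={freq:.0f}Hz")

Note also that vertically adjacent cells (the first cells of consecutive rows, for instance) end up three notes apart in time, a concrete instance of spatially proximal data being separated by extended intervals.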
The parameter mappings employed in the examples mentioned in this section are summarized in Table 1. The table lists the location and attribute categories present in each data set (none of the example sonifications depicted temporal data), along with the respective auditory dimension in which that data was encoded. The listed parameter mappings are available in the respective systems, though not necessarily concurrently. The set of examples is not intended to be exhaustive, but to illustrate the variety of approaches that have been explored. The next section reflects on trends among these example systems and poses an open question for future research.

REFLECTION AND OUTSTANDING CHALLENGES

The examples of geospatial data sonification listed above show diversity among parameter mappings, and show that translations across categories of data and display dimensions are common. The categories themselves were not mutually exclusive. Amplitude, for example, was used in a way that mimicked distance (location) in the physical world, with closer features represented as louder sounds [34,13]. But amplitude was also used in a way that mimicked magnitude (attribute) [37,10]. Both uses employ intuitive metaphors, and there is no single rule for assigning amplitude to a category of display dimension. Within the location category, special attention is drawn to the distinction between egocentric and allocentric perspectives. Differences between the two perspectives have implications for interpretation of the display. Beyond categorization as spatial and non-spatial, parameter mappings may also need to address or accommodate the alignment of perspectives between the data and the display. Acknowledging the limitations of a simplified account of the examples, this summary offers one interpretation and provides a basis for discussion; for full details of the parameter mappings and their respective display systems, readers are referred to the original papers.

With a combinatorially large number of possible mappings, the selection of auditory dimensions to serve as such a reference frame is neither obvious nor trivial. Selecting a parameter mapping is complicated by perceptual limits on the number of auditory events that can be processed concurrently [45] and by interactions between auditory dimensions (e.g., [46,24]). Further, results from controlled empirical evaluations of sonification parameters must be applied with caution when removed from the laboratory and applied to real-world data [46,13], or when generalized from pure tones to more complex musical sounds [47]. A two-dimensional auditory reference frame to support effective and efficient auditory displays of geospatial data remains elusive.

Table 1: Existing implementations of (geo-)spatial data sonification provide examples of parameter mapping sonification representing location and attribute data (none of the example systems depicted temporal data). Grey highlighting and bold font emphasize egocentric location information and translation across axes of the geographic data cube, respectively.
Reference, Content Domain (System) | Location | Attribute

Audio-enhanced Displays (no auditory display of spatial location)
Bearman and Fisher [30], elevation, uncertainty (ArcGIS extension) | — | data value → frequency
Brauen [10], multiple airborne pollutants | — | data value → amplitude, audio clip

Audio-enhanced Displays (with complementary visual, proprioceptive, tactile, or haptic display of location data)
Krueger and Gilden [33], named polygon features (KnowWhere™) | relative location → audio icon | data value → speech
Daunys Brittell, Young, and Lobben [38], choropleth map (mGIS) | relative location → audio icon | data value → frequency
Kaklanis, Votis, and Tzovaras [40], reference maps (Open Touch/Sound Maps) | direction → azimuth, distance → frequency, relative location → audio icon | data value → speech
Geronazzo et al. [13], guided pointing, object location | relative location → azimuth, relative location → elevation, distance → amplitude | —
Schito and Fabrikant [39], elevation | x-location → azimuth, frequency; y-location → waveform; distance → frequency; relative location → time | data value → duration, frequency

Audio Displays (stand-alone auditory display of spatial location)
Flowers, Buhman, and Turnage [43], scatterplot | … | …

In the case of geospatial data, the need for data patterns to be simultaneously interpreted across multiple dimensions poses a unique problem. The goal is not to encourage separation or perceptual streaming, but to organize data within a two- (or three-) dimensional frame and communicate patterns that occur across those dimensions (cf. [6,19]). Recall at this point that the objective of thematic maps is to communicate a general spatial pattern.

A trend across the (geo-)spatial sonification examples is reliance on time or temporal order. The sonification follows a cursor through the display, controlled by either an interactive spatial input device or a pre-established scan path. The supporting metaphor of movement through a space occupied by sound sources offers both affordances from real-world experience and appealing simplicity of implementation. But the resulting sequential exploration is susceptible to variable or ambiguous interpretations, prompted by spurious depiction of distance between locations that are proximally located (pre-planned and automated scan patterns) or by emergent textures and patterns that are intertwined with the speed of movement (user-controlled spatial input devices). Time that elapses during movement directly corresponds with the timing of auditory events. This interplay between space and time leads to emergent properties of the sound, creating textures and patterns [16,24,37] that can be difficult to anticipate from the underlying data. Harnessing the power of these emergent patterns means striking a balance between a parameter mapping that is "exact and rigorous" [48] and one that is more fluid: "It should be stressed that the sound tracks need to be constructed according to their own sound logic and cannot simply be reduced to the structure of data or other map variables" [49]. Again, for an application in which general patterns are of interest, the ability to extract a single data value with high precision is not a goal of the display. For sonification designers who are steeped in a visual tradition, such as that of cartography, finding such a balance will require conscious effort to mitigate visual bias (e.g., [50]). Even though geospatial data are not inherently visual, an ocular-centric trend in cartography and GIS has emphasized visual displays.
Mainstream cartographic designs often focus on graphic displays and only occasionally, and later, append an auditory component to the display. The auditory display is an afterthought, subject to the decisions that were made to optimize the graphic display. A conscious effort is required to avoid the temptation to simply translate visual displays into audio. Without diminishing the value of visual displays, the focus on map graphics allows implicit assumptions in the way that sighted researchers think about geospatial data to persist unnoticed and to creep into the tools for map production. For example, the implementation of the GeoTools library [51] tightly integrates its spatial data representations with the Swing graphics library [52]; this benefits code optimization for visual map displays, but it can complicate efforts to explore auditory displays that still rely on the library's geospatial data handling. Map production that targets auditory displays earlier in the design process can help reverse some of this embedded visual bias.

In practice, the dominance of visual maps in print and on computer screens has led to a scarcity of alternative cartographic display techniques and a reduction in the accessibility of geospatial data, particularly for people with disabilities [53]. As noted in the context of multidimensional astronomy data, however, auditory display both enhanced accessibility for researchers who are blind and conveyed patterns beyond those apparent in visual displays [25,54]. Visual displays of geospatial data are good, but not sufficient to support a diverse population of map readers and growing volumes of scientific data. With efficient and usable designs, auditory displays could make a substantial contribution to the cartographer's toolbox.

Using the geographic data cube to describe the structure of cartographic sonification reveals an open question: how does translation across the conceptual boundaries of the geographic data cube influence communication of geospatial data through sonification? By investigating this question, geographers and cartographers can join musicians and sound designers in pursuing ways to think about auditory patterns that emerge from sonification of multi-dimensional data sets, a pursuit that seeks a reference frame for cartographic sonification at the intersection of exact science and expressive art.

ACKNOWLEDGEMENT

This work was developed in part through writing for my doctoral comprehensive exam; I would like to thank Dr. Amy Lobben and all members of my Dissertation Advisory Committee for their feedback and insights that helped shape this work.
Sexual dimorphic parameters of femur: a clinical guide in orthopedics and forensic studies

Sexual dimorphic studies of various parameters of the femur play an important role in forensic studies. Various femoral morphometric parameters help estimate an individual's age, sex, and stature from unknown skeletal remains. This research was done to analyze the maximum length, trochanteric oblique length, and diameter of the femur head for sexual dimorphism. The study was done on 200 (128 male and 72 female) Indian adult human femora, which were fully ossified, dry, and free from deformity. The maximum length of the femur (L), trochanteric oblique length (TOL), and vertical diameter of the head (VDH) were measured using an osteometric board and digital Vernier calipers. The mean length of the femur was 436.88 mm in males and 402.38 mm in females. The mean trochanteric oblique length was 423.78 mm in males and 387.18 mm in females. The mean vertical diameter of the femur head was 43 mm in males and 38.19 mm in females. Based on the results of this study, it was concluded that the mean values of maximum length, trochanteric oblique length, and vertical diameter of the femur head are significantly higher in males than in females. These parameters are useful and reliable for sexual dimorphism in anthropometric and forensic studies, especially in identifying skeletal remains. These differences can also be considered in selecting or designing the exact ranges of gender-specific prostheses for orthopedic surgeries.

INTRODUCTION

The hip joint is very stable, and it is the largest joint of the body. This stability is governed by the typical anatomical shape of the articulating surfaces and the ligaments. It is a multiaxial ball-and-socket joint, and its maximum stability is due to the deep insertion of the head of the femur into the acetabulum [1]. The femur is one of the largest bones of the body and is subjected to maximum weight-bearing; its typical geometric shape gives it strength and stability. Morphometric parameters, including hip axis length and femoral head width, have been related to the mechanical strength of the proximal femur [2]. The morphology of the proximal femur, especially the relationships between the head, neck, and proximal shaft, has been investigated numerous times. There are many pathologies, such as avascular necrosis, osteoporotic fractures, and osteoarthritis, and a greater understanding of the anatomy of this area might refine treatment options for these conditions [3]. Consequently, researchers started studying measurements of the proximal femur. Many forensic studies have proved the importance of various morphometric parameters of the femur, which help to estimate biological profiles, including age, sex, ancestry, and stature, to identify unknown skeletal remains. The dimensions of the head and length of the femur have been studied extensively by many researchers; these dimensions were found to differ between population groups and at different ages, and the findings on sex also differed slightly [4]. Information on the variations in femoral dimensions between the sexes will help anatomists and forensic experts in sexing femora. Moreover, it will help in devising properly sized prostheses. It will also help orthopedic surgeons in femoral head replacement surgeries.
Furthermore, awareness regarding gender differences will lead to distinct implant designs for male and female patients [5]. Considering the factors above, this study was undertaken to determine the normal range of metrical values for the length and head measurements of the femur in males and females on adult human cadaveric bones. Knowing the importance of sexual dimorphic studies in calibrating exactly sized prostheses in orthopedics and in the identification of skeletal remains by forensic experts, the present study aims: 1. to study and analyze the maximum length, trochanteric oblique length, and diameters of the head of the femur for sexual dimorphism; 2. to compare the femoral sexual dimorphic findings of the present study with those of other studies.

MATERIAL AND METHODS

200 skeletonized samples (128 male and 72 female) of adult human femora of known sex, dry, free from deformity, and fully ossified were obtained from the bone bank of the Department of Anatomy, Government Medical College, Aurangabad, Maharashtra, India. The samples were collected between 1995 and 2018, and the sex and year of sample collection were well documented in the bone bank. The samples were obtained by the burial method. The anthropometric tools used in the study were an osteometric board, sliding digital calipers, and a scale. The inclusion criteria were dry, deformity-free, and fully ossified adult human femora. The exclusion criteria were damaged, burnt, or abnormal bones and bones of children. The following measurements were taken as described by Singh I.P. [6] and M. Sreenivas: maximum length of the femur (L), trochanteric oblique length (TOL), and vertical diameter of the head (VDH). The above measurements were taken using sliding digital Vernier calipers. Each parameter was tabulated and statistically analyzed. Mean, standard deviation, and ranges were obtained for male and female femora, and an independent t-test was applied for statistical analysis. For statistical analysis, GraphPad Prism 5.01 software was used. Comparative graphs of male and female values were drawn, showing the zones of difference and overlap between male and female values.

RESULTS

The mean value for the maximum length of the femur was 436.88 mm in males and 402.38 mm in females, with ranges of 392-490 mm and 360-469 mm, respectively. The standard deviation was 19.880 in males and 22.581 in females. The difference between the mean values in males and females was highly significant (p-value < 0.001) (Figure 4, Table 1). The mean value for the trochanteric oblique length of the femur was 423.78 mm in males and 387.18 mm in females, with ranges of 386-471 mm and 303-452 mm, respectively. The standard deviation was 19.024 in males and 40.152 in females. The difference between the mean values in males and females was highly significant (p-value < 0.001) (Figure 5, Table 1). The mean value of the vertical diameter of the femur head was 43 mm in males and 38.19 mm in females, with ranges of 30.54-48.73 mm and 33.33-43.34 mm, respectively. The standard deviation was 2.4185 in males and 2.3040 in females. The difference between the mean values in males and females was highly significant (p-value < 0.001) (Figure 6, Table 1).
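As a minimal sketch of the independent t-test applied above, the following Python fragment simulates two groups from the reported summary statistics (mean, SD, n) and compares them; the simulated values are stand-ins for illustration, not the study's raw measurements, and Welch's variant is one reasonable choice of test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
male = rng.normal(loc=436.88, scale=19.880, size=128)   # femur length, mm
female = rng.normal(loc=402.38, scale=22.581, size=72)

t, p = stats.ttest_ind(male, female, equal_var=False)   # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3g}")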
DISCUSSION

In the following discussion, these results are compared with those of previous researchers. The mean value for the total length of the femur was 436.88 mm in males and 402.38 mm in females, with ranges of 392-490 mm and 360-469 mm, respectively; the standard deviation was 19.880 in males and 22.581 in females, and the difference between the sexes was highly significant (p-value < 0.001). Studies conducted by other researchers were in concordance with these findings. For example, R. Purkait and Chandra, in their study of 200 male and 80 female femora, found the mean total length of the femur to be 451.47 mm in males and 403.69 mm in females; comparing the mean values, they found that total length was higher in males than in females [7]. Similarly, the study carried out by Gargi Soni et al. on 40 male and 40 female femora showed mean total lengths of 439.57 mm in males and 410.60 mm in females, with high statistical significance [8]. Kalpana R. et al. made a similar study on 100 males and 100 females and found mean total lengths of 441.36 mm in males and 394.60 mm in females, with statistically significant results [9]. However, other studies reported mean total femoral lengths in both males and females that differed slightly from the present study [10][11][12][13][14].

The mean value for the trochanteric oblique length of the femur was 423.78 mm in males and 387.18 mm in females, with ranges of 386-471 mm and 303-452 mm, respectively; the standard deviation was 19.024 in males and 40.152 in females, and the difference was highly significant (p-value < 0.001). Leelavanthy et al. obtained mean trochanteric oblique lengths of 419.3 mm and 421.5 mm for the right and left femora of males, and 389.8 mm and 385.6 mm for the right and left femora of females, respectively. Comparing the mean values, they found that trochanteric oblique length was higher in males than in females on both sides; however, the values were not statistically significant for the right side but were highly significant for the left side [15]. In a study conducted by Shital M. et al. on 187 male and 179 female samples, the mean values for trochanteric oblique length were 440.3 mm and 396.4 mm, respectively [12]. However, the studies conducted by Shital M. et al. [12], P.S. Igbigbi and B.C. Msamati [16], and Pearson [17] reported mean trochanteric oblique lengths on the higher side in both males and females compared to the present study.

The mean value for the vertical diameter of the femur head was 43 mm in males and 38.19 mm in females, with ranges of 30.54-48.73 mm and 33.33-43.34 mm, respectively; the standard deviation was 2.4185 in males and 2.3040 in females, and the difference was highly significant (p-value < 0.001). The results for the head of the femur in the present study correlate well with the observations made by other researchers [9,12,18]. Shital M. et al., in their study of 187 male and 179 female femora, found the mean vertical diameter of the head of the femur to be 43.61 mm in males and 38.7 mm in females; comparing the mean values, they found the vertical diameter to be higher in males than in females, with high statistical significance [12].
Similarly, Kalpana R. et al., in their study of 100 male and 100 female femora, found the mean vertical diameter of the head of the femur to be 44.37 mm in males and 38.44 mm in females; comparing the mean values, they found the vertical diameter of the head to be higher in males than in females, with high statistical significance [9]. In 2019, K.V. Pavan Kumari conducted a similar study and also found higher values in males [19]. Studies by other authors reported mean values for this parameter that differed slightly from the present study [10,16,20]. The present study is in concordance with most of the studies; however, the sample size and the use of a single measurement method remain its main limitations.

CONCLUSION

Focusing on the results of this study, it was concluded that the mean values of maximum length, trochanteric oblique length, and vertical diameter of the head of the femur are significantly higher in males than in females. These parameters are useful and reliable for sexual dimorphism in anthropometric and forensic studies, especially in identifying skeletal remains. These differences can also be considered in selecting or designing the exact ranges of gender-specific prostheses for orthopedic surgeries.
Zosteriform skin metastasis caused by retrograde lymphatic migration of metastatic squamous cell lung carcinoma

Background Zosteriform skin metastasis (ZSM) is rare, and its etiology is not well understood. ZSM is possibly derived from the retrograde movement of cancer cells through the lymphatic vessels during disease development. However, it has been difficult to demonstrate this, as no specific findings have been observed. Case presentation A 68-year-old man presented to our department with neck lymphadenopathy. After detailed examinations, squamous cell lung carcinoma (cT2aN3M1c) was diagnosed. Although cisplatin combined with gemcitabine was administered, his cancerous lymphangiopathy was exacerbated, and ZSM was observed on his right chest. Pembrolizumab was initiated as second-line chemotherapy; however, the patient died 7 months after the initial presentation. In this case, fluorodeoxyglucose-positron emission tomography indicated the presence of skin metastasis and cancerous lymphangiopathy. Similarly, on autopsy, tumor-cell-filled lymphatic ducts were observed near the right subclavian vein and in the cutaneous lymphatic vessels draining from the right hilar lymph nodes. Conclusions To the best of our knowledge, this is the first study to demonstrate, using radiographic and pathological analysis, that the localization of ZSM in the cutaneous lymphatics was caused by the retrograde movement of cancer cells through the lymphatic vessels. In addition, fluorodeoxyglucose-positron emission tomography may help predict skin metastasis induced by cancerous lymphangiopathy.

Background

Zosteriform skin metastasis (ZSM) develops on the skin in regions similar to those where herpes zoster manifests. However, ZSM is rare, and reports of ZSM development from primary lung cancer lesions are limited [1][2][3][4][5]. Although its etiology is unknown, two hypotheses may explain ZSM development. One hypothesis involves a Koebner-like reaction at the site of a past varicella-zoster virus (VZV) infection [6], while the other involves the retrograde movement of cancer cells through lymphatic or vascular vessels [7]. Using 18F-fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) and an autopsy, we report the first case of ZSM development linked to the retrograde mechanism as a result of cancerous lymphangiopathy.

Case presentation

A 68-year-old man presented to our department with neck lymphadenopathy. His medical history revealed that he had undergone endoscopic resection of a colon polyp. Physical examination revealed edema from the neck to the right shoulder and swelling of the right postauricular, cervical, and axillary lymph nodes. Chest CT revealed a tumor in the right upper lobe with hypertrophy of the bronchovascular bundles (Fig. 1a). Lymphadenopathy was apparent at the hilar, mediastinal, supraclavicular, axillary, and cervical lymph nodes, predominantly on the right (Fig. 1b, c). The superior vena cava was slightly compressed but patent. Right cervical lymph node biopsy and sputum cytology revealed lung squamous cell carcinoma (cT2aN3M1c; programmed death ligand 1 tumor proportion score, 10%). Although cisplatin plus gemcitabine treatment was performed for six cycles and reduced the size of the primary lesion, the lymphedema progressed, and papules appeared on the anterior and lateral chest wall (Fig. 2a). FDG-PET/CT revealed scattered FDG uptake in the skin around the papules (Fig. 2b, Additional file 1: Fig. S1).
Skin biopsy indicated ZSM, and the disease was judged to be progressive. Pembrolizumab was administered as second-line chemotherapy for two cycles; however, the cancerous lymphangiopathy worsened, and the patient died 7 months after the initial presentation. An autopsy and post-mortem biopsy of the skin and cervical lymph nodes indicated metastatic primary squamous cell lung carcinoma. In addition to the bilateral hilar lymph node metastasis, which was significantly larger on the right side, the lymphatic vessels near the subclavian vein and the lymphatic vessels inside the dermis near the papules were filled with tumor cells (Fig. 2c).

Discussion and conclusions

We present the case of a patient with ZSM that developed from lung cancer as a result of retrograde metastasis of cancer cells through the lymphatic vessels. The retrograde mechanism of ZSM development has previously been supported by qualitative lymphoscintigraphy analysis [1]. To our knowledge, this is the first report to use pathological data from an autopsy and radiological FDG-PET/CT data to confirm the development of ZSM via retrograde lymphatic migration of cancer cells. The pathogenesis of ZSM is hypothesized to occur as follows: (1) a Koebner-like reaction at the site of past VZV infection; or (2) retrograde migration via the lymphatics or blood vessels [6,7]. In our case, the lack of a past medical history of skin disease, together with the continuous tumor regurgitation from the hilar lymph nodes to the dermal lymphatic ducts via the ipsilateral mediastinal, subclavian, and axillary lymph nodes confirmed on autopsy, led us to conclude that ZSM developed through the latter mechanism (Fig. 2c). In this case, we observed ZSM using FDG-PET/CT. It is generally difficult to confirm whether an FDG-PET/CT finding represents cancerous lymphatic regurgitation or simple lymphedema [8]. In cases of lymphedema, FDG is taken up by tissues slightly but uniformly [8]. In contrast, this case showed distinctive FDG uptake patterns that matched the locations of the skin metastases. Based on these observations and the histology of autopsy specimens, we speculate that FDG-PET/CT features may help diagnose retrograde lymphatic skin metastasis. However, some limitations should be acknowledged. First, pathological demonstration of metastasis to the right axillary lymph nodes, which are the confluence of the lymphatic vessels of the chest wall, was not performed because these samples were not collected during autopsy. However, 18F-FDG uptake was similarly observed in the skin and in the subclavian and axillary lymph nodes; therefore, we hypothesized that there were also metastatic sites in the axillary lymph nodes. Second, we could not completely exclude the possibility that a Koebner-like reaction caused the ZSM formation. In some cases, the detection of viral DNA in the skin lesions of ZSM has been useful for demonstrating this mechanism, but in many cases this method has proved unsuccessful, suggesting a more complicated pathologic mechanism for the Koebner-like reaction in ZSM [6]. Given that the patient in this case had no history of zoster, we believe it is unlikely that VZV infection contributed to the ZSM development. In conclusion, using FDG-PET/CT and an autopsy, we demonstrated that ZSM developed through the retrograde migration of cancer cells to the right anterior thoracic lymphatics, via the hilar, mediastinal, subclavian, and axillary lymph nodes/vessels, in a patient with lung cancer.
Although pathological proof remains essential, FDG-PET/CT may be useful for evaluating metastasis progression and predicting complications, including cancerous lymphangiopathy and skin metastasis.
Effects of thyroid-stimulating hormone on adhesion molecules and pro-inflammatory cytokines secretion in human umbilical vein endothelial cells

Atherosclerosis is a multifactorial disorder that affects the arterial wall. It has been reported that hypothyroidism and thyroid hormone deficiency are related to cardiovascular disorders. Also, endothelial dysfunction plays an essential role in the development of atherosclerosis. We aimed to evaluate the effects of thyroid-stimulating hormone (TSH) on the pro-inflammatory cytokines tumor necrosis factor-α (TNF-α) and interleukin-6 (IL-6), the angiogenic vascular endothelial growth factor (VEGF), and the leukocyte adhesion molecules intercellular adhesion molecule 1 (ICAM-1) and E-selectin in human umbilical vein endothelial cells (HUVECs). In this study, HUVEC cells were treated with 1 and 2 μM of TSH for different treatment times. The gene and protein expression of ICAM-1, VEGF, and E-selectin were evaluated by real-time polymerase chain reaction and western blotting, respectively. Likewise, TNF-α and IL-6 protein levels were determined by the ELISA method. VEGF, ICAM-1, and E-selectin as endothelial dysfunction markers, and TNF-α and IL-6 as pro-inflammatory cytokines, were detectable in HUVEC. Besides, the results of this study revealed that TSH treatment down-regulates TNF-α and IL-6. Evaluation of the gene and protein expression data revealed up-regulation of ICAM-1, E-selectin, and VEGF in TSH-treated cases over different periods of exposure. Considering the multiple actions of TSH, it could be concluded that TSH plays a dual role in atherogenesis: anti-inflammatory effects on one side and, on the other, induction of angiogenesis and leukocyte adhesion, which is related to vascular cell proliferation.

INTRODUCTION

Impaired thyroid hormone metabolism may be considered a cause of various heart disorders and abnormal endothelial function (1)(2)(3). Indeed, thyroid hormone deficiency and hypothyroidism are related to defects in the secretion of endothelium-dependent dilation factors (1,4). The levels of thyroid-stimulating hormone (TSH) are increased in subclinical hypothyroidism, which has been related to elevated cardiovascular risk in epidemiological surveys (5). TSH exerts its activity through a specific cell membrane receptor, and TSH receptors are controlled by a thyroid-specific growth factor. Based on recent reports, the presence of the TSH receptor has been shown in extra-thyroidal organs, with as yet unidentified functions. Data show that TSH plays an essential role in hepatic cholesterol production and increases total blood cholesterol levels. Also, considering that the TSH receptor is present in HUVECs, it has been indicated that TSH exposure stimulates tumor necrosis factor-α (TNF-α), a pro-inflammatory factor (6), and related markers (1,7,8). Also, a positive relationship between endothelium-dependent vasodilation and TNF-α and interleukin-6 (IL-6) was reported in hypothyroid patients (9). Likewise, TSH stimulates IL-6 release from 3T3-L1 adipocytes (5). On the other side, since adhesion molecules are engaged in inflammatory phenomena, the study of these factors is imperative. In this regard, reports have shown that patients with subclinical hypothyroidism had noticeably elevated levels of intercellular adhesion molecule-1 (ICAM-1) versus healthy subjects (10).
In this regard, it could be deduced that the vascular inflammatory reaction involves intricate interactions among inflammatory cells, endothelial cells, vascular smooth muscle cells, and the extracellular matrix. Therefore, vascular damage is related to elevated expression of adhesion molecules by endothelial cells and to the actions of inflammatory cells, growth factors, and cytokines (11).

According to previous studies, TNF, as an inflammatory cytokine, plays an imperative role in vascular disturbance in experimental surveys (11,12). Cellular phenomena such as leukocyte adhesion and invasion of vascular cells arise with pro-inflammatory cytokines, including IL-6, which plays a key role in atherosclerosis (12,13). In addition, pro-inflammatory cytokines and TNF-α promote the production of ICAM-1 in cultured endothelial and epithelial cells; it is possible that this effect plays an essential role in the regulation of inflammation. It has been reported that IL-6 induces oxidative stress and endothelial dysfunction by affecting the angiotensin II receptor (14).

On the other hand, TNF-α activates NF-κB, which results in E-selectin gene expression along with reactive oxygen species (ROS) production. Indeed, the formation of ROS in endothelial cells is prompted by inflammatory cytokines such as TNF-α (15). Considering that vascular endothelial growth factor potently induces angiogenesis, there is evidence that TNF-α has an antiangiogenic effect mediated through the vascular endothelial growth factor (VEGF)-specific angiogenic pathway (16). Impaired vascular function has a multifactorial etiology associated with aging, inflammation, injury, hyperlipidaemia, and several other elements engaged in the pathogenesis of atherosclerosis (12). Therefore, this study evaluated the effects of TSH on pro-inflammatory factors such as TNF-α and IL-6, together with endothelial dysfunction markers (VEGF, E-selectin, and ICAM-1) that are related to atherosclerosis induction in HUVECs.

The aim of our survey was to evaluate the effects of TSH on HUVECs, as a surrogate for vascular endothelial cells, to determine whether a change in TSH level could affect endothelial function. Indeed, we investigated the possible effects of TSH on endothelial dysfunction, since these effects, together with altered levels of ICAM-1, E-selectin, and VEGF (which have major roles in endothelial function) (17), might be essential factors in atherosclerosis progression.

Chemicals and materials

TSH was purchased in powdered form from Sigma Aldrich Company (Cat No: T1775, Sigma, USA) and dissolved in distilled water. TNF-α and IL-6 enzyme-linked immunosorbent assay (ELISA) kits were obtained from IBL International GmbH (Hamburg, Germany). RNA purification and cDNA synthesis kits were purchased from GeneAll Company (South Korea). A real-time polymerase chain reaction (PCR) master mix was obtained from Ampliqon (Denmark). Western blot materials were purchased from Bio-Rad Inc., and cell culture materials were obtained from Biowest, France.

Cell culture and treatments

HUVECs were purchased from the Pasteur Institute of Iran (Tehran, I.R. Iran). The HUVEC cells were seeded in a complete medium containing Dulbecco's modified eagle medium (DMEM)/F12 with 10% fetal bovine serum (FBS) and 1% penicillin/streptomycin, and the cells were incubated in a 5% CO2-humidified incubator at 37 °C. The HUVEC cells had a flagstone-like morphology in all cell cultures.
In order to evaluate the effects of TSH, 1 × 10⁶ HUVEC cells were seeded 24 h before treatments in FBS-free medium. Thereafter, cells were treated with different doses of TSH (1 and 2 µM) for 24 and 48 h in FBS-free medium, according to a previous similar survey (18).

The enzyme-linked immunosorbent assay

IL-6 and TNF-α levels were determined by the ELISA method. HUVEC cells (5 × 10³/well) were cultured in 96-well plates overnight before treatments. Subsequently, the cells were treated with TSH (1 and 2 µM) for 12, 24, and 48 h. After the period of exposure, the cell culture plates were centrifuged at 1500 g for 5 min, and the supernatants were collected. Protein levels were then evaluated according to the ELISA kit protocols. First, 100 µL of a standard solution was added to each well, and the well blank was prepared by adding 100 µL of the sample diluent. In addition, 50 µL of the sample diluent was added to each sample well after 50 µL of the treated sample had been added. Then, biotin conjugate was added and incubated for 2 h at room temperature (18-25 °C). After this incubation, streptavidin-HRP was added and maintained at room temperature for 1 h. The wells were then washed with phosphate-buffered saline (PBS, pH 7.4) and incubated with tetramethylbenzidine as substrate at room temperature. The absorbance of each sample at 450 nm was quantified with an ELISA reader.

Real time-polymerase chain reaction

Cells (10⁶/well) were grown in 6-well plates 24 h before exposure to TSH. To evaluate the effects of TSH treatments on the mRNA levels of ICAM-1, E-selectin, and VEGF, the cells were treated with different concentrations (1 and 2 µM) of TSH for 24 and 48 h. Total RNA was purified with the GeneAll kit (cat. No.: 305-101) according to the manufacturer's instructions, and RNA integrity was evaluated by gel electrophoresis. Two μL of total RNA was utilized for cDNA synthesis. Real time-PCR was performed with Ampliqon master mix (cat. No.: A325402) and specific forward and reverse primers for all genes (with two replicates). β-actin mRNA was evaluated as the internal control to normalize gene expression. The primer sequences used for ICAM-1, E-selectin, VEGF, and β-actin are listed in Table 1. The mRNA levels of ICAM-1, E-selectin, and VEGF were normalized to β-actin mRNA levels. PCR was run for 40 cycles at 95 °C for 15 s, 62 °C for 30 s, and 72 °C for 30 s. The negative controls for each target showed an absence of carryover. Target mRNA copies were calculated relative to β-actin using the 2^-ΔCt method.
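A minimal Python sketch of the 2^-ΔCt calculation described above is given below; the Ct values are hypothetical placeholders, not measurements from this study.

def relative_expression(ct_target, ct_reference):
    """Return 2^-(Ct_target - Ct_reference), i.e., the 2^-dCt method."""
    return 2.0 ** (-(ct_target - ct_reference))

# Hypothetical Ct values for one sample (beta-actin as internal control):
ct = {"ICAM-1": 24.1, "E-selectin": 26.8, "VEGF": 23.5, "beta-actin": 18.0}
for gene in ("ICAM-1", "E-selectin", "VEGF"):
    print(gene, round(relative_expression(ct[gene], ct["beta-actin"]), 5))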
Protein extraction and western blotting

In order to evaluate the effects of TSH on the protein expression of ICAM-1, VEGF, and E-selectin, 10⁶ HUVEC cells were seeded and treated with TSH (1 and 2 µM). After 6 and 24 h of exposure, the cells were washed with PBS (pH 7.4) and then trypsinized. After counting the treated cells, isolated HUVEC cells were lysed and centrifuged at 16000 g for 15 min at 4 °C; the supernatants were then used for protein detection. Protein concentrations were measured by the Bradford method. Protein specimens were separated on SDS-polyacrylamide gels and transferred to polyvinylidene difluoride (PVDF) membranes. The blots were blocked in 5% skim milk solution at room temperature. The PVDF membranes were then incubated overnight with primary antibodies against ICAM-1, E-selectin, and β-actin (Santa Cruz, USA). After washing the membranes, incubation was carried out with a horseradish peroxidase-conjugated anti-mouse secondary antibody for 3 h. The immunoblots were incubated with Amersham ECL western blotting detection reagent (GE Healthcare Life Science, Boston, MA, USA), followed by exposure to X-ray film. Band densities were analyzed with ImageJ software, and each protein band was normalized to the corresponding β-actin band.

Statistical analysis

The presented data were analyzed with SPSS software, version 21.0 (SPSS, Inc.) and GraphPad Prism (version 6.01). Real-time PCR and ELISA data are presented as the mean ± SD. To evaluate the differences between means, t-tests were used to compare each group with the control; for multiple comparisons between groups, one-way ANOVA was applied. Statistical differences were considered significant at P < 0.05.

Effects of TSH on TNF-α and IL-6 expression levels in HUVECs

In order to examine the effects of TSH on endothelial cells, the expression of TNF-α and IL-6 as pro-inflammatory factors was evaluated by ELISA. The cells were exposed to different concentrations of TSH (1 and 2 µM) for 12, 24, and 48 h. The results showed that TSH remarkably down-regulated TNF-α levels in all treatment periods. As shown in Fig. 1, there was a significant decrease in TNF-α in TSH-treated cells compared to the untreated controls at all exposure times (P < 0.05). In further analysis, there were no significant differences between cells treated with 2 µM TSH and cells treated with 1 µM TSH at any examined incubation time (P > 0.05). The results obtained from the IL-6 analysis (Fig. 2) revealed that TSH down-regulated IL-6 levels in the 12 h treatment period (P < 0.01). For the 24 and 48 h incubation periods, there was a decrease in IL-6 levels compared to the control, but these differences did not reach significance (P > 0.05). As presented in Fig. 2, there was a significant decrease in IL-6 levels in cells treated with 1 and 2 µM TSH in comparison to the controls across the exposure times (P < 0.01).

Effects of TSH on VEGF, ICAM-1, and E-selectin gene expression levels in HUVECs

In order to investigate the effects of TSH on the gene expression of VEGF, ICAM-1, and E-selectin in endothelial cells, the mRNA levels of these adhesion- and angiogenesis-related factors were evaluated by real-time PCR. The cells were treated with different doses of TSH (1 and 2 µM) for 24 and 48 h. Real time-PCR results showed that treatment with TSH (1 and 2 µM) for 24 h increased the expression of E-selectin mRNA (Fig. 3A; P < 0.01 and P > 0.05, respectively).
As shown in Fig. 3B, the E-selectin mRNA levels became slowly elevated over 48 h of treatment with 2 µM TSH compared to 1 µM; the amount of increase recorded with 1 µM TSH was higher than that for 2 µM (P < 0.01). Moreover, for the 24 h treatment, the E-selectin mRNA levels of HUVEC treated with 2 µM increased insignificantly in comparison to the control (P > 0.05). Between the doses at the 48 h treatment time, there was a significantly elevated level of E-selectin at 2 µM compared to 1 µM (Fig. 3B). These data indicated that the up-regulation of E-selectin during 48 h was concentration dependent: upon exposure to different concentrations of TSH, the E-selectin levels at the 2 µM dose increased significantly in comparison to the 1 µM treatments.

Based on the real time-PCR data analysis presented in Fig. 4A, the ICAM-1 mRNA levels increased significantly over 24 h of treatment with 1 and 2 µM TSH (P < 0.01). Comparing the doses at the 48 h treatment time, there were elevated levels of ICAM-1 at both 2 µM (P < 0.01) and 1 µM (P < 0.01) in comparison to the controls (Fig. 4B). Furthermore, comparing the different doses of TSH, the ICAM-1 mRNA levels of HUVEC treated with 1 µM were higher than with 2 µM for both the 24 and 48 h exposures (P > 0.05).

As presented in Fig. 5A, the VEGF mRNA levels at 1 and 2 µM TSH increased significantly over 24 h of treatment (P < 0.05). In addition, comparing the different doses of TSH, the VEGF mRNA levels of HUVEC treated with 1 µM differed significantly from those treated with 2 µM TSH in the case of the 24 h exposure, and the VEGF mRNA levels of HUVEC treated with 1 µM TSH differed significantly from the control. For the 48 h incubation with TSH (Fig. 5B), results similar to those at 24 h were obtained; indeed, between the different doses at the 48 h treatment time, there was a higher level of VEGF at 2 µM than at 1 µM (P < 0.05). These data indicated that the regulation of VEGF during 24 h and 48 h was concentration dependent.

Effects of TSH on E-selectin and ICAM-1 protein expression levels in HUVECs

According to the western blotting analysis performed with ImageJ software, as presented in Fig. 6A and 6B, the ICAM-1 protein levels were elevated over 6 and 24 h of treatment with 1 and 2 µM TSH in comparison to the control. Besides, among the different doses of TSH, ICAM-1 expression in the 1 µM TSH-treated cases was higher in comparison to 2 µM at both 6 and 24 h. The increase in ICAM-1 protein expression over 24 h was higher than that obtained during the 6 h treatment time. According to the western blotting results, the protein expression levels at the examined doses (24 h) paralleled the real-time PCR data. Analysis of the western blotting protein bands showed that the E-selectin protein levels increased over 6 and 24 h of treatment with 1 and 2 µM TSH in comparison to the control (Fig. 7A and 7B). Moreover, between the different doses of TSH, the E-selectin protein level at a concentration of 2 µM was higher in comparison to 1 µM over the 24 h period. In contrast, in the 6 h incubation period, higher protein expression was present at the 2 µM dose.
DISCUSSION

The present survey evaluated the direct effects of TSH on HUVECs. Indeed, TSH changed the levels of pro-inflammatory and angiogenic factors. Based on our results, TSH down-regulated IL-6 protein levels at the different treatment times, and the decrease was more noticeable in cells treated for 12 h. In addition, our findings showed that TSH decreased the TNF-α level at all treatment times. IL-6 is a cytokine synthesized by numerous tissues and a major regulator of defense responses. One of the main effects of IL-6 is the stimulation of acute-phase protein production and the induction of lymphocyte development (19). The present research showed that HUVECs are a source of IL-6 and TNF-α.

According to previous studies, the cytokine IL-6 has been suggested as the main factor in the suppression of thyroid action during nonthyroidal illness (20). Also, Antunes et al., in a similar survey, reported that TSH increases the release of IL-6 from adipocytes; these results were not parallel to our findings, and this discordance may be related to the different cell lines and selected doses (5). Another survey, by Diez et al., showed that in patients with hypothyroidism, TNF-α and sTNFR-I levels were higher in comparison to normal individuals (21); this was not consistent with our results. It could be stated that hypothyroidism is a multifactorial disorder that affected TNF-α in the study of Diez et al., whereas our survey evaluated the effects of TSH alone on TNF-α at the cellular level. Indeed, to the authors' knowledge, the present study is the first to investigate the effects of TSH on pro-inflammatory factors such as TNF-α and IL-6 in HUVEC.

According to the results of a previous survey, TSH plays a major role as a regulator of suppressor of cytokine signaling (SOCS) proteins, leading to changes in the phosphorylation of signal transducers and activators of transcription (STAT) factors. It could be deduced that TSH signaling interacts with the JAK-STAT pathway; thus, there are mutual signaling effects between TSH and cytokines via SOCS proteins, which exert multiple biological functions mediated by their SOCS box, Src homology 2 domain, and GTPase domain (22,23).

In addition, based on previous studies (22,24), inhibition of Th1 skewing results in suppression of TNF-α synthesis. It could be deduced that TSH exerts its effects on SOCS and triggers cAMP signaling (25), leading to interruption of the JAK-STAT pathway and NF-κB activation (26,27), which results in inhibition of TNF-α and IL-6 expression.

In evaluating the effects of TSH on the expression levels of ICAM-1 and E-selectin as leukocyte adhesion molecules and VEGF as an angiogenesis-related factor, it could be suggested that TSH, by interacting with specific receptors on the HUVEC, induces angiogenesis (VEGF) and leukocyte adhesion (ICAM-1 and E-selectin). Endothelial dysfunction is related to increased atherosclerosis risk, which is accompanied by elevated expression of leukocyte adhesion molecules, increased cell permeability, and proliferation of vascular smooth muscle cells (30). VEGF is an essential angiogenic element that promotes endothelial cell proliferation and increases vascular permeability; VEGF may play a noticeable role in atherosclerosis progression (31).
TSH has a specific receptor that triggers two signaling pathways: the adenylate cyclase-protein kinase A (PKA) and the phospholipase C-protein kinase C (PKC) pathways (32). As G-protein-coupled receptors, TSH receptors have stimulatory effects on cAMP- and phospholipase C-dependent pathways (33,34).

A previous study by Balzan et al. showed that TSH affects angiogenesis through interaction with its receptor, modulated by cAMP-mammalian target of rapamycin signaling. They reported that this effect is dependent on VEGF in human microvascular endothelial cells (6), which is in accordance with our data and may suggest a possible mechanism for the TSH effects observed in our study. Similarly, Hoffmann et al. indicated that TSH increases VEGF production through several pathways, more via PKC than PKA, in thyroid cancer cell lines (35), in accordance with our findings.

An et al. reported that TSH and forskolin decrease ICAM-1 mRNA levels in FRTL-5 thyroid cells (29), which does not agree with our data; this discordance might be related to the different signaling pathways activated in FRTL-5 cells compared to HUVEC. In a similar survey, Desieri et al. evaluated the effects of human recombinant TSH on biomarkers of vascular endothelial function. After human recombinant TSH administration, serum TSH was elevated; concurrently, plasma sICAM-1 concentrations increased significantly, as did E-selectin (36). These data parallel our results.

Overall, in evaluating the effects of TSH on the gene and protein expression levels of ICAM-1, E-selectin, and VEGF, the presented study confirmed that TSH has a specific receptor on endothelial cells, which induces multiple effects, including the down-regulation of pro-inflammatory factors (TNF-α and IL-6) in addition to the induction of angiogenesis (VEGF) and leukocyte adhesion (ICAM-1 and E-selectin). In this regard, the elevated expression of leukocyte adhesion molecules is related to endothelial dysfunction and an increased risk of atherosclerosis.

CONCLUSION

Based on our findings, the theory arises that TSH could play a dual role in atherogenesis: anti-inflammatory effects on the one hand and, on the other, induction of angiogenesis and leukocyte adhesion resulting in vascular cell proliferation. Further studies should be conducted to evaluate the effect of TSH on atherosclerosis by examining numerous pro-inflammatory and proliferative factors in different cell lines and animal models.

Fig. 1. Down-regulation of tumor necrosis factor-α (TNF-α) levels in HUVECs by thyroid-stimulating hormone (TSH) exposure. After treatments with TSH (1 and 2 µM) for 12, 24, and 48 h, protein expression was evaluated by the ELISA method. * Indicates significant differences (P < 0.05) compared to the control group. Data are presented as mean ± SD values derived from three independent experiments.

Fig. 3. mRNA levels of E-selectin in HUVEC cells treated with thyroid-stimulating hormone (TSH) (1 and 2 µM) for (A) 24 h and (B) 48 h exposure times. Data presented as mean fold change ± SD values derived from three independent experiments. *P < 0.01.
6.Prortein detection for evaluation of E-selectin expression performed using western blotting technique.(A) Protein expression levels of E-selectin in HUVECs cells treated by thyroid-stimulating hormone (TSH) (1 and 2 µM) for 6 and 24 h and (B) TSH treatment significantly increased relative expression of E-selectin.For analysis protein's bands densities, Image J software was used to analyze the densities.Each protein (in one repeat) bands normalized to βactin bands. Fig. 7 . Fig. 7. Prortein detection for evaluation of E-selectin expression performed using western blotting technique.(A) Protein expression levels of E-selectin in HUVECs cells treated by thyroid-stimulating hormone (TSH) (1 and 2 µM) for 6 and 24 hand (B) TSH treatment significantly increased relative expression of E-selectin.For analysis protein's bands densities, Image J software was used to analyze the densities.Each protein (in one repeat) bands normalized to βactin bands.
A Catalyzing Phantom for Reproducible Dynamic Conversion of Hyperpolarized [1-13C]-Pyruvate

In vivo real-time spectroscopic imaging of hyperpolarized 13C-labeled metabolites shows substantial promise for the assessment of physiological processes that were previously inaccessible. However, reliable and reproducible methods of measurement are necessary to maximize the effectiveness of imaging biomarkers that may one day guide personalized care for diseases such as cancer. Animal models of human disease serve as poor reference standards due to the complexity, heterogeneity, and transient nature of advancing disease. In this study, we describe the reproducible conversion of hyperpolarized [1-13C]-pyruvate to [1-13C]-lactate using a novel synthetic enzyme phantom system. The rate of reaction can be controlled and tuned to mimic normal or pathologic conditions of varying degree. Variations observed in the use of this phantom compare favorably against within-group variations observed in recent animal studies. This novel phantom system provides crucial capabilities as a reference standard for the optimization, comparison, and certification of quantitative imaging strategies for hyperpolarized tracers.

Introduction

Quantitative analysis of tissue structure, function, and composition through imaging and imaging biomarkers has the potential to help transform clinical care by facilitating personalized management of diseases such as cancer. Imaging measurements are particularly attractive because they can provide researchers and clinicians with non-invasive, serial views of heterogeneous tumor tissue in toto, in stark comparison to the limited number of samples that are practically available through traditional biopsy [1]. In order for imaging biomarkers to provide additional information that positively affects clinical decisions, they must be robust, reproducible, and reflective of relatively specific tissue characteristics. Biomarkers that provide insight into dynamic biological processes are particularly challenging due to the difficulty of establishing a reference truth against which comparisons can be made.

Magnetic resonance imaging (MRI), spectroscopy (MRS), and spectroscopic imaging (MRSI) technologies allow interrogation of anatomic, functional, and molecular characteristics of tissue without the use of ionizing radiation. Traditional MRI, which is fundamentally based on the detection of hydrogen nuclei (1H), is sensitive to a wide range of contrast mechanisms that lead to excellent soft-tissue contrast and functional imaging capabilities. Spectroscopic measurements exploit differences in the resonance frequency of nuclei with a nonzero spin, due to changes in the local magnetic field imparted by their molecular environments, to permit quantitative estimation of chemical concentrations in vivo. MRSI provides spectroscopic information at points within a slice or volume of interest with varying degrees of spatial and spectral resolution. A vast array of information is available through such measurements, including the ability to generate chemically selective maps of the spatial distribution of key metabolic compounds by 13C MRSI.

Unfortunately, magnetic resonance remains a noise-limited measurement and the tremendous clinical potential of MRS/MRSI techniques has not been realized largely due to an intrinsically low signal-to-noise ratio (SNR). SNR limitations are pronounced for MR of 13C in vivo due to the low natural abundance and MR sensitivity of the 13C isotope.
Dynamic nuclear polarization (DNP) of 13C-enriched tracers enables dramatic increases in the excess spin population and thus the observable 13C MR signal [2,3]. The hyperpolarized state created by DNP has been shown to increase SNR by approximately 10^4-fold compared to normal thermal equilibrium for select substrates such as [1-13C]-pyruvate [3]. This increase in signal has made in vivo spectroscopic imaging of hyperpolarized (HP) tracers feasible [4]. However, the signal from HP tracers is non-renewable and short-lived due to intrinsic relaxation mechanisms and signal depletion from necessary radiofrequency (RF) excitations, placing fundamental constraints on the measurement of their spatial and chemical fate. All acquisition protocols reflect a balance of acquisition time, the number and kind of signal excitations, and spatial, spectral, and temporal resolution. A range of data reduction techniques aimed at extracting more information from fewer signal excitations have been employed [5-13]. Optimization of acquisition strategies with respect to these unique constraints is ongoing, but assessment of the accuracy and reproducibility of these methods remains a challenge. Repeatable calibration standards would help characterize the accuracy and reproducibility of measurements under various data reduction strategies, a critical step in the translation of these powerful new biomarkers to clinical use.

13C-labeled pyruvate is the most studied hyperpolarized tracer to date because of its strong polarization, favorable kinetics, and central role in metabolism. The chemical fate of pyruvate is of particular interest in oncology because many cancers display the 'Warburg effect' and metabolize glucose, the precursor of pyruvate, using mechanisms that are less efficient for energy production but are thought to provide other advantages for survival and proliferation of disease [14,15]. Preservation of nuclear spin state through chemical conversion allows us to observe the spatial distribution of HP-pyruvate and of HP metabolites such as lactate, alanine, and bicarbonate that can only arise through interactions between tracer and the relevant metabolic enzymes. Early studies have shown that a decrease in the conversion rate of HP pyruvate to lactate correlates with response to therapy and tumor grade [4,16-19].

In this work, we demonstrate a phantom system in which the conversion of hyperpolarized pyruvate to lactate can proceed in the controlled, tunable, and reproducible environment of an isolated buffer. Hyperpolarized pyruvate is injected into a vessel containing the reagents that are necessary for reproducible conversion of pyruvate to lactate at a rate that is consistent with in vivo observations. This platform provides a robust tool for evaluation and comparison of spectroscopic and imaging measurements without the added complexity of a biological system. Many sources of variability inherent to the use of HP tracers in vivo are eliminated, allowing focused study of controllable parameters in order to maximize SNR and minimize measurement errors that can lead to artifacts. The ability to carry out these reactions at a known rate over a predetermined spatial distribution will significantly accelerate the optimization and validation of instrumentation and strategies for efficient signal acquisition, reconstruction, and analysis. This work focuses on [1-13C]-pyruvate due to its progress in clinical trials and near-term potential for clinical use [4].
Similar phantoms and calibration standards can be developed for other HP tracers, employing alternative enzyme systems.

Methods

Pyruvate is specifically converted into lactate by the enzyme lactate dehydrogenase (LDH) and the reduced form of the coenzyme nicotinamide adenine dinucleotide (NADH):

Pyruvate + NADH + H+ <-> Lactate + NAD+   (1)

This ordered ternary complex can be modeled using classical enzyme kinetics [20-23] to derive velocities (mol/s) of the reaction as a function of constituent concentrations. The Gibbs free energy for this reaction is large (ΔG°' = -25.1 kJ/mol), strongly favoring the forward reaction and lactate production; under normal physiological conditions, these reagents are involved in multiple cellular processes that modify their cytosolic concentration [24]. Under normal conditions for MRSI of HP lactate and pyruvate, the signal from 13C at thermal equilibrium is below the threshold of detectability, and only polarized labels can be observed. The rate of chemical conversion for HP tracers depends on the velocities of the chemical reaction and the likelihood that reactants are polarized:

dP*/dt = -V_PL P*/(P + P*) + V_LP L*/(L + L*)   (2)
dL*/dt = V_PL P*/(P + P*) - V_LP L*/(L + L*)   (3)

Here, V_PL denotes the forward velocity of the reaction, for conversion of pyruvate to lactate, and V_LP represents the reverse reaction; L is the concentration of unpolarized lactate, P represents the concentration of unpolarized pyruvate, and the concentrations of the polarized spin pools for each metabolite are indicated by an asterisk. Witney et al. [20] presented a model to describe the net conversion of pyruvate to lactate as well as the forward velocity of HP label exchange.

A. Modeling

Equations 2-3 can be modified to account for spin-lattice (T1) relaxation and losses due to signal excitation:

dP*/dt = -V_PL P*/(P + P*) + V_LP L*/(L + L*) - P*/T1,pyr + P* ln(cos α)/TR   (4)
dL*/dt = V_PL P*/(P + P*) - V_LP L*/(L + L*) - L*/T1,lac + L* ln(cos α)/TR   (5)

Where T1,pyr and T1,lac are the spin-lattice relaxation time constants for pyruvate and lactate respectively, α is the flip angle of the RF excitation pulse, and TR is the repetition time. Using the model outlined by Witney et al. [20] and equations 4 and 5, a finite-difference approach was implemented in Matlab (The Mathworks, Natick, MA) to calculate conversion and exchange as a function of the total concentrations of pyruvate, lactate, LDH, NADH, and NAD+, and the percent polarization of the tracers. This model was used to determine phantom reagent concentrations that would reduce reaction rate variability.

Kinetic models with multiple compartments can be used to determine the flux of hyperpolarized tracers. To account for the conditions of an in vivo environment, models with two to six chemical and physical pools have been proposed [25-27]. One clear advantage of a single enzyme phantom is that the minimal two-compartment model describes the exchange:

dL*/dt = k_PL P* - k_LP L*   (6)

[Figure 2 caption: (N = 7) injections. The mean signal for lactate and pyruvate, normalized to peak carbon signal for each injection, are displayed with error bars that indicate the minimum and maximum values at each time over all injections. Total HP 13C was estimated by summing signal from HP 13C lactate and HP 13C pyruvate.]

Where k_PL and k_LP are the unidirectional rate constants of the LDH-catalyzed conversions. By defining k_PL = V_PL/(P + P*) and k_LP = V_LP/(L + L*), and recognizing that observations are samples of the longitudinal magnetization of each pool, the two-compartment model can be expressed in a more suitable form [25,28,29]:

dP*/dt = -(k_PL + 1/T1,pyr) P* + k_LP L*   (7)
dL*/dt = k_PL P* - (k_LP + 1/T1,lac) L*   (8)

In contrast to the isolated phantom system where reagents can be accurately quantified, in vivo measurements based on this model reflect apparent rate constants due to the uncertainties in the cellular concentration of unlabeled reagents.
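To make the modeling step concrete, the following minimal Python sketch integrates the two-compartment form (equations 7-8) with a forward-Euler finite-difference step, in the spirit of the Matlab implementation described above; the rate constants, T1 values, and initial signals used here are illustrative assumptions, not the study's parameters.

```python
import numpy as np

# A minimal sketch of the finite-difference calculation described above
# (the study used Matlab): forward-Euler integration of the two-compartment
# model, equations 7-8. All parameter values below are assumptions.

def simulate(kPL=0.05, kLP=0.0, T1_pyr=43.0, T1_lac=33.0,
             P0=1.0, L0=0.0, dt=0.1, t_max=180.0):
    """Integrate dP*/dt and dL*/dt with a simple forward-Euler step."""
    n = int(t_max / dt)
    t = np.linspace(0.0, (n - 1) * dt, n)
    P = np.empty(n)
    L = np.empty(n)
    P[0], L[0] = P0, L0
    for i in range(1, n):
        dP = -(kPL + 1.0 / T1_pyr) * P[i - 1] + kLP * L[i - 1]
        dL = kPL * P[i - 1] - (kLP + 1.0 / T1_lac) * L[i - 1]
        P[i] = P[i - 1] + dt * dP
        L[i] = L[i - 1] + dt * dL
    return t, P, L

t, P, L = simulate()
print(f"lactate signal peaks at t = {t[np.argmax(L)]:.1f} s")
```

Sweeping such a simulation over candidate reagent concentrations is one way to find an operating point where the reaction rate is insensitive to small experimental variations, which is the use described above.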
Leveraging known reagent concentrations, time series of hyperpolarized pyruvate and lactate signal levels, measured as described below, were fit to this model using custom software developed in Matlab (The Mathworks, Natick, MA). Rate constants were determined by fitting dynamic tracer curves to equations 7 and 8 in the least squares sense, as previously described [25,28,29].

B. Phantom Preparation

Phantom concentrations were qualitatively optimized to reduce the sensitivity of the reaction rate to variability in the concentrations of its components and to ensure that the reaction ran to completion, at a rate consistent with previous in vivo observations, before the hyperpolarized signal decayed below the threshold of detectability. Special consideration was given to reducing sensitivity to pyruvate concentration and LDH activity, as these were assumed to be the least reproducible characteristics of the phantom system. A custom phantom container was machined out of cylindrical Ultem resin stock and fitted with a 1 m polyethylene 3.175 mm diameter catheter (Coilhose Pneumatics, East Brunswick, NJ) for remote injection into the cavity when located at the isocenter of the magnet. The rectangular cavity was 1 x 1 x 3 cm with the injection catheter connecting to the front as seen in Figure 1. This structure was used for assessment of the enzymatic phantom using dynamic spectroscopy. To test the feasibility of conducting such a reaction inside a phantom with the spatial details necessary to validate spectroscopic imaging sequences, a standard MRI quality assurance phantom (Aufloesung 30 KIT G, model 1P T58930; Bruker Biospin MRI, Ettlingen, Germany) was drained and refilled with the catalytic mixture described below. All reagents save pyruvate were thawed from aliquoted fresh solutions stored at -80 °C and mixed in a 5 mL syringe shortly before completion of the polarizing process as described below. Once polarization was complete, HP pyruvate was injected into the phantom and followed by the enzyme substrate mixture to fill the phantom cavity. Nominal final concentrations were 2 mM hyperpolarized 13C-pyruvate, 40 mM lactate, 3.92 U/mL LDH (Worthington Biochemical Corp., Lakewood, NJ), and 4 mM β-NADH (Sigma-Aldrich Corp., St. Louis, MO) in Tris buffer (81.3 mM Trizma pre-set crystals pH 7.2, 203.3 mM NaCl) (Sigma-Aldrich). This specific configuration places the phantom far from chemical equilibrium, rendering the reverse reaction (V_LP, k_LP) negligible throughout. The phantom was held at 28 °C with a final pH of 7.2 and a 3-mL final volume. UV spectrophotometry indicates that enzyme activity remains unchanged for at least two hours at room temperature.

D. Dynamic Spectroscopy

Dynamic spectroscopy was performed on a 7-T/30-cm Biospec System (Bruker Biospin Corp., Billerica, MA) using B-GA12 gradients (120-mm inner diameter (ID), Gmax = 400 mT/m) and a dual-tuned 1H/13C volume coil (72-mm ID, Bruker Biospin MRI). Dynamic 13C spectra were acquired with a 2.5 kHz bandwidth, 4098 points, 10° excitation using a 1 ms Gaussian pulse, and a 2-sec repetition time (TR), with 60 repetitions over a 3-min scan time beginning at dissolution and triggered by the HyperSense system. To evaluate performance and repeatability, the measurement was repeated seven times using identical reagent concentrations. Signal from each metabolite was determined by calculating the area of the Lorentzian line shape that fit most closely in the mean square sense.
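As an illustration of the quantification step just described (fitting a Lorentzian line shape in the mean-square sense and taking its area as the metabolite signal), the following Python sketch fits a single Lorentzian peak to a synthetic spectrum; the peak position, width, and noise level are assumed values for demonstration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch: least-squares fit of an area-normalized Lorentzian
# line shape; the fitted area serves as the metabolite signal estimate.

def lorentzian(f, area, f0, fwhm):
    """Area-normalized Lorentzian centered at f0."""
    hwhm = fwhm / 2.0
    return (area / np.pi) * hwhm / ((f - f0) ** 2 + hwhm ** 2)

def peak_area(freq, spectrum, f0_guess, fwhm_guess=10.0):
    """Least-squares Lorentzian fit; returns the fitted peak area."""
    p0 = [spectrum.max() * fwhm_guess, f0_guess, fwhm_guess]
    popt, _ = curve_fit(lorentzian, freq, spectrum, p0=p0)
    return popt[0]

# Synthetic spectrum: 2.5 kHz bandwidth, one peak at +300 Hz plus noise.
freq = np.linspace(-1250.0, 1250.0, 4096)
spec = lorentzian(freq, area=5.0, f0=300.0, fwhm=8.0)
spec += np.random.default_rng(0).normal(0.0, 0.001, freq.size)
print(f"fitted peak area: {peak_area(freq, spec, f0_guess=290.0):.3f}")
```

In practice one such fit would be performed per metabolite resonance and per repetition, yielding the dynamic signal curves analyzed next.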
Dynamic signal curves for each tracer were integrated in time to estimate the total signal from pyruvate and lactate. Their sum represents the total signal observed from all HP 13C-labeled metabolites in this phantom system. Signal amplitudes were normalized to account for variations in the amount of polarized pyruvate present at the onset of scanning. Three quantitative parameters were used to characterize the reaction rate for each measurement: total lactate signal normalized to total pyruvate, total lactate signal normalized to the sum of total pyruvate and lactate signal, and k_PL, the forward rate constant for the two-compartment model described by equations (7) and (8).

E. Spectroscopic Imaging

To demonstrate the usefulness of the enzyme phantom in evaluating spatial sequence performance, a 10-mL standard imaging phantom was drained and fitted with the same injection catheter described above. A slightly lower concentration of NADH (2 mM) was used in an otherwise identical mixture due to the increased phantom volume. A custom-built dual-tuned 1H/13C linear birdcage coil with a 35 mm ID was used in conjunction with B-G6 gradients (60-mm ID, Gmax = 1000 mT/m, Bruker Biospin Corp.). The phantom was scanned with a radial echo planar spectral imaging (EPSI) sequence (unpublished). This was a single-shot acquisition that expended the entire hyperpolarized signal to acquire a single set of spectroscopic imaging data. The acquisition was started ~40 seconds after all components were combined in the phantom, and data were acquired with a repetition time of 60 ms, an initial echo time of 5.5 ms, and 1.3 ms echo spacing to form a 32-point echo train. A variable flip angle was used to maintain approximately equal sampling of longitudinal magnetization [31]. Spectral bandwidth was 23.8 kHz with a 744 Hz or 9.85 ppm spectral width. Fifty spatial projections were taken with 32 readout points over a 4 cm by 4 cm field of view and a 2 cm slice thickness.

Results

Isolating the enzymes in a system where initial reagent and product concentrations can be controlled allows the exchange rate to be specifically tuned and modeled without interference from other confounding biological processes. Simulations suggested that a phantom system containing 2 mM hyperpolarized [1-13C]-pyruvate, 40 mM lactate, 3.92 U/mL LDH, and 4 mM β-NADH would progress at a rate that is consistent with in vivo observations [30] with reduced sensitivity to experimental variations. The phantom system, shown in Figure 1, was assembled and tested (N = 7 replicates), demonstrating reproducible conversion of hyperpolarized tracer as summarized in Figure 2. After a brief delay between the start of data acquisition and injection of the polarized tracer, the pyruvate signal peaks quickly as it fills the chamber. A small frequency shift of 0.2 ppm, likely due to changes in susceptibility in the chamber, was observed during injection. The pyruvate signal decays, due to relaxation, signal excitation, and chemical conversion, to undetectable levels in less than two minutes. The lactate signal rises until the growth of the HP lactate pool is less than the losses due to relaxation and signal excitation, at which point it similarly decays. The coefficients of variation for common measures of this reaction, summarized in Table 1, compare favorably to within-group variations reported in 9 recent animal studies [16-19,32-37], as summarized in Table 2.
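The rate-constant estimate behind measures like those in Table 1 can be sketched as follows. Because this phantom operates far from equilibrium (k_LP ~ 0, as noted above), equations 7-8 admit a closed-form lactate curve that can be fit by least squares; this Python sketch stands in for the custom Matlab fitting software actually used, and the T1 values and synthetic data below are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch of k_PL estimation: with the reverse reaction negligible,
# equations 7-8 give a closed-form lactate curve. T1 values are assumed.

T1_PYR, T1_LAC = 43.0, 33.0          # assumed effective T1 values (s)

def lactate_model(t, kPL, P0):
    """Closed-form L*(t) for dP*/dt = -(kPL + 1/T1p) P*, with L*(0) = 0."""
    lam_p = kPL + 1.0 / T1_PYR
    lam_l = 1.0 / T1_LAC
    return P0 * kPL * (np.exp(-lam_l * t) - np.exp(-lam_p * t)) / (lam_p - lam_l)

t = np.arange(0.0, 120.0, 2.0)       # 2-s sampling, matching the TR above
rng = np.random.default_rng(1)
L_meas = lactate_model(t, kPL=0.05, P0=1.0) + rng.normal(0.0, 0.002, t.size)

popt, _ = curve_fit(lactate_model, t, L_meas, p0=[0.02, 0.8])
print(f"estimated kPL = {popt[0]:.4f} s^-1")
```

Repeating such a fit across the N = 7 injections and taking the standard deviation over the mean gives coefficient-of-variation figures of the kind summarized in Table 1.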
Snapshot spectroscopic imaging of the separate imaging phantom shows a relatively homogeneous mixture of components and distribution of tracer and metabolite 40 s after initiation of the reaction. Images of HP pyruvate and lactate, alone and in overlay on reference proton images, can be seen in Figure 3. While the resolution of the MRSI sequence is significantly lower than that of the proton image, it is possible to resolve features within the spectroscopic images for both individual metabolites. No significant spatial distortions are seen.

Discussion

Real-time in vivo spectroscopic measurement of labeled metabolites and their products is now possible with the integration of key polarizable tracers and carefully designed strategies for encoding, acquisition, reconstruction, and analysis of signals that are observable through magnetic resonance. Imaging the metabolic conversion of pyruvate into lactate shows promise as a clinically useful biomarker for oncology [4]. Hyperpolarized contrast agents comprise a relatively new and rapidly developing field, and research into best practices for signal acquisition, reconstruction, and analysis is ongoing. This dynamic phantom will provide robust, reproducible, and tunable dynamic processes against which experimental strategies can be compared and optimized.

This phantom system provides new capabilities for experimental development and validation, with distinct advantages over single-tracer phantoms, static multi-compartment thermal equilibrium phantoms, and in vivo models. Static phantoms are useful for confirming some functionality, such as the signal-to-noise ratio and spatial resolution associated with specific measurement settings, but do not create the dynamic conditions that could lead to artifacts in reconstruction algorithms that are based on the assumption of a stationary target. Assessment using in vivo models is challenging because of biological heterogeneity and the evolution of target processes in diseases such as cancer that can progress rapidly and increase within-group variations in a matter of days. With this platform, acquisitions can be readily repeated, at arbitrary intervals, to extract statistical measures of image quality.

To our knowledge, this is the first reproducibility study of a phantom system that provides controlled dynamic evolution of a chemical process that is observable by magnetic resonance using hyperpolarized tracers. This system catalyzes the final step in aerobic glycolysis, the conversion of pyruvate into lactate, without the need for animal subjects, human subjects, or cell suspensions that can increase the cost and the variability of technical measurements. The 12.3-19% variation that we observed is a result of many factors. LDH is sensitive to a range of experimental circumstances [38]; small variations in temperature, pH, or storage conditions can affect enzyme activity and therefore the rate of the reaction. In this pilot work, the injection of a small amount of hyperpolarized pyruvate was performed by hand, potentially leading to unnecessarily high variations in the end concentration of pyruvate. In future investigations, this error will be reduced by utilizing automated injections and a refined mixture and delivery system that reduces local fluctuations in the composition of the system [39]. The compatibility of this concept with a wide range of potential phantom structures provides clear opportunities for future use.
Multiple compartments can easily be incorporated into new or existing phantom structures, and reaction rates in distinct regions can be tuned by varying reagent concentrations to simulate different tissues or disease states in parallel. In addition, a separate chamber that is free of enzyme would allow normalization to account for variations in tracer polarization. Because the spatial characteristics of such a phantom would be known a priori, rigorous evaluation and optimization of data encoding, acquisition, and reconstruction algorithms is possible. This is especially important when considering data reduction strategies that are designed to address key limitations in the measurement of hyperpolarized tracers but blur traditional definitions of spatial and temporal resolution in the observation of dynamic processes. Such a platform would be ideal for exploration of thresholds for detectability of pathologies that may not be evident in 1H MRI, for early testing of new sequences to ensure preservation of spatial and temporal accuracy, and for quality assurance scans to confirm that similar acquisition, reconstruction, and analysis parameters lead to similar data over time, both within and between laboratories and institutions.

Our phantom structure for dynamic spectroscopy (Figure 1) was designed using Ultem (χV ≈ -8.96 x 10^-6) to minimize susceptibility differences with water (χV ≈ -9.03 x 10^-6) at interfaces that are normal to the static field. However, air (χV ≈ +0.36 x 10^-6) on one side of the rectangular volume of the chamber led to an approximately 0.2 ppm shift in the resonant frequency of metabolites as the chamber was filled. More complex phantom structures will require careful design in order to minimize frequency shifts due to susceptibility differences that are modified as chambers are filled.

A crucial step in the translation of powerful new imaging technologies into routine preclinical and clinical use is the establishment of well-defined reference standards [40] to provide a common reference against which experimental circumstances can be compared. Such a reference can be used to ensure comparable results across platforms, laboratories, and institutions, and to aid in study design and execution. The dynamic single enzyme phantom helps fill this critical need by providing reproducible HP tracer evolution that is independent of complex biological barriers and heterogeneity that cannot be strictly controlled. The physical structure of the phantom can be tailored to more closely approximate preclinical or clinical applications, and the rate of the reaction can be controlled through multiple compartments in a spatially dependent manner to simulate a wide range of disease states. This phantom platform represents a flexible and powerful tool to aid in the development, optimization, validation, and certification of techniques, processes, and instrumentation that are crucial to ensure the successful and efficient translation of powerful new imaging capabilities afforded by MRSI of hyperpolarized tracers such as [1-13C]-pyruvate.
Methamphetamine use and malnutrition among street-involved youth We sought to explore the effect of crystal methamphetamine use on the risk of experiencing malnutrition among street-involved youth in Vancouver, Canada. Risk of malnutrition was defined as being hungry but not having enough money to buy food. Socio-demographic and drug use factors associated with risk of malnutrition were investigated using univariate and multivariate analysis among a prospective cohort of street-involved youth known as the At-Risk Youth Study (ARYS). Between September 2005 and December 2006, 509 street-involved youth were enrolled in ARYS, among whom 21% reported being at risk of malnutrition as defined above in the previous six months. In multivariate analysis, only non-injection crystal methamphetamine was significantly associated with being at risk of malnutrition among this cohort (Adjusted Odds Ratio [AOR] = 1.60, 95% Confidence Interval [CI]: 1.03 - 2.48, p = 0.036). Interventions seeking to address food insecurity among street youth may benefit from considering drug use patterns since methamphetamine use predicted higher risk in this setting. Over the last decade, crystal methamphetamine use has emerged as a unique and significant public health concern, and data on the prevalence of crystal methamphetamine use have shown that its use is increasing in North America, particularly among young gay men and young injection drug users [1,2]. Studies have also reported that crystal methamphetamine use is associated with a variety of physiological and neurological disorders [3], as well as with a number of risk behaviours for HIV transmission, and that these risks are heightened among street-involved youth [2]. This is of concern given that the health of street-involved youth is already often compromised by widespread unstable housing and chronic food insecurity [4]. Little research, however, has been conducted on the association between patterns of drug use and food security among youth. Specifically, knowledge gaps exist concerning the potential unique impact of crystal methamphetamine use on risks of malnutrition among street-involved youth populations. We therefore sought to explore the effect of crystal methamphetamine use on the risk of experiencing malnutrition among a cohort of street-involved youth in Vancouver, Canada. We evaluated factors associated with malnutrition among participants enrolled in the At-Risk Youth Study (ARYS), a prospective cohort of street-involved youth aged 14 to 26 in Vancouver, Canada, which has been described in detail previously [5]. In brief, at baseline and semi-annually, ARYS participants complete an interviewer-administered questionnaire and provide blood samples for diagnostic testing. In the present study, Pearson's Chi-square test and multivariate analysis were used to determine factors associated with ever having experienced malnutrition among this cohort. Our primary independent variable of interest was crystal methamphetamine use, though we accounted for a wide array of socio-demographic, drug use and behavioural variables, all of which are shown in Table 1. All variable definitions were identical to earlier reports from our setting [6] and all behavioural and drug use variables refer to behaviours in the previous six months. We defined the dependent variable based on responses to the following ARYS survey question: "I am often hungry but I don't eat because I can't afford enough food". 
Respondents answering "Often true" or "Sometimes true" were defined as being at risk of malnutrition; those answering "Never true" were defined as not being at risk of malnutrition. All tests were two-tailed and the significance level was set at p < 0.05. All statistical analyses were performed using SAS software version 9.0 (SAS, Cary, NC).

In total, 509 youth were recruited into the ARYS study between September 2005 and December 2006. Among this cohort, 149 (29%) were women, 154 (30%) were non-Caucasian, and the median age of participants was 22 years (Interquartile Range [IQR]: 20.0-23.9). Overall, 105 (21%) individuals reported being at risk of malnutrition in the previous six months. As shown in Table 2, in multivariate analysis, only non-injection crystal methamphetamine use was associated with risk of malnutrition (Adjusted Odds Ratio [AOR] = 1.60, 95% CI: 1.03-2.48, p = 0.036) after adjustment for all other variables found to be significantly associated with being at risk of malnutrition in univariate analyses.

In the present study, over 20% of participants reported being at risk of malnutrition in the prior six months, defined as often being hungry but not having enough money to buy food. In multivariate analysis, and despite intensive adjustment for a range of drug use, behavioural and socio-demographic factors, only non-injection crystal methamphetamine use was independently associated with being at risk of malnutrition. These findings may reflect a unique risk of malnutrition associated with crystal methamphetamine use among street-involved youth.

While past studies on crystal methamphetamine use among street-involved and gay male youth populations have reported an association between use of this drug and risk behaviours associated with the transmission of HIV and other blood-borne diseases [2,7], we believe our study is the first to identify crystal methamphetamine use as a potential determinant of malnutrition. Our findings indicate that interventions focussed primarily on the risk of HIV transmission and the neuropsychological effects of use of this drug among youth [8] may need to be reevaluated in order to address the broader impact of crystal methamphetamine use on a wider variety of health determinants, such as malnutrition. Similarly, the impact of interventions aimed exclusively at increasing food security and improving malnutrition among street-involved youth may be limited without the incorporation of components aimed at reducing crystal methamphetamine use. Studies have shown that supply reduction strategies in the United States aimed at disrupting small-scale producers of crystal methamphetamine within the country have largely failed to stem an increase in rates of crystal methamphetamine use [9]. Consequently, evidence-based interventions focussing on demand reduction and other adverse health sequelae of crystal methamphetamine use must be developed.

Although the ARYS study is not a random sample, all cohort studies of high-risk or marginalized populations generally suffer from this limitation since there are rarely registries from which to draw random samples. As well, our data were based on self-report and could therefore have resulted in socially desirable reporting [10], which may have consequently lowered reported rates of illicit drug use [11]. However, we know of no reason why risks of malnutrition would be differentially reported by methamphetamine users and non-users.
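The study's analysis was performed in SAS; purely as an illustration of the kind of multivariate logistic regression that yields an adjusted odds ratio with a 95% confidence interval, the following Python sketch fits such a model to synthetic data. The covariates and effect size are hypothetical stand-ins, not the ARYS data.

```python
import numpy as np
import statsmodels.api as sm

# Illustration only (not the study's SAS code): multivariate logistic
# regression producing an adjusted odds ratio (AOR) with a 95% CI.
# All data below are synthetic; covariate names are hypothetical.

rng = np.random.default_rng(42)
n = 509
meth_use = rng.integers(0, 2, n)      # non-injection crystal meth use (0/1)
female = rng.integers(0, 2, n)        # hypothetical covariate
age = rng.normal(22.0, 2.0, n)        # hypothetical covariate

# Synthetic outcome generated with a true odds ratio of ~1.6 for meth use.
logit = -1.6 + np.log(1.6) * meth_use
at_risk = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = sm.add_constant(np.column_stack([meth_use, female, age]))
fit = sm.Logit(at_risk.astype(float), X).fit(disp=False)

aor = np.exp(fit.params[1])           # exponentiated coefficient = AOR
lo, hi = np.exp(fit.conf_int()[1])    # exponentiated CI bounds
print(f"AOR for meth use: {aor:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```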
In summary, over one-fifth of our cohort reported being at risk of malnutrition in the previous six months, defined as being hungry but not having enough money to buy food, and in multivariate analysis only non-injection crystal methamphetamine use was independently associated with this problem. This finding suggests that the impact of current health and preventive interventions aimed at addressing issues surrounding either crystal methamphetamine use or malnutrition among street-involved youth may be limited without taking into account the relationship between these health behaviours. Finally, further prospective and qualitative research is needed into the potential role of crystal methamphetamine use in mediating food security among this population, and prospective study will be required to examine the long-term impact of crystal methamphetamine on nutrition-related health outcomes.

Note: CI = Confidence Interval. * All drug use variables were defined as ever in the last six months.
Identify and Rank the Factors Affecting the Willingness of Workers to Entrepreneurship in the Copper Industry with AHP Approach: A Review

Background/Objectives: Entrepreneurship plays a key role in economic growth and development in different communities. Drawing on previous research, this study identifies the factors affecting employees' willingness toward entrepreneurship and prioritizes them using the AHP method. Methods/Statistical Analysis: In terms of purpose the research is descriptive, and in terms of method it is a survey. Data were collected using a questionnaire, with field data collection. The population consisted of 8 copper industry executives; because of the limited sample size, the whole population was included. Expert Choice software was used in the analysis of the data. Findings: The results showed that, overall, risk-taking is the most important factor in employees' willingness toward entrepreneurship. It is followed by knowledge, up-to-date technology, top management support, confidence, financial resources, flexible structure, challenging work, behavioral adaptability, and dynamic leadership as the most important factors affecting employees' willingness toward entrepreneurship. Application/Improvements: This study can help managers identify the factors influencing employee entrepreneurship in major industries. It can also serve as a guide for future research on this subject.

Identify and Rank the Factors Affecting the Willingness of Workers to Entrepreneurship in the Copper Industry with AHP Approach: A Review

Somayeh Akhavan Darabi1*, Mohammadreza Akhbarieh2, Laleh Abbaslu2, Mojgan Hamidi Beinabaj3 and Mohammadali Zare4

1Department of Industrial Engineering, Payam Noor University, Iran; akhavan.star@yahoo.com
2Faculty of Management, Islamic Azad University of Sirjan, Science and Research Branch Sirjan, Sirjan, Iran; Abbaslo.8463@gmail.com, Mr.Akhbari@gmail.com
3Department of Management, Economic & Accounting, Payam Noor University, Tehran, Iran; Hamidi.Mojgan@yahoo.com
4Faculty of Management, Tehran University, Tehran, Iran; Ali.Zare660@gmail.com

Introduction

The current pace of change is such that identifying and predicting change from a steady state is difficult, and relying on past experiences and achievements no longer ensures future success. It can be said that the present age is an age of discontinuity, and societies that base their decisions on anticipated change in the world move further toward creativity, innovation, and entrepreneurship, as can be seen in the number of their entrepreneurs [1]. Entrepreneurship is desperately needed in different countries at multiple levels: in the sense of job creation, in the sense of change through innovation and process improvement, and as a key factor in economic growth. Even in modern times, entrepreneurship can be considered one of the main strategies of any country [2]. Entrepreneurship is also a development tool, because where there are creative, entrepreneurial people there will be success. In addition, due to increased competition in emerging markets and a growing sense of mistrust in traditional practices, the need for entrepreneurs within organizations is felt more than ever. In this connection, the dynamic and flexible organizations of our time discover and develop entrepreneurs. Every organization, in order to produce spontaneously and to innovate, requires the right structures and entrepreneurial people.
Internal talents that are fostered can be put to use for the organization within a short time [3]. Entrepreneurship in general, and organizational entrepreneurship in particular, plays a key role in economic growth and development in communities. The development experiences of Asian countries such as Japan, China, Malaysia, and South Korea are full of the activities of outstanding entrepreneurs, and these countries now take pride in developing their entrepreneurs [4]. Given the importance of entrepreneurship and the experience of entrepreneurs in the development of many countries, and taking into account the economic difficulties facing our country, promoting and disseminating the concept of entrepreneurship, supporting a culture of entrepreneurship, and training people with an entrepreneurial spirit at the individual, group, or organizational level are of great importance for developing countries, including our country, Iran [5]. But what factors affect the willingness of employees toward entrepreneurship? This is a question that has occupied the minds of many researchers. Numerous studies have been done in this area; for example, Lober noted internal factors, the external environment, and personality characteristics as factors influencing entrepreneurship [6]. Nawaser et al. state that Digman believes that knowledge and expertise, together with personality characteristics such as confidence, risk-taking, and locus of control, build entrepreneurship [7]. We discuss research in this field further in the background below. The focus of this study, however, is to identify and rank the factors affecting employees' willingness toward entrepreneurship. As noted above, entrepreneurship is an essential factor for developing countries like our country. Another issue that much research in this area has not considered, however, is the degree, rank, and importance of the factors affecting entrepreneurship. That is to say, the factors influencing employee entrepreneurship have largely been considered in previous research, but the missing link is the order of priority of these factors. In addition to the different needs of different organizations in providing for entrepreneurship, the relative importance of these factors also differs: a factor important in an industrial organization should not be assumed to be equally important in a service organization, or even in industrial organizations with different tasks and objectives. In this study, using previous research, we identify the factors affecting employees' willingness toward entrepreneurship and prioritize them using the AHP method. The study was conducted in the copper industry. Since the copper industry is one of the local industries and employs many staff, and since providing entrepreneurial solutions to increase revenue would be very economical for the industry and would increase national income, it was chosen as the domain of the study.

Internal Studies

Mahdavi et al., in research on the indicators decisively influencing entrepreneurship at state universities and the ranking of universities from this perspective, used a combination of Delphi and VIKOR analysis and identified 22 indicators influencing the creation of an entrepreneurial university [8]. Abdolahian et al., in their review, prioritized entrepreneurial skill indicators using FANP [9].
Their study was conducted at the University Jihad; it was found that, among the main indicators, decision-making took the first priority. Ahmadi et al. examined the influence of personal and environmental factors on entrepreneurial behavior. To investigate the effects of these two variables, the chi-square test and Amos software were used. The results showed significant relationships between these factors and entrepreneurial characteristics, and it was found that expectations and perception of the environment have a significant positive impact on the strengthening of entrepreneurial behavior [10]. Mohammadi et al. examined the relationship between personality traits and the tendency toward entrepreneurship. The results of this study showed a significant positive relationship between women's personality traits and their tendency toward entrepreneurship [3]. Rezvani et al. investigated the role of entrepreneurial orientation, with emphasis on the relationship with top management, in the performance of state-owned banks; entrepreneurial orientation was measured with a 13-item survey, and the relationship was confirmed [11]. Beigi Nia et al. examined the impact of the type of needs on employees' interest in entrepreneurship. The study was conducted at the headquarters of the National Iranian Oil Company, and a significant positive relationship was found between the need for achievement, the need for power, the need for esteem, the need for self-actualization, biological needs, social needs, and the need for security on the one hand, and the desire of employees toward entrepreneurship on the other [5]. Imani Pour and Ziodar investigated the relationship between entrepreneurial orientation and the performance of sales representatives of the Iran Insurance Company in Tehran. In this study, four factors (a flexible structure, an effective organizational climate, a culture of creativity and innovation, and the supportive capability and motivation of individual employees) were introduced as components of entrepreneurial orientation [12].

Foreign Studies

Galord, with respect to individual differences between men and women entrepreneurs, discussed factors affecting performance such as individual factors, organizational factors, factors of the industrial environment, resources, and strategies and policies. In that survey, the performance of women entrepreneurs was measured with three components: financial performance, personal performance, and social performance [13]. Halepota concluded that, in order to increase the enthusiasm of entrepreneurs, options such as organizational rewards, job satisfaction, and strengthening innovation management are effective [14]. Ryan and Deci, in a series of basic research studies on entrepreneurship education in the United States of America, concluded that although the use of technology in teaching entrepreneurship is important, new technologies that disregard the human aspects of entrepreneurship will not be successful [15]. Pritchard and Ashwood, studying the entrepreneurial spirit in organizations providing social services, placed great emphasis on the study of entrepreneurial behavior and introduced the use of motivational plans as the main proposition for increasing the entrepreneurial spirit in organizations [16]. Wakkee et al. examined making employees entrepreneurial in their jobs in traditional service companies. In this study, two factors leading to effective and creative staff were noted, their impact on entrepreneurial behaviors was assessed, and the impact of these two factors was confirmed [17].
Klarner et al. noted the motivational factors that lead employees to become entrepreneurs within the organization. In that study it was found that factors such as structure, top management attention, flexibility, and control create the motivation for employees to remain in the organization and become entrepreneurs [18].

The Hierarchical Structure of Literature

A review of the literature revealed several studies that identify factors affecting employees' willingness toward entrepreneurship. Each of these studies selected and reviewed specific factors according to its own goals. The literature review identified 10 factors affecting entrepreneurship: confidence [19], the ability to take calculated risks [20], a positive response to challenging work [1], the ability to adapt to the behavior of others [19], knowledge [21], up-to-date technology in the organization [5], top management support [20], required financial resources [1], a flexible organizational structure [5], and suitable dynamic leadership [20]. After identifying the 10 most important factors affecting employees' willingness toward entrepreneurship, a closer look made clear that these factors could be grouped into more comprehensive categories. In consultation with 2 academic experts and 3 practicing experts in the Kerman copper industry, the 10 identified factors were grouped into individual and organizational categories of factors affecting willingness toward entrepreneurship. Five factors (confidence, the ability to take calculated risks, a positive reaction to challenging work, the ability to adapt to the behavior of others, and knowledge) were classified as individual factors, while up-to-date technology in the organization, top management support, required financial resources, a flexible organizational structure, and suitable dynamic leadership were classified as organizational factors. Finally, the resulting AHP hierarchical structure is shown in Figure 1.

Methodology

In terms of purpose the research is descriptive, and in terms of method it is a survey. Data were collected using a questionnaire, with field data collection. The questionnaire consisted of paired-comparison questions with a nine-point scale; its validity was assessed and approved by the researcher and by academic and industry experts. Since the AHP technique was used for data analysis, and implementing this technique requires access to expert information, the population consisted of 8 copper industry executives, each of whom manages more than 20 employees. Due to the limited sample, the whole population was included. Expert Choice software was used in the process of analyzing the data.

Research Findings

After determining the objective, the criteria, and the relevant sub-criteria, the survey questionnaire was distributed among the eight heads of the copper industry. Paired comparisons were made between the criteria, and comparison tests were performed for the sub-criteria under each criterion. Paired comparisons used the numbers one to nine in terms of importance, in accordance with Table 1. After the questionnaires were completed, the analysis was performed in the Expert Choice AHP software. First, a paired comparison was made between the two main criteria, individual and organizational. The software output and the results are presented in Figure 2 and Table 2. As is clear from Table 2 and the chart above, according to the experts, individual factors are more important than organizational factors in employees' tendency toward entrepreneurship.
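The computation that Expert Choice performs on such paired comparisons can be sketched in Python as follows: priority weights are taken from the principal eigenvector of the reciprocal comparison matrix, and the consistency ratio is checked against the 0.1 threshold discussed below. The example matrix is hypothetical, not the experts' actual judgments.

```python
import numpy as np

# Sketch of the AHP computation behind the Expert Choice results: priority
# weights from the principal eigenvector of a reciprocal pairwise-comparison
# matrix, plus the consistency ratio (CR). The 5x5 matrix is hypothetical.

RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(A):
    """Return (weights, consistency_ratio) for a reciprocal matrix A."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)              # principal eigenvalue index
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                          # normalized priority vector
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1) if n > 1 else 0.0
    ri = RANDOM_INDEX[n]
    return w, (ci / ri if ri > 0 else 0.0)

# Hypothetical comparison of the five individual sub-criteria.
A = np.array([[1,   3,   4,   5,   2],
              [1/3, 1,   2,   3,   1/2],
              [1/4, 1/2, 1,   2,   1/3],
              [1/5, 1/3, 1/2, 1,   1/4],
              [1/2, 2,   3,   4,   1]])

w, cr = ahp_weights(A)
print("weights:", np.round(w, 3))
print("consistency ratio:", round(cr, 3))   # acceptable if below 0.1
```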
Also, given that only one paired comparison was made between the two main criteria, the inconsistency index for this comparison is zero. Next, the sub-criteria were compared within the individual and organizational criteria separately; the software outputs and results are presented in Figure 3 and Table 3. These results show that risk-taking, with a relative importance of 0.362, is the most important individual factor in employees' tendency toward entrepreneurship. Knowledge, confidence, challenging work, and adapting to the behavior of others occupy the next ranks. The organizational factors were also ranked. According to the copper industry experts, up-to-date technology, with a relative importance of 0.348, is the most important organizational factor in employees' tendency toward entrepreneurship in the organization. Top management support ranks second with 0.290, followed by financial resources, flexible structure, and dynamic leadership in the next ranks. The software then combined the weights of the individual and organizational factors to investigate and prioritize all factors together. The results of this stage are presented below. As can be seen, overall, risk-taking is the most important factor in employees' willingness toward entrepreneurship. It is followed by knowledge, up-to-date technology, top management support, confidence, financial resources, flexible structure, challenging work, behavioral adaptability, and dynamic leadership as the most important factors affecting employees' willingness toward entrepreneurship. At each stage, the inconsistency ratio was calculated by the software. This index is used to assess the consistency of the judgments. If the index is less than 0.1, the inconsistency in the judgments is acceptable; otherwise, the judgments should be revised. As the above results show, the inconsistency index is less than 0.1 for all stages, and therefore the judgments have good consistency.

Conclusion

This study, drawing on a review of the literature on the factors that foster an entrepreneurial workforce, identified 10 such factors and examined them within individual and organizational categories. The results showed that individual factors play a much more important role than organizational factors in creating an entrepreneurial workforce, which was to be expected. What is clear is that the creativity and innovation needed to create entrepreneurs relate more to personal factors than to organizational and external factors. Among the individual factors, it was found that risk-taking, knowledge, confidence, challenging work, and behavioral adaptability rank from most to least important. As the results suggest, risk-taking and the willingness to take risks is the most important factor for entrepreneurs. The low priority of dynamism and leadership can be attributed to the fact that creative individuals often rely on their own innovation and entrepreneurship rather than on monitoring and attention from superiors as leaders, and perhaps, in some cases, because such people have unique psychological characteristics, such attention has the opposite effect.
The Birkhoff Diamond as Double Agent

Despite the existence of a proof of the 4-color theorem, it would seem that there is still more to learn about why any planar graph is 4-colorable. To that end, we take another look at the Birkhoff diamond and discover something new and intriguing: after an extensive search for (rare) Kempe-locked triangulations, we find a Birkhoff diamond subgraph in each one. We offer a heuristic argument as to why that result is not only reasonable but also to be expected and posit that the presence of a Birkhoff diamond is necessary to Kempe-locking. If that conjecture is true, it means that the Birkhoff diamond plays a double role in the matter of 4-colorability, simultaneously working for opposite sides of whether a given planar graph could possibly be a minimum counterexample.

Introduction

Mention the Birkhoff diamond to a mathematician, particularly a graph theorist, and it brings to mind the matter of the 4-color problem. Tell her that you have some new insight about the Birkhoff diamond and she will shudder to think that you might be bold enough to step forward with a claim of a "human" proof (that is, without the significant aid of a computer) of the 4-color theorem. Let us be clear: we make no such claim. However, we do believe there is still more to understand about why any planar graph is 4-colorable and, to that end, we examine the role of the Birkhoff diamond. Needless to say, we discover something new and intriguing along the way: we find that it is plausible, perhaps highly so, that the Birkhoff diamond plays a double role in the matter of 4-colorability, simultaneously working for opposite sides of whether a given planar graph could possibly be a minimum counterexample to the 4-color conjecture, hence the moniker "double agent." (When we mention a minimum counterexample, we refer to the 4-color conjecture because it makes less sense to talk about a minimum counterexample to the proved 4-color theorem.) Though it may appear otherwise to some, we contend that this article is not yet another in a long list of (futile) attempts to find an alternative proof of the 4-colorability of planar graphs; instead, it is about a way of looking at the problem to understand better what it could be about planar graphs that renders them 4-colorable. Our objective is to consider the question: "Why must a planar graph be 4-colorable?" The tentative answer we find is the provocative statement: "Because the Birkhoff diamond appears to serve two opposing masters at the same time."

We presume a basic understanding of graph theory, but define and illustrate several less-common terms and all new ones. Any planar graph that is not a triangulation (a graph all of whose faces are delineated by three edges) can be turned into a triangulation by inserting edges. If that triangulation can be 4-colored, so can the original graph. Hence, we focus on triangulations and their close relatives, near-triangulations, all of whose faces except for one are delineated by three edges. Further, we consider only graphs that are connected, that is, graphs in which there is a path joining every pair of vertices. A graph is said to be k-connected if it has more than k vertices and remains connected whenever fewer than k vertices are deleted (see [6]). Because a k-connected graph cannot be planar if k > 5 (see [6]), we can limit our study to planar triangulations that are either 4-connected or 5-connected. An example of each is shown in figure 1.
The graph on the left is the well-known icosahedron, the smallest 5-connected triangulation. We will see that the graph on the right is the smallest Kempe-locked triangulation, a term we will define in due course. It is not necessary to consider planar triangulations that are only 3-connected because any such graph on more than four vertices must have at least one separating triangle (a triangle with vertices of the graph both inside and outside the triangle) and a graph with a separating triangle cannot be a minimum counterexample to the 4-color conjecture. Assume such a graph T is a minimum counterexample. Then, the vertex sets of both (1) the proper subgraph of T consisting of the separating triangle and everything outside it and (2) the proper subgraph of T consisting of the separating triangle and everything inside it can be 4-colored in such a way that the separating triangle is colored identically in each (through a permutation of colors, as necessary). Thus T can be 4-colored, a contradiction.

Accepted proofs of the 4-color theorem [1,2,8,9] depend on two key ideas: unavoidability and reducibility. In this context, unavoidability refers to a finite set of planar configurations (near-triangulations drawn with the infinite face non-triangular) at least one of which must appear in any planar triangulation that is sufficiently connected to possibly be a counterexample to the 4-color conjecture (one that belongs to a subset of all 5-connected triangulations in which the removal of any 5-cycle disconnects the graph into two components, one of which is a single vertex; for example, the left panel of figure 1). In other words, the set of configurations is unavoidable. A reducible configuration is one whose presence in a planar triangulation renders that triangulation 4-colorable if the near-triangulation with the reducible configuration removed is 4-colorable. Thus, a reducible configuration cannot appear in a minimum counterexample to the 4-color conjecture. Consequently, if one can create an unavoidable set of reducible configurations, there can be no minimum counterexample and the 4-color conjecture is proved. The Birkhoff diamond, depicted in figure 2, is named for the mathematician who proved that it is a reducible configuration [3]. It is the smallest configuration appearing in every one of the unavoidable sets of reducible configurations used to prove the 4-color theorem [1,2,8,9].

Figure 2: The Birkhoff diamond with endpoints x and y.

We have just described the known first role of the Birkhoff diamond in the matter of 4-colorability of planar graphs: its mere presence in a planar triangulation serves to disqualify that graph from being a minimum counterexample to the 4-color conjecture. We now turn to the posited second role, the primary subject of this article: its presence in a planar triangulation is required for that graph to possibly be a minimum counterexample. If this second role could be verified, then the 4-color theorem would follow trivially. We therefore suspect that any proof of the second role would be nontrivial and require superhuman effort. Nevertheless, because our extensive testing has produced strongly suggestive results, we claim that there is new and valuable insight to be gained from the conjectured second role, even absent a proof of its validity. To arrive at the supposition regarding the second role of the Birkhoff diamond, we first need to introduce the idea of Kempe-locking and for that we need to know what a Kempe chain is.
Those are the subjects of the next two sections. With those concepts in hand, we are able to demonstrate that a minimum counterexample to the 4-color conjecture must be Kempe-locked with respect to each of its edges. This is a highly restrictive condition, one that gets harder and harder to meet as the size of a triangulation increases. (Because the 4-color conjecture has been proved, we know that the condition is actually impossible to satisfy.) The results of an extensive search for Kempe-locked triangulations lead to the plausible conjecture that a planar triangulation cannot be Kempe-locked with respect to an edge unless the vertices serving as endpoints of that edge also serve as the "endpoints" of a Birkhoff diamond (x and y in figure 2).

Kempe chains

An important tool in graph coloring is the Kempe chain, named after the British mathematician whose famous attempt at proving the 4-color conjecture failed [7]. We deal only with proper vertex-colorings of a graph, those in which adjacent vertices (those joined by an edge) must have different colors. Following the standard convention, we use the integers 1, 2, 3, 4, . . . to indicate distinct colors. In a given coloring of a graph G, a Kempe chain is a maximal, connected, induced subgraph of G whose vertices use only two colors, let us say colors i and j. (An induced subgraph F of a graph G is one in which all edges in G that join vertices in the vertex set of F are also edges in F.) An i-j Kempe chain is "maximal" in the sense that every vertex adjacent to, but not in, the chain uses a color other than i or j. A short i-j Kempe chain consists of a single vertex and uses color i or color j. All vertices adjacent to such a single-vertex Kempe chain use colors other than i or j. Kempe chains are illustrated in figure 3. Kempe chains are particularly useful for "navigating" among a subset (possibly the whole set) of the distinct 4-colorings of G because interchanging colors on a chain (that is, interchanging the color labels i and j for all vertices constituting an i-j chain) leaves G properly colored. Interchanging colors on an i-j Kempe chain does not result in a distinctly different coloring of G if there is only one i-j chain in G. Any proper vertex-coloring of a graph partitions the vertex set of the graph into color classes, and when there is only one i-j Kempe chain, a color interchange on that chain does not alter the partitioning of the vertex set into color classes.

Kempe-locked triangulations

Kempe-locking is a property of a planar triangulation T with respect to one of its edges xy. Let G_xy denote the near-triangulation that results when the edge xy is deleted from T: T is said to be Kempe-locked with respect to the edge xy if, in every 4-coloring of G_xy in which the colors of x and y are the same, there are precisely three Kempe chains that include both x and y. Thus, if T is Kempe-locked with respect to the edge xy, then given any 4-coloring of G_xy in which x and y are both colored the same, without loss of generality color 1, there must be 1-2, 1-3, and 1-4 Kempe chains including both x and y. Interchanging colors on any of those chains leaves x and y sharing the same color. Moreover, because T is Kempe-locked with respect to xy, interchanging colors on any Kempe chain involving neither x nor y results in a 4-coloring in which there are still 1-2, 1-3, and 1-4 Kempe chains that include both x and y.
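Since everything that follows leans on these two operations, a small computational sketch may help fix the ideas. The following Python fragment is our own illustration, not code from this article; the adjacency-dictionary encoding of a graph and all names are our choices.

```python
def kempe_chain(adj, coloring, start, i, j):
    """Vertex set of the i-j Kempe chain containing `start`.

    `adj` maps each vertex to an iterable of its neighbors; `coloring`
    maps each vertex to its color. The chain is the maximal set of
    vertices colored i or j reachable from `start` through vertices
    colored i or j only (a simple flood fill).
    """
    if coloring[start] not in (i, j):
        raise ValueError("start vertex must use color i or j")
    chain, frontier = {start}, [start]
    while frontier:
        v = frontier.pop()
        for w in adj[v]:
            if w not in chain and coloring[w] in (i, j):
                chain.add(w)
                frontier.append(w)
    return chain


def interchange(coloring, chain, i, j):
    """Swap colors i and j on `chain`; a proper coloring stays proper."""
    for v in chain:
        coloring[v] = j if coloring[v] == i else i
```

Interchanging colors on the set returned by kempe_chain is exactly the recoloring move used throughout the article.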
If interchanging colors on Kempe chains is the only method of recoloring available, G_xy, once in the state in which x and y have the same color, is "locked into" that state. Figure 4 shows the near-triangulation G_xy that results from deleting the edge xy in the triangulation T on 12 vertices (we say that T is of order 12) illustrated in the right panel of figure 1. We adopt a convention, illustrated in figure 4, of drawing G_xy with the 4-face as the exterior (infinite) face and with x and y denoting the leftmost and rightmost vertices, respectively, on the boundary of that face. We label the boundary 4-cycle uxvy with u the bottommost vertex and v the topmost vertex. The G_xy depicted in figure 4 has as a subgraph the Birkhoff diamond from figure 2; it is highlighted. Figure 3 gives a proper 4-coloring of this G_xy in which x and y are both colored 1. From figure 3, we note that there are 1-2 and 1-4 Kempe chains (each of which includes part of the boundary of G_xy as drawn) that include both x and y, and a 1-3 Kempe chain including both x and y that snakes all the way through the interior of G_xy. It is easily verified that there are only five other distinct 4-colorings of this G_xy with both x and y colored 1. They are readily found once v is colored 2 and the upper boundary of the Birkhoff diamond is colored 1-3-4-1 as in figure 3, with no loss of generality in any of these color choices. In each of those five distinct additional 4-colorings of G_xy, there are 1-2, 1-3, and 1-4 Kempe chains including both x and y. Thus, the planar triangulation T from which this G_xy is derived (the right panel of figure 1) is Kempe-locked with respect to the edge xy. It features a Birkhoff diamond subgraph with endpoints x and y.

We now show that a minimum counterexample to the 4-color conjecture must be Kempe-locked with respect to each of its edges. Let a planar triangulation T be a minimum counterexample to the 4-color conjecture. Consider an arbitrary edge xy and coalesce x and y into w so that all the edges formerly incident to x and y now become incident to w. This so-called edge contraction yields a new triangulation T′ with one fewer vertex than T. Because T is assumed to be a minimum counterexample, T′ can be 4-colored. Then, when w is split apart into the original x and y, but without replacing the edge xy, we obtain a near-triangulation G_xy in which the colors of x and y are the same. Now suppose that T is not Kempe-locked with respect to the edge xy. Then there must be a 4-coloring of G_xy in which x and y are colored the same and in which there is a Kempe chain that includes x but does not include y. Interchanging colors on such a chain results in a 4-colored G_xy with the color of x not the same as the color of y. In this 4-coloring, the edge xy can be inserted to yield a 4-coloring for T, in contradiction to the assumption that T is a minimum counterexample.

Fundamental Kempe-locking configurations

At last we are in a position to identify configurations that are critical to Kempe-locking. Using the same notation as previously, let T be a planar triangulation and let G_xy be the near-triangulation derived from it by deleting the edge xy. Finally, with uxvy the 4-cycle delineating the infinite 4-face of G_xy, let K_xy be the near-triangulation that results when the bottommost and topmost vertices u and v, respectively, and the edges incident to them, are deleted. We refer to K_xy as the Kempe-locking configuration for T with respect to the edge xy.
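The Kempe-locking test just carried out on this G_xy can be phrased directly as a brute-force computation. The sketch below is again our own and reuses kempe_chain from the earlier fragment; it enumerates proper 4-colorings by naive backtracking, which is feasible only for the small orders considered in this article.

```python
def colorings(adj, k=4):
    """Yield every proper k-coloring of the graph `adj` (backtracking).

    No symmetry reduction is attempted, so this is exponential in the
    order of the graph and only suitable for small graphs.
    """
    order = list(adj)
    col = {}

    def backtrack(idx):
        if idx == len(order):
            yield dict(col)
            return
        v = order[idx]
        used = {col[w] for w in adj[v] if w in col}
        for c in range(1, k + 1):
            if c not in used:
                col[v] = c
                yield from backtrack(idx + 1)
                del col[v]

    yield from backtrack(0)


def is_kempe_locked(g_xy, x, y):
    """Test T's Kempe-locking with respect to xy, given G_xy = T minus xy.

    True iff at least one 4-coloring of G_xy gives x and y the same
    color c, and every such coloring has x and y on a common c-j Kempe
    chain for all three colors j != c (so precisely three chains pass
    through both x and y).
    """
    found = False
    for col in colorings(g_xy):
        if col[x] != col[y]:
            continue
        found = True
        c = col[x]
        for j in {1, 2, 3, 4} - {c}:
            if y not in kempe_chain(g_xy, col, x, c, j):
                return False
    return found
```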
If T is of order n, then K_xy is of order n − 2. Are there certain Kempe-locking configurations that are more fundamental than others? Indeed. If a Kempe-locking configuration K_xy has no subgraph K′_xy that is the Kempe-locking configuration with respect to the edge xy for some planar triangulation T′ with an edge xy, then we say that K_xy is a fundamental Kempe-locking configuration. From the previous section we see that the Birkhoff diamond is a Kempe-locking configuration and, as we will learn shortly, it turns out to be a fundamental Kempe-locking configuration. Refer to figure 5 for an example of a Kempe-locking configuration that is not fundamental. Are there any Kempe-locking configurations other than the Birkhoff diamond that are fundamental? Investigating this question is an important step toward determining whether a minimum counterexample can exist. We have shown that a minimum counterexample must be Kempe-locked with respect to each of its edges. Hence, the vertices serving as the endpoints of any given edge in a minimum counterexample must also serve as the endpoints of a fundamental Kempe-locking configuration, which may or may not be a proper subgraph of a Kempe-locking configuration. Because a triangulation of order n has 3n − 6 edges (see [6]), it would seem that finding a minimum counterexample of order n becomes less and less likely as n increases. Likewise, it would seem that finding a fundamental Kempe-locking configuration becomes less and less likely the higher its order.

Consider a coloring of G_xy of order n in which x and y are both colored k. If there is a 2-color path between v and u that uses colors i, j ≠ k, then there cannot be three Kempe chains that include both x and y. A fundamental Kempe-locking configuration K_xy must be able to "block" the passage of such 2-color paths from v to u in all 4-colorings of G_xy in which x and y have the same color. There are two ways in which this so-called blocking can occur: the fundamental Kempe-locking configuration can (1) prevent the transmission of the 2-color path from the configuration's top boundary to its bottom boundary on the way from v to u or (2) ultimately force u to take a color other than i or j. Refer to figure 3 to see how the Birkhoff diamond configuration is able to block the 2-4 path by way of (1) and how it is able to block the 2-3 path by way of (2). Either (1) or (2) will occur whenever there is Kempe-locking. As the order of G_xy increases, it becomes increasingly unlikely to find a fundamental Kempe-locking configuration K_xy, because the number of 4-colorings of G_xy with x and y colored the same grows rapidly and the probability that there will be no 4-coloring at all with a relevant 2-color path between v and u correspondingly diminishes rapidly. However, since we shall learn that the Birkhoff diamond is a fundamental Kempe-locking configuration, we should expect to see it appear as a subgraph of larger and larger Kempe-locked triangulations due to its ability to block 2-color paths. Indeed, that is what we shall discover. The approach that we adopted in the search for fundamental Kempe-locking configurations is analogous to that of experimental physicists in their search for a new elementary particle with specified properties. Physicists confine their explorations to collision events involving total energy in an interval sufficient to bring the sought-after particle into being.
Similarly, we explore graphs of orders in which fundamental Kempe-locking configurations would be expected to be found. The best chance to discover fundamental Kempe-locking configurations would seem to occur when there is a small number of distinct 4-colorings of a near-triangulation G_xy with x and y the same color, thus admitting the possibility that every one of those few colorings will feature three Kempe chains that include both x and y. We are naturally led to consider low-order triangulations that are either 4-connected or 5-connected. To assure that we did not miss any low-order triangulations, we generated the full set of 8,044 isomorphism classes for 4-connected triangulations of orders 6-15 and the full set of 9,733 isomorphism classes for 5-connected triangulations of orders 12-24. (See [4] and [5].) We then tested every edge in a triangulation from each of the isomorphism classes to determine if there are any 4-colorings with x and y colored the same but with fewer than three Kempe chains that include both x and y. It turns out to be rare for no such coloring to exist, that is, for an edge to be Kempe-locked. The only Kempe-locked triangulations we encountered were 4-connected; there were none at all among 5-connected triangulations. There are no Kempe-locked triangulations of order less than 12, a single 4-connected Kempe-locked triangulation of order 12 (the right panel of figure 1), none of order 13, a single one of order 14, and a single one of order 15. Those three Kempe-locked triangulations all feature a Birkhoff diamond configuration with x and y as endpoints. Each is Kempe-locked with respect to only a single edge. A sketch of the driver for this edge-by-edge search is given after this section.

In an expanded search for fundamental Kempe-locking configurations, we examined 4-connected triangulations of orders 16-20. Because the number of isomorphism classes grows rapidly with increasing order (from 30,926 at order 16 to 24,649,284 at order 20; refer to [5]) and because the number of edges in a triangulation increases with increasing order, we soon ran into computation-time limitations imposed by our laptop computer. After deciding to limit aggregate computer execution time to weeks instead of months, we proceeded in the expanded search by examining all 30,926 isomorphism classes of order 16 and all 158,428 isomorphism classes of order 17, but only 100,000 randomly generated non-isomorphic triangulations for each order from 18 through 20. For orders 16 and 17, we discovered eight and fourteen non-isomorphic triangulations, respectively, that are Kempe-locked, all with respect to a single edge, call it xy in each case. Each of those 4-connected Kempe-locked triangulations features a Birkhoff diamond with x and y as endpoints. Figure 5 shows the G_xy derived from one of the Kempe-locked triangulations of order 16. In the random samples of 100,000 triangulations each for orders 18-20, we discovered additional non-isomorphic Kempe-locked triangulations: ten of order 18, eight of order 19, and five of order 20, all locked with respect to a single edge xy and all featuring a Birkhoff diamond configuration with x and y as endpoints. It is an open question whether there are any triangulations that are at least 4-connected and Kempe-locked with respect to more than a single edge.

Let us take stock of the results of our search for fundamental Kempe-locking configurations. We found no 5-connected Kempe-locked triangulations of orders 12-24. The only fundamental Kempe-locking configuration we found is the Birkhoff diamond of order 10.
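A driver for the search just described might look as follows. This is again our own sketch, not the authors' code: it assumes integer vertex labels and a stream of triangulations already parsed into adjacency dicts (for instance, from the output of an external generator such as plantri; the parser is not shown), and it relies on the brute-force is_kempe_locked above, which limits the practical order.

```python
def edges_once(adj):
    """Yield each undirected edge of `adj` exactly once (integer labels)."""
    for x in adj:
        for y in adj[x]:
            if x < y:
                yield x, y


def find_kempe_locked(triangulations):
    """Yield (index, x, y) for every Kempe-locked edge in the stream.

    `triangulations` yields adjacency dicts (vertex -> set of neighbors).
    """
    for idx, adj in enumerate(triangulations):
        for x, y in edges_once(adj):
            # Delete edge xy to form G_xy, then apply the locking test.
            g = {v: set(ws) for v, ws in adj.items()}
            g[x].discard(y)
            g[y].discard(x)
            if is_kempe_locked(g, x, y):
                yield idx, x, y
```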
We have shown that there are no fundamental Kempe-locking configurations of orders 9 or less or between 11 and 15, inclusive, and a sample of 100,000 non-isomorphic 4-connected triangulations each for orders 18-20 turned up no fundamental Kempe-locking configurations of orders 16-18. We conclude that the "experimental" case is strong that the Birkhoff diamond is the only fundamental Kempe-locking configuration.

Conclusion

The results of our search for fundamental Kempe-locking configurations point to a reasonable conjecture that the Birkhoff diamond is the only one. An equivalent way to state this conjecture is that the existence of a Birkhoff diamond configuration with endpoints x and y in a planar triangulation T with an edge xy is a necessary, but not sufficient, condition for T to be Kempe-locked with respect to the edge xy. This is the conjectured second role of the Birkhoff diamond: as the "agent" for the existence of a minimum counterexample to the 4-color conjecture. A minimum counterexample to the 4-color conjecture must be Kempe-locked with respect to each of its edges, and if the conjecture regarding the Birkhoff diamond is true, then a minimum counterexample to the 4-color conjecture must have at least as many Birkhoff diamond subgraphs present as there are edges, any one of which would render the minimum counterexample 4-colorable and hence not a minimum counterexample. Moreover, the existence of that many Kempe-locking Birkhoff diamonds in a planar triangulation is easily seen to be impossible because the degrees of x and y in an edge xy that is Kempe-locked by a Birkhoff diamond must be at least 6: having as many such Birkhoff diamonds as there are edges would mean that the triangulation has minimum degree 6 and is thus nonplanar (see [6]). (In the icosahedron there are as many Birkhoff diamonds as edges, but none is Kempe-locking.) Indeed, such an every-edge-Kempe-locked triangulation cannot be constructed because the signature feature of a Birkhoff diamond is its central diamond consisting of four vertices of degree 5; thus, none of the five edges of that central diamond can support a Birkhoff-diamond Kempe-locking configuration. For a planar triangulation to be a minimum counterexample, it appears necessary for it to have a Birkhoff diamond subgraph, but if it does, then it cannot be a minimum counterexample. As remarkable as the Birkhoff diamond is, only its imaginary "quantum" version can be both present and absent at the same time. As stated in the introduction, it will likely be very hard to prove that the Birkhoff diamond is the only fundamental Kempe-locking configuration, a situation that is unfortunately all too common in mathematics: a conjecture that is easy to formulate, has plenty of supporting evidence (namely, despite efforts to find one, there is no known counterexample), is highly difficult to prove, and is possibly false. Nevertheless, the conjecture posited in this article, if true, has the satisfying feature that the Birkhoff diamond configuration alone would explain why any planar graph is 4-colorable. That makes the conjecture worthy of further study.
Strategies and Optimizing the Role of Productive Waqf in Economic Empowerment of the Ummah

The understanding and empowerment of waqf assets among Muslims have undergone significant changes in both paradigm and operational practice. The development of productive waqf aims to achieve social justice and improve the welfare of the people. Productive waqf therefore has two visions at once: destroying unequal social structures and providing fertile land for the welfare of the people. This research uses a qualitative method with a descriptive-analytical approach. The data used were secondary data, namely literature studies and relevant previous research results. The results show that waqf plays an important role as an instrument for empowering the economy of the people, and that waqf has played an important role in the social, economic, and cultural development of society. There are at least four basic problems facing the Islamic da'wah movement. First, the problem of poverty, both in economic terms and in the limitation of facilities and physical needs, which creates a culture of poverty. Second, out of poverty arise the symptoms of underdevelopment. Third, there is an exclusive and involutive attitude. Finally, there is the weakness of the institutions for accommodating participation and the weakness of the cooperation mechanism to wage a systematic struggle. Waqf is an alternative that is expected to provide solutions to these problems; therefore, the optimal management of waqf objects is required.

INTRODUCTION

Islam, which is based on two sources, namely the Qur'an and the Sunnah, is a religion that is complete, perfect, universal, and applicable to all times and places. Its teachings are considered sacred by its adherents. On the other hand, every religious adherent tries to translate the teachings of his religion into religious behavior as the actualization of those teachings. However, this form of religiosity is very "human," meaning that it depends greatly on the level of knowledge and the ability to understand or grasp the teachings, plus factors of custom, environment, and so on. Waqf is a potential source of funds for the people that needs to be developed, utilized, and managed in a professional manner to obtain optimal benefits, alleviate poverty, and improve the welfare of the people. To mobilize the potential of waqf, a partnership is needed between the waqf institutions formed by the community and the waqf agency formed by the government, whose members consist of nazhirs from society and from the government, working in a professional manner (Kasdi, 2017). However, the term productive waqf is not very familiar in Indonesian society. This can be seen from the understanding of Indonesian people, who see waqf as limited to giving in the form of immovable property, such as land and buildings for places of worship, graves, boarding schools, orphanages, and education only. The utilization of waqf objects is still confined to their natural physical form, so that they have no economic impact. The waqf possessions of the Indonesian community have not been able to cope with the poverty problem, because waqf has not been managed maximally, while the number of poor people increases from year to year in both urban and rural areas.
(Indriati, 2017)

There are two patterns for developing the yields of productive waqf assets that managers can adopt. First, development of waqf for social activities, such as waqf for social justice, people's welfare, development, health facilities, public policy advocacy, legal policy, human rights, protection of children, environmental conservation, empowerment of women, and the development of arts and culture. Second, development with economic value, such as developing trade, financial investment, developing industrial assets, property purchases, and so on. The existing waqf has not touched much on contextual understanding. Waqf is one of the various activities of Islamic economics, and it is a subject that has not been extensively investigated. The existing discussion still concentrates on the subject matter of fiqh, the philosophy of Shari'ah, usury, Shari'ah finance and banking, and so on; theoretical and practical discussions are rare. In terms of collecting public funds, the discussion has centered on the issue of zakat, while other fields have not received sufficient attention. Many Islamic economic activities can be undertaken to accumulate public funds: such funds can be collected not only from zakat but also from sources such as sadaqah, infaq, waqf, and so on. (Hadi, 2017)

Therefore, there is no other way to minimize the economic disparity of the people except by maximizing the roles of the empowerment institutions that already exist. In Islam, we are familiar with the institutions of waqf and zakat. When the economy is in a worrying state, this is precisely where waqf has a role, in addition to the other instruments. Its benefits for improving people's lives, particularly in the economic sector, can be felt if waqf is managed properly. The allocation of waqf in Indonesia, which contributes little to the economic empowerment of the people and tends to serve only special worship activities, is influenced mainly by the limited understanding Muslims have of waqf, regarding the donated property, the allotment of endowments, and the nazhir of waqf. In general, Indonesian Muslims understand the designation of waqf as limited to the benefit of worship and to the things that are usually done in Indonesia: mosques, prayer rooms, schools, madrasas, Islamic boarding schools, cemeteries, and so on. (Kasdi, 2014)

Thus, it can be said that in Indonesia, until now, the potential of waqf as a means of doing good for the interests of the community has not been managed and utilized to the maximum on a national scale. Practical experience with waqf creates a certain image or perception of it. First, waqf is generally in the form of immovable objects, especially land. Second, in practice, a mosque or madrasa is built on the land. Third, its use is based on the will of the waqf giver (waqif). In addition, the interpretation arises that, in order to maintain its permanence, donated land is prohibited from being sold. As a result, banks in Indonesia do not accept donated land as collateral. In fact, if donated land could be used in this way, organizations such as NU and Muhammadiyah, or universities and colleges, could obtain loan funds to be put to work and produce something. (Al-hadi 2009)

The understanding and empowerment of waqf assets among Muslims have undergone significant changes, both in paradigm and in operational practice.
At the level of paradigm, waqf, which was initially understood only in terms of limited use for worship, in the form of mosques and musallas, is currently expanding toward efforts to exploit various goods or objects that are economically productive. Meanwhile, at the practical level, waqf is now starting to develop into forms of productive use and a means of economic improvement, such as productive waqf for education, hospitals, supermarkets, etc. (Hadi, 2017) This broader understanding and empowerment of waqf wealth becomes important, especially when it is associated with the concept of developing productive waqf that aims to achieve social justice and improve well-being. Productive waqf therefore has two visions: destroying unequal social structures and providing fertile land for the well-being of the people. This vision is the derivation of a philosophy which implies that waqf places more emphasis on empowering the potential of waqf, so that waqf has not only a divine dimension but also a pro-humanity one. This is a waqf that is more welcoming to the reality of a people stricken by poverty, ignorance, and backwardness. (Dikuraisyin, 2020)

There is no doubt that waqf in the history of Islamic civilization was a pillar of support for the establishment of the socio-religious institutions of the Muslim community for centuries. This was done through the provision of funds and supporting facilities for religious ritual activities, education, and health. In fact, waqf at that time had significant social functions, providing public facilities such as roads, bridges, drinking water, parks, cities, and public baths. Waqf has endorsed initiatives in social justice, education, health, and other goals compatible with the paradigm of benefit, which is also part of the orientation of maqasid asy-syari'ah.

The research gap concerns the changing understanding and empowerment of waqf assets among Muslims, which have undergone significant changes, both in paradigm and in operational practice. The development of productive waqf aims to achieve social justice and improve the welfare of the people; productive waqf therefore has two visions at once, destroying unequal social structures and providing fertile land for the welfare of the people.

RESEARCH METHODS

This study uses a qualitative research model with the assumption of research and development; the purpose of the research model is to be gradually researched and developed qualitatively (Sugiyono, 2005). Qualitative research is defined as a method used in the condition of natural objects and is also called the naturalistic, experimental, or ethnographic method. Research and development is the basic assumption of research that begins with introduction, development, and implementation (Sugiyono, 2005). The obtained data were analyzed using the data analysis model of Miles and Huberman, namely reduction, display, and conclusion (Sugiyono, 2005). In the final stage, all data were analyzed for validity using the source triangulation technique, through repeated checking in the final stage of observation (single source triangulation) and through related theoretical documents, both documents obtained from online media and physical ones.

RESULTS AND DISCUSSION

Productive Waqf Empowerment

The above phenomenon encourages waqf managers, the government, and ulama to reinterpret the meaning of waqf. Waqf is not only understood in the spiritual dimension; it also contains a socio-religious dimension and has the potential to increase the economy and welfare of Muslims.
One of the efforts at waqf empowerment is to optimize the role of waqf so that it becomes more productive. Waqf has great potential to be developed into productive assets, which in the end are able not only to support socio-religious services but also to support various initiatives in social justice and education. According to Jaih Mubarok (2008: 15-16), productive waqf is a transformation toward professional management of waqf in order to increase the benefits of waqf. Productive waqf can also be interpreted as a process of managing objects so as to produce maximum goods or services with minimum capital. According to Mundzir Qahaf, productive waqf transfers assets from consumptive uses toward production and investment, in the form of production capital that can produce something usable in the future, whether by individuals, by groups, or by the general public. Thus, productive waqf comprises saving and investment activities simultaneously (Qahaf, 2006).

Said and Lim (2005: 6-7) conducted research on strategies for empowering waqf assets to become productive. According to them, there are five steps in the strategy of empowering waqf so that it becomes productive waqf. First, recognizing the potential for putting waqf assets to work by looking at the history or the waqf models that have been running, and updating the waqf system. Second, facilitating the modern waqf model by applying waqf management techniques, as long as the objectives do not conflict with Sharia principles. Third, promoting Islamic philanthropy through waqf, so that waqf can become a backbone for society and potentially play an important role in public services. In addition, productive waqf can be an alternative in times of crisis, when the government is no longer able to meet people's needs. Fourth, modernizing the administration of waqf, so that the waqf management structure can be more efficient, transparent, and responsive, as well as establishing technical cooperation and exchanging experiences with educational institutions, international organizations, and other countries to develop waqf investment. Fifth, making productive the waqf that was previously unproductive, by generating commitment from the waqif, the nazhir, investors, and the surrounding community who know the benefits of waqf.

The emergence of the productive waqf paradigm is the main choice when people are in a downturn of acute poverty. With productive waqf, the existing waqf is given top priority and is aimed at more productive efforts. Of course, with paradigm measures different from those of consumptive waqf, it gives new hope to most of the Muslim community. This waqf does not intend to direct waqf toward mahdah worship as such, but is rather directed at productive efforts to solve the problems of the people. Of course, this productive waqf has a social dimension: productive waqf devotes itself to the needs of Muslims. Thus, it can be seen that this waqf is pro-humanitarian and not a waqf with a divine dimension only. Therefore, what appears in this waqf is a waqf that is more welcoming to the reality of Muslims hit by poverty, ignorance, and underdevelopment.

Globally, productive waqf has become the main paradigm in managing assets. Egypt, Algeria, Sudan, Kuwait, and Turkey have long been managing waqf in a productive direction. In Sudan, for example, the Sudan Waqf Board manages unproductive waqf assets by establishing the Waqf Bank. This financial institution is used to assist waqf development projects in establishing business and industrial companies.
Another example: to develop the productivity of waqf assets, the Turkish government established the Waqf Bank and Finance Corporation. This institution specifically mobilizes waqf sources and finances various types of joint venture projects. In addition, in countries where the Muslim population is a minority, the development of waqf is no less productive. In Singapore, waqf assets amount to $250 million. To manage them, the Singapore Islamic Council (MUIS) created a subsidiary called Wakaf Real Estate Singapore (Warees). Warees is a contracting company that maximizes waqf assets. As an example of empowering this potential, Warees built an 8-storey building on waqf land. Financing was obtained from a sukuk loan of $3 million, to be repaid over five years. The building is for rent, and its annual net income is $1.5 million. After three years of operation, the loan was paid off; thereafter, the income belongs to MUIS and is allocated for the welfare of the people.

In Indonesia, the development of productive waqf found its bright point with the passing of Law No. 41 of 2004 concerning waqf and Government Regulation No. 42 of 2006 on the implementation of Law No. 41 of 2004. The empowerment of productive waqf is characterized by three principles. First, the pattern of integrated waqf management, in which waqf funds can be allocated for flagship programs. Second, the principle of nazhir welfare: the nazhir profession is no longer positioned as social work, but rather as professional work that deserves a decent living from that profession. Third, the principles of transparency and accountability: waqf bodies and waqf institutions must report the entire process of managing funds to the community annually (Antonio, 2007).

Productive Waqf Development Strategy

Implementing productive waqf requires a strategy that can develop productive waqf. According to Eddy Khairani, such a strategy includes the following: waqf property is inventoried throughout Indonesia through a computerized system; the potential of waqf property is mapped, so that the potential that can be developed can be identified; and advocacy, protection, and resolution of waqf land disputes with third parties are carried out. d. Enhancement of the quality of nazhirs and waqf institutions: nazhirs and waqf management institutions, as the spearhead of the management and development of waqf property, are given motivation and coaching to improve management professionalism through various trainings and orientations; the quality of nazhirs in Indonesia continues to be given motivation and direction so that it improves, both in managerial abilities and in the critical individual skills needed to empower waqf productively. e. Facilitating productive waqf investment partnerships: as a motivator and facilitator, the Directorate General of Islamic Community Guidance facilitates various events in order to build partnerships with potential investors, such as the Investment Coordinating Board (BKPM) and the Chamber of Commerce and Industry (KADIN) in several regions, to empower waqf productively; waqf assets in Indonesia are quite large and have the potential to be developed by inviting third-party agencies interested in waqf development.
f. Facilitating the formation of the Indonesian Waqf Board (BWI): in order to support the management and development of waqf in Indonesia, the Directorate General of Islamic Community Guidance facilitated the formation of the Indonesian Waqf Board (BWI) as an institution whose tasks include fostering nazhirs throughout Indonesia (Khairani, 2013).

Thus, there are six strategies to empower productive waqf, ranging from legal products to building networks in the form of productive investment partnerships. One instrument is cash waqf, which can open up unique opportunities to create investment in order to provide religious, educational, and social services. The savings of the well-off can be mobilized in exchange for Cash Waqf Certificates. The development returns of the waqf obtained from the certificates can be used for purposes as varied as the purposes of waqf itself. Another feature of the Cash Waqf Certificate is that it can change the old habit in which the opportunity to give waqf seemed to be reserved only for the rich. (Indriati, 2017)

Productive Waqf and Empowerment of the Economy of the People

Waqf is a subject that has not been extensively investigated, because Muslims have almost forgotten the activities that originate from waqf institutions. Mismanagement and problems of corruption are thought to be the main causes, so that the activities of waqf institutions attracted little interest, or were even abandoned by Muslims, less than a century ago. (Kurniawan, n.d.)

Waqf is important for empowering the people's economy. In its history, waqf has played an important role in developing the social, economic, and cultural life of communities. The most prominent role of the waqf institution has been in financing Islamic education and health. The continuity of the benefits of endowments makes productive waqf well suited to supporting various social and religious activities. In general, productive waqf takes the form of agricultural land or plantations and buildings, managed so as to provide benefits whose results can be used for these activities. Thus, waqf assets have become a source of funds from the people and for the people. (Indriati, 2017)

The Indonesian nation faces two challenges in running the wheel of development: the gap between the rich and the poor is widening, and there is a tendency toward increasing dependence of the poor on the owners of capital, and of Indonesia on developed countries. Sasono added that there are at least four basic problems facing the Islamic da'wah movement. First, poverty, both in economic terms and in the limitation of physical means and needs, which gives birth to a culture of poverty. Second, as a result of poverty, the signs of underdevelopment appear. Third, the emergence of an involutive and exclusive character. Fourth, the weakness of institutions for accommodating participation and the weakness of the cooperation mechanism to wage a systematic struggle. (Kurniawan, n.d.)

Waqf is one of the alternatives expected to provide solutions to these problems. Thus, optimal management of waqf assets is needed; however, many waqf assets are currently not optimally managed. According to the latest Ministry of Religion data, there is donated land in Indonesia at 403,845 locations, with an area of 1,566,672,406 m². Of these, 75% have been certified as waqf, about 10% have high economic potential, and many have not yet been recorded. This indicates a limited understanding of waqf assets, which are seen as immovable objects intended only for worship purposes, such as mosques, prayer rooms, madrasas, funerals, and others.
In fact, waqf can be managed productively. Examples are the Foundation for the Maintenance and Expansion of Waqf of the Gontor Islamic Boarding School, East Java, and the Waqf Board of the Islamic University of Indonesia, both of which manage waqf assets productively. In efforts to manage waqf land productively, the role of the nazhir, the person or institution tasked with managing the waqf, is central. The nazhir is one of the pillars of waqf, with the responsibility and obligation to maintain, preserve, and develop the waqf, and to distribute its results and benefits to the waqf beneficiaries. Often the nazhirs managing waqf assets do not have sufficient capacity, so that the waqf assets they manage are not optimized and cannot provide benefits for the waqf beneficiaries. In the books of fiqh, the requirements for becoming a nazhir, apart from being Muslim and mukallaf, are that the nazhir must have the ability to manage waqf and be trustworthy, honest, and fair. (Kasdi, 2016)

If waqf property is to be managed optimally and the nazhir is to have the ability to manage waqf, political support from the government is needed in the context of empowering civil society. The potential of waqf in empowering the people is encouraged by the government politically, through waqf legislation, so that waqf can function productively. For example, Dompet Dhuafa Republika, an innovation from civil society, is a form of concern that arose from society. Muslims have the freedom to manage wealth in accordance with Islamic financial principles. This system not only profits the community but also supports government programs. Such circumstances will open opportunities for the empowerment of productive waqf as an effort to increase people's welfare (Kasdi, 2015). To optimize waqf property for the realization of the purpose of waqf, namely as a means and infrastructure to improve the quality of life and of human resources, it is necessary to change the public's understanding of waqf, which regards waqf assets as limited to immovable assets, such as graves, mosques, foundations, and Islamic boarding schools, that cannot be made productive. The waqf legislation, namely Law No. 41 of 2004 concerning Waqf and Government Regulation No. 42 of 2006 concerning the Implementation of Waqf, both emphasize that, apart from the importance of mahdah worship, waqf needs to be empowered productively for our common interest, namely the welfare of the people.

CONCLUSION

Waqf plays an important role as an instrument for empowering the economy of the people, and it plays an important role in the economic development and the social and cultural life of the community. There are four problems facing the Islamic missionary movement that waqf can address. First, poverty, both in economic terms and in the limitation of physical means and needs, which gives birth to a culture of poverty. Second, as a result of poverty, the signs of underdevelopment appear. Third, the emergence of an involutive and exclusive character. Fourth, the weakness of institutions for accommodating participation and the weakness of the cooperation mechanism to wage a systematic struggle. Thus, waqf is an alternative that can provide solutions to these four problems, and it is therefore necessary to optimize waqf management. Waqf requires a strategy that can develop productive waqf.
According to Eddy Khairani, there are several things that can be done in the productive waqf development strategy, namely: regulation of waqf laws and regulations; socialization of the waqf legislation and of the new paradigm of waqf; certification, inventory, and advocacy of waqf assets; enhancing the quality of nazhirs and waqf institutions; facilitating productive waqf investment partnerships; and facilitating the formation of the Indonesian Waqf Board (BWI).
The right to be wrong

Nothing holds science back longer than clinging to what should not be clung to, and all too often it's fear, fear of the consequences of having made a mistake, that keeps ideas around long past their expiration date.

In data on nearly 400,000 patients without known coronary artery disease who had been referred for elective procedures in the CathPCI Registry, only 38% were found to have obstructive disease, whereas 39% had little or no disease (i.e., "normal" coronary arteries). Adding to these troubling findings were the observations that a large number of patients were asymptomatic (~30%) and noninvasive testing before the procedure did not improve diagnostic yield. Ko et al. (3) further explored these issues in an intriguing report published earlier this year in the Journal of the American Medical Association, in which cross-national comparison data were used between New York State and Ontario. In this study, the authors compared 18,114 patients in New York and 54,933 patients in Ontario who were undergoing elective coronary angiography (utilizing a government-funded, single-payer system) from 2008 to 2011. The overall rate of obstructive disease in New York was only 30%, compared with 45% in Ontario, a finding primarily driven by a higher rate of referral of low-risk patients in New York. Using a risk model based on clinical factors and noninvasive testing, fewer than 1 in 5 patients in New York had a greater than 50% likelihood of obstructive coronary artery disease, compared with more than 2 in 5 patients in Ontario. Importantly, no underdetection of patients with surgical coronary artery disease (left main disease or 3-vessel coronary artery disease) was noted, despite a historically 50% lower use of coronary angiography per capita in Ontario. Thus, a more restricted approach to patient selection for coronary angiography in Ontario did not appear to miss those with critical disease.

In this issue of the Journal, Bradley et al. (4) add to this discussion with a report from the Veterans Affairs (VA) Healthcare System's Cardiovascular Assessment, Reporting and Tracking for Cath Labs (CART-CL) program. This study is important because the VA Healthcare System represents a large, integrated healthcare delivery system in the United States where financial incentives for performing coronary angiography and medico-legal concerns may be less than in the private sector. The authors utilized data from 76 VA cardiac catheterization laboratories between 2007 and 2010. Of the 22,538 patients who underwent elective coronary angiography during this time period, 4,829 had normal coronary arteries (21%) and 11,622 (52%) had obstructive disease. Patients with normal coronary arteries were more likely to have low Framingham risk scores and to have undergone a noninvasive test. To assess hospital-level variation, the hospitals were divided into quartiles based on the percentage of cases with normal coronary arteries, with quartile 1 having a rate of normal coronary arteries of 11% and quartile 4 having a rate of 30%. Patients in quartile 1 were more likely to undergo noninvasive testing, but no consistent trends were noted across quartiles in patient demographics, cardiovascular risk factors, Framingham scores, or hospital characteristics. This work by Bradley et al. (4) is important for several reasons. First, it suggests a higher referral threshold for coronary angiography within the VA.
Given the possibility of less direct financial incentives for testing in an integrated healthcare delivery system, this finding may have implications for Accountable Care Organizations that will gain traction in the coming years. Second, their observation of 10-fold variation in hospital rates of normal coronary arteries is important. Despite finding an overall rate of normal coronary arteries that was almost one-half of what was reported from the CathPCI Registry, this inconsistency implies an ongoing need to improve patient selection across institutions and reminds us that factors beyond financial incentives are playing a role. Third, this report raises a real concern regarding studies that compare rates of normal coronary arteries across healthcare systems, one that many VA cardiologists will immediately recognize. Given a higher burden of baseline disease in the VA population, a poor decision to perform coronary angiography in a veteran (e.g., asymptomatic and low-risk stress test) may be statistically more likely to yield obstructive disease than an appropriate decision in other settings. As even 10% of patients with an acute coronary syndrome might have nonobstructive disease (5), we may be right, but for the wrong reasons, or wrong for the right reasons.

Thus, it remains unclear as to what we (as a clinical community) are to do collectively with these studies of rates of normal coronary arteries (and the others that may potentially follow). Yet, the questions that they raise are potentially enormous. To what extent do high rates of normal coronary arteries indicate poor quality or suggest that we are performing too many procedures? Do we need to become more adept at risk stratification or do we need more or better noninvasive testing? How are financial incentives driving these decisions, and what other factors, such as medico-legal concerns, are playing a role in patient selection? And finally, what is a reasonable rate of normal coronary arteries that should be expected for cardiologists, realizing that 0% is neither possible nor desirable? Of course, many of these questions deal with the overall quality of current clinical assessments and noninvasive testing. These issues were highlighted over 3 decades ago in the seminal work of Diamond and Forrester (6) with their application of Bayes' theorem to coronary angiography. Results of any clinical finding or diagnostic test must be placed into the context of a patient, as their interpretation inherently depends on the pretest probability of disease. Diamond and Forrester (6) demonstrated that the probability of coronary artery disease may be obtained in large part through assessment of the patient's age, sex, and symptoms. Despite significant advancements in noninvasive tests since that time, it is disappointing that these tests only marginally improve the diagnostic yield of coronary angiography over these clinical factors (7). Despite the clear need to improve our decision-making process for coronary angiography, it also is important to acknowledge that some elective procedures will undoubtedly result in the finding of normal coronary arteries. So when is it right for us to be wrong? Is the rate of normal coronary arteries found in the CathPCI Registry of 39% too high, or perhaps the rate of 21% in the VA population too low?
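To make the Bayesian point concrete, here is a small worked example of our own; the test characteristics and pretest probabilities below are invented for illustration and are not taken from the cited studies.

```python
def post_test_probability(pretest, sensitivity, specificity):
    """Post-test probability of disease after a positive test (Bayes' theorem)."""
    p_pos_given_disease = sensitivity
    p_pos_given_no_disease = 1.0 - specificity
    numerator = pretest * p_pos_given_disease
    return numerator / (numerator + (1.0 - pretest) * p_pos_given_no_disease)

# Assumed, purely illustrative stress-test characteristics:
sens, spec = 0.85, 0.75
for pretest in (0.10, 0.30, 0.60):
    print(f"pretest {pretest:.0%} -> post-test "
          f"{post_test_probability(pretest, sens, spec):.0%}")
# With these numbers: 10% -> ~27%, 30% -> ~59%, 60% -> ~84%.
```

The same positive test leaves a low-pretest patient far more likely than not to have normal coronary arteries at angiography, which is the arithmetic behind the selection problem discussed above.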
Too high a rate suggests waste and the danger of unnecessary procedures, whereas too low a rate implies we may be causing harm by missing patients appropriately referred for this diagnostic test. Although the study by Ko et al. (3) suggests this latter concern may be minimized, it is clear (even from that study) that we must be able to accept a few false-positive test results as part of the process. In some circumstances, there may be great value to a negative study that identifies normal coronary arteries, given the concerns many patients have with the possibility of cardiac conditions as a cause of their symptoms. In fact, the value of any diagnostic test lies not only in its ability to "rule in" disease, but in how it helps clinicians to "rule out" disease as well, because that also strongly influences subsequent management. We believe many forces will push this debate even further in the coming years. Rates of normal coronary angiographies have been discussed as a performance measure for over a decade now, but there has been little pursuit of it (8). However, the emerging data highlighted by Patel et al. (2), Ko et al. (3), and Bradley et al. (4) suggest that a greater interest will and should be placed on risk stratification and patient selection in the coming years. This is further supported by the recent publication of appropriate use criteria (AUC) for coronary angiography. Using rates of normal coronary arteries may supplement the AUC to fully inform us on how well an entire system is doing in this regard and ensure validity in quality comparisons across hospitals. Bradley et al. (4) eloquently raise these points in their article, but they also warn us about potential limitations with its use in isolation. For example, before rates of normal coronary arteries become a performance measure, we clearly need more empirical work, as the ranking of hospitals in the VA Healthcare System was highly sensitive to how "normal" was defined. In Figure 3 of the article by Bradley et al. (4), the top hospital ranked by its rate of normal coronary arteries was approximately 50th by its rate of nonobstructive coronary artery disease. Although the extent to which the use of rankings and performance measurement of rates of normal coronary arteries may influence clinicians is unclear, it may be consequential. A prominent example of the real-world implications of these decisions was recently illustrated. In a $4 million settlement by a physician and healthcare system for allegedly performing unnecessary coronary angiography, it was purported that 75% of patients had "no significant heart blockages" (9). This case is obviously complex and raised multiple issues, including the improper reading of nuclear stress tests prior to coronary angiography. Yet, it is telling that this case is fundamentally different from prior reports of inappropriate coronary stenting or cardiac surgery, because it involves the claim that a diagnostic test, and not a therapeutic procedure, was overused. This also raises the natural question as to whether this logic may be extended to other diagnostic tests, such as measures of normal rates of echocardiograms, computed tomography scans, ultrasounds, and even some expensive laboratory tests. Improving our understanding of all these issues surrounding rates of normal coronary arteries will be fundamental as we move forward in an era of AUC, quality improvement initiatives, performance measures, and escalating costs.
This must be done carefully, with recognition that large differences will exist across the populations that we serve. This should influence how these data are collected, interpreted, and reported. As clinicians, we certainly need to become better at how we utilize expensive and sometimes risky tests like coronary angiography. This desire for improvement, however, must be balanced with the knowledge that there remains an important role for judgment in such decisions. That is, we need to hold on to the right to be wrong.
Physics Cahiers No. 1

We begin herewith the editing of physics notes taken in the course of Journal Club seminars at INFN-LNF in 1996. The activity consists of informal talks about work in progress and/or reviews of (more or less) recent physics results of interest to our laboratory. In the section titles the names of the speakers appear, together with the topics discussed in the seminars. We plan to publish these notes twice a year.

1 S. Bellucci: In-medium $\bar K N$ scattering and chiral lagrangians, or the disappearance of the $\Lambda(1405)$ in heavy kaonic atoms*

* Also presented at the 2nd DEAR Collaboration Meeting, LNF, 1-2 April 1996.

My message here is twofold:

• there is a clean prediction of the chiral symmetry effective lagrangian, stating that the $K^-p$ scattering length in a medium depends strongly on the nuclear density, so that its real part changes sign already at 1/8 of the normal nuclear density [1];

• a non-negative kaon scattering length on an isolated proton (kaonic hydrogen) can be obtained only for values of the coupling constants among the 6 coupled meson-baryon channels which are not compatible with SU(3) flavour symmetry. These constants are calculated in [2].

In what follows, I focus on the first point, which is a signal to the experimenters [3] of the importance of a measurement for atoms heavier than hydrogen and deuterium. For the second point, see also [4]. A recent study [1] obtains the following interesting results. Nuclear matter modifies very strongly the low-energy $K^-p$ interaction. The attractive forces that produce the $\Lambda(1405)$ as a bound state are reduced by Pauli blocking. In medium, the $\Lambda(1405)$ moves above the $K^-p$ threshold at one-eighth the normal nuclear matter density. The $K^-p$ scattering length depends strongly on the density: its real part changes sign at one-eighth the normal nuclear matter density. Correspondingly, the optical potential for kaonic atoms has an unconventional r-behaviour. The presence of the $\Lambda(1405)$ bound state just 27 MeV below the $K^-p$ threshold invalidates the low-density theorem (stating that the optical potential goes linearly with the density) at unusually low density values. The microscopic understanding of the above features is based on low-energy QCD. A dynamical model of the $\Lambda(1405)$ structure as a bound state of $\bar K$ and $N$ in the I=0 channel (and a resonance in the $\Sigma\pi$ channel), based on the iteration of a pseudo-potential to infinite order in a Lippmann-Schwinger equation and describing successfully the data in the S=-1 strangeness sector, is modulated to respect the SU(3) chiral symmetry and to have, in the Born approximation and up to terms of order $O(q^2)$ in the meson momentum, the same s-wave scattering length as the effective chiral lagrangian describing the low-energy meson-baryon interaction [2]. The successes of the theory in describing the s-wave coupled-channel dynamics of the $\bar K N$ and π-hyperon systems (i.e. the Λ binding energy, its width, and all available low-energy data of the $\bar K N$, $\Sigma\pi$, $\Lambda\pi$ systems) persist in describing how nuclear matter affects the formation of the $\bar K N$ bound state. At a small density value, i.e. far out in the nuclear surface where the nuclear density distribution has some overlap with the atomic $K^-$ wave function, the bound state disappears, as the Pauli exclusion principle yields enough repulsion energy to compensate the binding energy $E_\Lambda = -27$ MeV.
The $K^-p$ amplitude varies rapidly with the density near threshold, hence the effective scattering length in nuclear matter has a strong dependence on the density. The corrections to the $K^-p$ amplitude due to the Fermi motion and the nucleon binding are also evaluated. They turn out to be much less important (and mutually opposing) effects, in comparison with the Pauli blocking of intermediate states [1].

2 M. Greco: QCD jets at high $p_T$

In a recent CDF paper [5] the comparison between the data (starting from a rather low $p_T \ge 15$ GeV and up to a maximum of 400 GeV) and the complete $O(\alpha_s^3)$ calculation [6,7] of the inclusive jet cross-section $d\sigma^{\rm jet}/d^3p_T$, at large $p_T$ with the rapidity ranging in the interval $0.1 \le |\eta| \le 0.7$, is carried out. The agreement is quite good over about ten orders of magnitude. However, it appears that a small discrepancy is present at large $p_T$, i.e. the data suggest a departure from the $O(\alpha_s^3)$ prediction for $E_T \ge 200$-250 GeV. One must bear in mind that the D0 collaboration reported very recently new data showing no deviation effect in the same range. Due to the large errors in the high $p_T$ region, the latter are in agreement with both the QCD calculation and the CDF data. The discrepancy has been indicated as a possible signal for quark compositeness. A composite scale $\Lambda = 1$-2 TeV has been estimated by means of an effective four-fermion interaction $\frac{1}{\Lambda^2}(\bar q \Gamma q)^2$. However, before drawing any definite conclusion, it is important to evaluate carefully the theoretical uncertainties in the QCD prediction, particularly at large $E_T$. The data in the region of discrepancy are in the large-x region ($x \ge 0.6$), where the structure functions, in particular the gluon one, are not very well measured. Indeed, the CTEQ collaboration has tried to change the gluon structure function and checked the corresponding effect [8]. Generally, the theoretical uncertainties in the complete $O(\alpha_s^3)$ calculation due to the changes in the scales and the structure functions are of order 20-30%. In the estimate of the full theoretical prediction one needs, however, to take into account also the corrections coming from higher orders, which could become relevant near the boundary of the phase space ($p_T \simeq \sqrt{s}/2$). Indeed, in the large-x region the QCD expansion involves terms of the form $[\alpha_s \ln(1-x)]^n$, rather than just powers of $\alpha_s$. Hence, when x is close to 1 and correspondingly $\alpha_s \ln(1-x) = O(1)$, one needs to resum all these terms, i.e. a finite-order calculation is not enough. The Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) equation gets modified [9]. All those terms which diverge for $x \to 1$ are related to soft and collinear gluon radiation. In Drell-Yan processes this correction has been calculated and the effect at $x \to 1$ is important, but it has not been measured so far. The CDF data would eventually yield the first opportunity to measure this effect. The claim [10] is that the corrections increase the value of the inclusive jet cross-section at large $p_T$, i.e. they go in the right direction. If the corrections turn out to be large, then one needs also to correct for $x \to 1$ the structure-function code used in analyzing the DIS data. Of course, before making any claim about possible evidence for the preonic structure of the quarks, one must include the effect of these higher-order corrections.

3 G. Isidori: The problem of $R_b$: a phenomenological update

1. Electroweak precision tests performed at LEP and SLC have confirmed the Standard Model (SM) predictions with great accuracy.
3 G. Isidori: The problem of R_b: a phenomenological update

1. Electroweak precision tests performed at LEP and SLC have confirmed the Standard Model (SM) predictions with great accuracy. Among the several observables which have been measured, only the ratio R_b = Γ(Z → bb̄)/Γ(Z → hadrons) shows a departure from the SM prediction larger than three standard deviations. In particular, the most recent determination of R_b obtained by combining the four LEP experiments, R_b^exp = 0.2219 ± 0.0017 [11], is 3.5σ away from the SM prediction R_b^SM = 0.2157 ± 0.0001.

2. Within the SM, due to non-decoupling effects induced by the top quark, the Z → bb̄ vertex receives non-universal corrections [12,13]. The triangular diagrams Z → tt̄ → q_d q̄_d (W-exchange) and Z → W⁺W⁻ → q_d q̄_d (t-exchange) are completely negligible for q_d = d (due to small CKM matrix elements), whereas they are relevant for q_d = b. Interestingly, these effects are proportional to the Yukawa coupling of the top (this is the reason why they do not decouple as m_t → ∞), i.e. they are related to the symmetry-breaking sector of the Model. The leading correction induced by top loops can be written as a modification of the universal down-type coupling constants of the b quark with the Z boson, with a leading term of order m_t²/(16π²v²). Besides this leading term, all the corrections have been calculated (up to two loops in many cases [13,14]) and the corresponding uncertainties are negligible (δR_b^SM ≃ 10⁻⁴ is a very conservative estimate).

3. Assuming that the discrepancy of R_b is generated by non-SM physics, R_b^exp = R_b^SM (1 + δ_non-SM), then also the determination of α_s(M_Z) performed at LEP via the ratio R_h = Γ(Z → hadrons)/Γ(Z → μ⁺μ⁻) has to be modified. The value of α_s(M_Z) extracted from R_h after introducing δ_non-SM is lower than the uncorrected value and is in better agreement with the low-energy determinations (from ∼2σ above the value it shifts to ∼1σ below [15]). Though not very significant from the statistical point of view, this result reinforces the hypothesis of new physics in Γ(Z → bb̄). On the other hand, playing the same game with R_c, whose experimental value is about 2σ below the SM value, the resulting value of α_s is completely inconsistent with the low-energy determinations.
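As a quick numerical cross-check of the significance quoted in point 1 (a sketch added for this edition; the small difference with respect to the quoted 3.5σ presumably reflects rounding of the inputs):

```python
import math

# R_b: LEP combination vs. Standard Model prediction (point 1 above)
Rb_exp, err_exp = 0.2219, 0.0017
Rb_sm,  err_sm  = 0.2157, 0.0001

pull = (Rb_exp - Rb_sm) / math.hypot(err_exp, err_sm)
print(f"R_b deviates from the SM by {pull:.1f} standard deviations")  # ~3.6
```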
4. New-physics sources in the process Z → bb̄ can generally be divided in two classes: loop and tree-level effects. Let us start by analyzing the former. The contribution generated by a fermion F, with the relative Yukawa scalar field A, to the triangular diagram Z → bb̄ can be calculated in a model-independent way (imposing only the conservation of charge and weak isospin) [16]. The sign of the correction thus obtained depends crucially on the weak-isospin assignment of F and can be applied to several interesting cases:
• Two Higgs doublets. In this case F = t and A = H± (the new physical charged-Higgs field). Like in the SM, the correction is negative and there is no possibility to improve the agreement with the data.
• Fourth generation. In this case F = t′ (the new up-type quark) and A = φ± (the SM unphysical charged-Higgs field, which in the unitary gauge appears as the W longitudinal degree of freedom). Also in this case the correction is negative.
• MSSM. In this case there are three separate contributions: top and charged Higgs, charginos and stop, neutralinos and sbottom. The first one has a negative sign (as in the two-Higgs-doublet case), whereas the second and the third can have a positive sign. For high (∼1 TeV) and almost degenerate values of the SUSY particle masses the three contributions cancel each other. Only for light stop and charginos (with small tan β) or light sbottom and neutralinos (with very large tan β) is there a chance to improve the agreement with the data. However, a recent correlated analysis of the MSSM parameters (including new LEP data, b → sγ and Tevatron results) shows that it is impossible to decrease the discrepancy to less than 2σ [17]. Furthermore, if this were the case, then light SUSY particles should be in the LEP200 range.

For what concerns tree-level effects, it has recently been shown that a leptophobic Z′ [18], universally coupled to up-type and down-type quarks, not only can generate the right correction to R_b but can also improve the agreement with the CDF data on the inclusive jet cross-section at high p_T (p_T ≥ 200 GeV) [5]. The mass of the Z′ is estimated to be in the TeV range.

5. To conclude, we can say that there is no clear solution to the problem of R_b yet. The possibility of new heavy fermions with unconventional weak-isospin assignments or the leptophobic-Z′ hypothesis point in the right direction but are still ad hoc solutions. On the other hand, the possibility of a statistical fluctuation in the experimental data is far from being excluded.

4 D. Babusci: DHG sum rule and the nucleon spin polarizability at LEGS

Energy-weighted integrals of the difference in helicity-dependent photoproduction cross sections (Δσ = σ_1/2 − σ_3/2) provide information on [19]:
• the spin-dependent part of the asymptotic forward amplitude, through the DHG sum rule;
• the nucleon spin polarizability γ.

There are no direct measurements of σ_1/2 and σ_3/2, for either the proton or the neutron. Estimates from current π-photoproduction multipole analyses [20], particularly for the proton-neutron difference, are in good agreement with the relativistic 1-loop (+ Δ-resonance) χPT calculations [21] for γ, but predict large deviations from the DHG sum rule.

[Table: Integral / Multipole Estimate / Theory]

The following two possible interpretations have been proposed:
1. both the higher-order χPT corrections to γ are large and the existing multipoles are wrong;
2. modifications to the DHG sum rule are required to fully describe the isospin structure of the nucleon.

The helicity-dependent photoreaction amplitudes, for both the proton and the neutron, will be measured at LEGS from the pion threshold to 470 MeV. Almost 90% of the γ integral will be covered by this set of data, providing a reasonable comparison with the χPT predictions. These data will also cover about 2/3 of the DHG integral. In these double-polarization experiments, circularly polarized photons from LEGS will be used with SPHICE, a new frozen-spin target consisting of H and D in the solid phase. Reaction channels will be identified in SASY, a large detector array consisting of wire chambers, scintillators and Cerenkov counters with a global solid-angle coverage of about 80% of 4π.
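The notes refer to the DHG sum rule and to the spin polarizability without displaying them. In the conventions most commonly used (the overall signs depend on how Δσ is defined), they read

\[
\int_{\nu_0}^{\infty} \frac{\Delta\sigma(\nu)}{\nu}\, d\nu \;=\; -\,\frac{2\pi^2 \alpha}{m^2}\,\kappa^2,
\qquad
\gamma \;=\; \frac{1}{4\pi^2} \int_{\nu_0}^{\infty} \frac{\Delta\sigma(\nu)}{\nu^3}\, d\nu,
\]

where Δσ = σ_1/2 − σ_3/2 as above, ν₀ is the pion photoproduction threshold, κ is the nucleon anomalous magnetic moment, and m is the nucleon mass.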
5 C. Forti: Underground muons, a tool to study the cosmic ray composition and the properties of high energy interactions

We discuss the importance of the detection of underground muons for the study of the cosmic ray composition and spectra and of the properties of very high energy hadronic interactions. In particular, we show the application of the Monte Carlo codes HEMAS and HEMAS-DPM to the calculation of the muon multiplicity, topology and decorrelation, of the properties of multi-muon clusters, and of the flux of muons from the decay of charmed mesons (prompt muons).

The study of the chemical composition of cosmic rays in the energy region around the knee of the spectrum is of fundamental importance for the understanding of cosmic ray acceleration and propagation processes. The properties of high energy (TeV) muons detected deep underground are strongly correlated with the mass and energy of the primary cosmic rays which originated the particle shower. The most important features of underground muon events are:
• the muon bundle multiplicity, that is, the number of (almost parallel) muons originated by the same primary cosmic ray;
• the decoherence, that is, the relative distances between all pairs of muons which can be formed within a muon bundle;
• the decorrelation curve, which is the relative angle between all muon pairs as a function of their relative spatial separation;
• the number of muon clusters within the muon bundle.

We have shown that the distribution of the bundle multiplicity (called the multiple muon rate) is clearly sensitive to the percentage of heavy nuclear species in cosmic rays: composition models with more heavy nuclei predict more events with high bundle multiplicity than lighter models. The decoherence curve is also sensitive to the chemical composition, but its peculiar characteristic is to depend on the transverse momentum (p_T) distribution of hadrons (mainly pions) produced in the cascade. The measurement of decoherence should therefore allow one to test the hypotheses about the p_T distribution adopted in the model for very high energy (1-10⁵ TeV Lab) hadron-air interactions. The study of decorrelation and of muon clusters are new tools recently proposed to enhance the efficacy of underground measurements. In particular, the number of muon clusters within a muon bundle (with multiplicity larger than 8 at the Gran Sasso Laboratory) is shown to be sensitive both to the primary composition and to the hadronic interaction properties.

A great effort is being devoted to the comparison of different Monte Carlo simulations adopting different hadronic interaction models. Indeed, the evaluation of the systematic uncertainties in these complex calculations is one of the most difficult tasks to be accomplished. For example, a new Monte Carlo code (HEMAS-DPM [22]) has been realised: it is based on the Dual Parton Model (DPMJET-II [23]), interfaced to the HEMAS shower code (containing a phenomenological interaction model [24]). The HEMAS-DPM Monte Carlo allows one to simulate the interaction of hadrons and nuclei with the air target; the production of charmed particles is also implemented. However, calculations performed in [25] indicate that the detection of high energy muons generated in the decay of charmed particles (the so-called prompt muons) could be a hard experimental task, because the signal (prompt muons) to noise (ordinary muons) ratio is less than 1% for typical (> 1 TeV) underground experiments. A challenging control of systematics is thus required.
6 R. Baldini: Question marks in the nucleon time-like form factors

Question marks related to the measurements of the nucleon form factors are reported (for a complete discussion see [26] and references therein), namely:
• the factor of 2 between the "asymptotic" values of the space-like and time-like proton magnetic form factor;
• the nucleon form factor being mainly imaginary at 9 GeV², according to the FENICE results, and a possible explanation within PQCD;
• baryonium rides again (according to the e⁺e⁻ total multihadronic cross section near the N̄N threshold and according to the proton form factor at threshold), with a short review of the old baryonium phenomenology;
• the neutron time-like form factors: expectations and experimental results.

Dispersion relations on the log of the modulus of the form factors are discussed, in order to obtain the nucleon form factors in the unphysical region.

7 G. Pancheri: Eikonalized minijets cross-section in photon collisions*

*Work in collaboration with A. Grau and Y.N. Srivastava.

A model for the parton distributions of hadrons in impact-parameter space has been constructed using soft-gluon summation. This model incorporates the salient features of distributions obtained from the intrinsic transverse momentum behaviour of hadrons. Under the assumption that the intrinsic behaviour is dominated by soft gluon emission stimulated by the scattering process, the b-spectrum becomes softer and softer as the scattering energy increases. In minijet models for the inclusive cross-sections, this will counter the increase from σ_jet.
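For orientation, a minimal form of the eikonalized minijet cross-section underlying this summary is sketched below (a generic form; the normalization and the precise decomposition used by the authors may differ):

\[
\sigma_{\mathrm{inel}}(s) \;=\; \int d^2 b\,\bigl[\,1 - e^{-n(b,s)}\,\bigr],
\qquad
n(b,s) \;=\; n_{\mathrm{soft}}(b,s) \;+\; A(b,s)\,\sigma_{\mathrm{jet}}(s),
\]

where A(b,s) is the overlap function built from the impact-parameter distributions of the partons. An energy-dependent b-spectrum modifies A(b,s), so that the exponentiation damps the fast growth driven by σ_jet, which is the mechanism alluded to in the last sentence.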
Passive Lower Limb Exoskeleton for Kneeling and Postural Transition Assistance With Expanded Support Polygon

Robotic exoskeletons that assist in stand-to-kneel and kneel-to-stand (STK-KTS) movements and static kneeling postures are in great demand in the nursing field. These movements involve continuous adjustment of the center of gravity without a sufficient support polygon, which increases the required joint effort of the ankle and knee. This study proposes a novel passive lower limb exoskeleton to support the movement. The exoskeleton was attached to the right leg and comprised a gas spring. The design followed an assistive strategy of the expanded support polygon. During the STK-KTS, the gas spring provided extra contact with the ground, thereby expanding the support polygon to increase motion stability, and propped the knee to provide torque to the leg. The effectiveness of the gas spring was analyzed using a Lagrange-dynamics-based simulation. Moreover, it was confirmed in real-world experiments that the support polygon was expanded by the proposed exoskeleton. Further, experiments with seven healthy subjects showed that the exoskeleton reduced the time-integrated myoelectric potentials of the legs during STK-KTS (13.6%) and in the static posture (37.9%). These results imply that the proposed exoskeleton has the potential to reduce physical loads and provide a comfortable working environment for nursing workers.

I. INTRODUCTION

Physical workloads caused by forced static postures and repetitive posture transitions lead to musculoskeletal disorders in caregivers [1]. Dressing assistance is a common task for caregivers and forces them to bend their waist or maintain a kneeling posture [2], [3]. Several technologies have been developed to assist elderly people or physically challenged patients in dressing/undressing their clothes without the assistance of caregivers. In particular, robots have been developed that hold a shirt and cooperate with a patient sitting on a chair or a bed to complete the dressing/undressing task [4], [5], [6]. However, the assistance of these robots is limited to dressing/undressing the clothes of the upper body. Dressing/undressing lower-limb clothes and shoes is still difficult for robots because the task requires more interaction and contact with the patients. Therefore, dressing/undressing lower-limb clothes and shoes remains a job that needs caregivers' assistance.

In this task, caregivers manipulate their hands and the clothes near the floor and transfer from standing to working posture repeatedly, which leads to a high burden on their lower limbs. Therefore, a device that supports the working posture performed near the floor and the transition movement is of great importance to caregivers. In addition, wearable-type assistance should be helpful for caregivers because each caregiver is responsible for dressing multiple patients and frequently moves between patient rooms.
Exoskeletons are wearable assistance devices and have been accepted in some working fields, such as industry, nursing care, and rehabilitation [7], [8], [9]. Generally, exoskeletons are designed as articulated robots with links attached to the body of the wearer and are divided into two categories depending on the assistive components: active and passive [10], [11]. Active exoskeletons use actuators and power supplies to assist the wearer and require high-level control systems, while passive exoskeletons use only passive parts, such as springs, dampers, or brakes [12], [13]. Although active exoskeletons for rehabilitating injured or disabled people have been extensively researched, their application to industrial or nursing care services is still at an experimental stage owing to their high cost, sophisticated structures, and battery capacity [7], [14]. On the other hand, passive exoskeletons have been utilized in some fields due to their low cost, simplicity of use, and absence of an external power supply [10], [15], [16]. Therefore, we focused on a passive exoskeleton for caregivers who assist in dressing/undressing tasks.

Passive exoskeletons that support prolonged static postures have been developed in the last several years. A chairless chair was designed to assist workers who spend extended periods in a standing posture [15]. Similarly, Kawahira et al. [16] developed Archelis, an exoskeleton to support the standing position of doctors during surgery. These exoskeletons were reported to reduce the muscle load on the lower limb. However, they cannot assist the wearers with performing manual tasks near the floor, and they do not target the assistance of postural transition.

This study considers four possible postures taken by caregivers performing the task and determines which posture our exoskeleton should assist. In the scenario of dressing/undressing a patient's clothes, there are four possible postures: only the feet on the ground, kneeling on one knee, kneeling on both knees, and the hip on the ground (see Fig. 1). These postures were compared from three aspects: ease of transition, range of reach, and stability; Table I summarizes which postures have advantages in which respect. First, postures that require fewer motion processes to complete the postural transition from a standing posture are preferable; kneeling on one knee and only the feet on the ground need fewer motion processes than the other postures. Second, a larger range of reach is helpful to complete dressing/undressing of both upper-body and lower-limb clothes; the range of reach in the kneeling postures is large. Finally, choosing a stable posture relaxes the design requirements for stabilizing the working posture. Fig. 1 shows the support polygons and the vertical projection of the center of gravity (CoG). A support polygon is defined as the area enclosed by all grounding points. The closer the vertical projection of the CoG is to a side of the support polygon, the more unstable the posture is. The posture with only the feet on the ground is the least stable, whilst the other postures keep high stability. Therefore, based on these three aspects, this article focuses on kneeling on one knee. Moreover, an exoskeleton assisting the kneeling posture has the potential to expand its application from the nursing field to industry. For example, kneeling on one knee has been reported as a common posture for floor layers and has been targeted for assistance using active components [17].
Chen et al. [17] developed an active exoskeleton capable of assisting the wearer with a static kneeling posture and dynamic squatting movements by directly providing torque to the joints of the wearer. However, because the dynamic movements involve careful adjustment of the CoG inside the support polygon, assisting only the torques on the joints of the wearer may be insufficient for a successful transfer. Hence, a novel assistance strategy that increases motion stability during dynamic transitions is required.

The exoskeleton should fulfill the following design policies: a passive-type exoskeleton, to be accepted in nursing fields; support of the posture of kneeling on one knee, to relieve the burden of performing tasks near the floor; and assistance of the postural transitions of stand-to-kneel (STK) and kneel-to-stand (KTS) while enhancing motion stability. We developed a passive exoskeleton to support the static kneeling posture and assist the dynamic STK-KTS movement, as shown in Fig. 2. The support for the static posture is achieved by constraining the position of the thigh and relieving the effort of the knee joint. To assist the dynamic movement, a strategy that expands the support polygon of the wearer with a passive elastic brace is proposed (see Fig. 2).

The rest of this article is organized as follows. A gas spring is selected as the elastic brace through a Lagrange-dynamics-based simulation conducted in Section II. The detailed mechanism is described in Section III. Further, the effectiveness of the exoskeleton is evaluated with myoelectric measurements in Section IV. Section V provides a discussion. Finally, Section VI concludes this article. The contributions of this study are summarized as follows.
1) A passive exoskeleton was designed to assist manual tasks in a kneeling posture. During the STK-KTS movement, an elastic brace attached to the thigh expands the support polygon of the wearer and enhances stability.
2) The effectiveness of the elastic brace was analyzed using a Lagrange-dynamics-based simulation. The results showed that a gas spring could be expected to efficiently assist the STK-KTS movements.
3) Experiments were performed with seven subjects for further validation. Myoelectricities of the lower-limb muscles were measured and showed satisfactory results.

II. STK-KTS ANALYSIS AND ASSISTANCE REQUIREMENTS

This section analyzes the STK-KTS movement based on both real-world measurements and Lagrange-dynamics-based simulations. These analyses provided the strategy of the expanded support polygon, the assistance requirements, and the passive component selection.

A. STK-KTS Procedure and CoG Movement

During the STK-KTS movement, the vertical projection of the CoG moves around a side of the support polygon and renders maintaining balance a challenge. In this study, the STK-KTS movement is defined as a sequence composed of seven phases: (i) standing, (ii) stepping forward with the left leg, (iii) shifting the CoG forward, (iv) maintaining the static kneeling posture, (v) lifting the CoG, (vi) shifting the CoG backward, and (vii) returning to the standing posture, as shown in Fig. 3. Since these motions are asymmetric, we assumed that the right knee touches the ground in this study. The time interval needed to perform the STK-KTS movement was investigated with six healthy subjects. They demonstrated the STK-KTS movement at their own comfortable speeds, and all subjects performed STK-KTS between 3.5 and 4.0 s (average 3.69 ± 0.16 s). Hence, healthy people can perform STK-KTS within 4.0 s, and 4.0 s was chosen as the time interval for simplicity in this article.

To confirm the difficulty of the motion, the vertical projection of the CoG and the range of the support polygon were measured with a motion capture system. A subject wore IMU-type motion capture markers (Xsens) [18] and performed STK-KTS without any assistance. Fig. 3 shows the vertical projection of the CoG and the range of the support polygon during STK-KTS. It was confirmed that the CoG shifted up to the side of the support polygon from motion (ii) to (iii). Although the CoG was inside the support polygon during motion (iv), it approached the side again during motion (v). Hence, it was estimated that motions (iii) and (v) were difficult to perform.

To enhance motion stability, this study considered two possible strategies: manipulating the CoG directly and expanding the support polygon. Since the position of the CoG is highly dependent on the posture, even a slight mistake in the manipulation of the CoG can cause the wearer to fall. In contrast, a slightly insufficient extension of the support polygon does not directly lead to a fall. Therefore, this study adopted the strategy of the expanded support polygon. The support polygon was expanded using an elastic brace (see Fig. 2) attached in parallel to one thigh, with the tip of the brace adding an extra grounding point.

To estimate the effectiveness of the strategy of the expanded support polygon, we performed an additional analysis by calculating a margin for stability. This margin was obtained from the distance between the vertical projection of the CoG and the closest side of the support polygon. Fig. 4(a) and (b) shows the margin without and with the brace, respectively. The blue line represents the expanded support polygon with the brace, and the additional vertex was defined as the vertical projection of the right knee. Fig. 4(c) and (d) shows the margin during motions (iii) and (v), respectively. The margin without any assistance was continuously less than 5 cm, while with the brace it was more than 10 cm. Hence, these results showed that the expanded support polygon can increase the margin for stability during the STK-KTS movement.
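As a minimal computational sketch of this margin (the coordinates below are illustrative placeholders, not measured data), the margin is the distance from the CoG's vertical projection to the nearest side of the support polygon:

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to segment ab (2-D arrays, metres)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def stability_margin(cog_xy, polygon_xy):
    """Margin = distance from the CoG's vertical projection to the nearest
    side of the support polygon (assumes the projection lies inside it)."""
    pts = [np.asarray(v, float) for v in polygon_xy]
    return min(point_segment_distance(np.asarray(cog_xy, float),
                                      pts[i], pts[(i + 1) % len(pts)])
               for i in range(len(pts)))

# Hypothetical grounding points: foot, knee, and brace-tip contacts.
polygon = [(0.00, 0.00), (0.30, 0.05), (0.35, 0.40), (0.05, 0.45)]
print(f"margin = {stability_margin((0.18, 0.22), polygon):.3f} m")
```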
B. Dynamic Analysis of STK-KTS

To analyze the difficulty of STK-KTS from the viewpoint of the required joint efforts of the lower limb, a dynamic simulation using the Euler-Lagrange formalism was conducted [19]. A dynamic human model was employed to perform the STK-KTS movement, as shown in Fig. 5. The human body was composed of three parts: shin, thigh, and HAT (head, arms, and torso). We assumed that the shin included the foot, and the toe joint was ignored. This simulation focused only on the right side of the body, to which the elastic brace was attached. Therefore, the human model was described as a triple inverted pendulum [20]. The angles of the ankle, knee, and hip joints are represented as θ_a, θ_k, and θ_h, respectively [see Fig. 5(a)]. The lengths, masses, and moments of inertia were obtained from Leva's report [21]. We assumed that the wearer distributed the body weight equally on the right and left sides to maintain dynamic stability. The weight of the human model and that of its right side were assumed to be 60 and 30 kg, respectively. The Euler-Lagrange equation of motion for the human model is

$M(\theta)\ddot{\theta} + C(\theta, \dot{\theta})\dot{\theta} + G(\theta) = U,$

where $\theta = [\theta_a\ \theta_k\ \theta_h]^T$; M, C, and G are the mass matrix, the Coriolis matrix, and the gravitational vector, respectively; and $U = [\tau_a\ \tau_k\ \tau_h]^T$ is the vector of torques required at the joints of the human model. Fig. 5(c) shows the model performing the STK-KTS transition. The commands for the joint angles θ were configured based on the trajectory of a human performing STK-KTS measured using a motion capture system.

We assumed that the human model performs the postural transition by controlling the angles of the three joints θ. This study adopted a simplified proportional-differential (PD) feedback controller, while there are more sophisticated methods for simulating human models, such as linear-quadratic regulator design [22] and fictitious gain [23]. This study considered that a simplified PD feedback controller (K_p = 4000, K_d = 300 [20]) applied to each joint can efficiently demonstrate the wearer's role and lead to intuitive insights regarding the performance of the STK-KTS. Fig. 6(a) shows that the joint angles θ were controlled by the PD model; we consider that the PD model tracked the command sufficiently well.

Fig. 6(b) shows the torque required at the joints θ. The torques reached a peak (234 N·m) at 1.2 s, and the large output continued until 2.6 s. In addition, the ankle and knee joints required large plantar-flexion torque and knee-flexion torque, respectively. Thus, as expected in Section II-A, large efforts were required in motions (iii) and (v).
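The full triple-inverted-pendulum equations are lengthy, so the sketch below applies the same PD scheme, with the gains quoted above (K_p = 4000, K_d = 300), to a single-link stand-in; the link mass, length, and reference trajectory are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single rigid link pivoting at the ankle: a reduced stand-in for the
# triple-inverted-pendulum model. Parameters are illustrative assumptions.
m, l, g = 30.0, 0.9, 9.81          # mass (kg), CoM height (m), gravity
I = m * l**2                        # point-mass moment of inertia about pivot
Kp, Kd = 4000.0, 300.0              # PD gains quoted in the paper

def theta_ref(t):
    """Smooth kneel-and-return angle command (rad) over the 4-s transition."""
    return 0.4 * np.sin(np.pi * t / 4.0)

def dynamics(t, y):
    th, om = y
    tau = Kp * (theta_ref(t) - th) - Kd * om       # PD feedback torque
    alpha = (tau + m * g * l * np.sin(th)) / I     # inverted-pendulum gravity term
    return [om, alpha]

sol = solve_ivp(dynamics, (0.0, 4.0), [0.0, 0.0], max_step=1e-3)
err = np.max(np.abs(sol.y[0] - theta_ref(sol.t)))
print(f"max tracking error = {np.degrees(err):.2f} deg")
```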
C. Assistance Requirements and Simulation of Assistance

The assistance requirements for the STK-KTS movement were determined based on the results of the simulation. First, reducing the plantar-flexion torque of the ankle joint is the major requirement, due to its large output. Second, assistance for the knee-flexion torque is also necessary, because a large output was observed. Finally, the proposed exoskeleton should assist the lower limb from 1 to 3 s, because the joints required a large torque continuously during that period.

The ankle joint can receive an assistive torque for plantar flexion from the elastic brace when the brace pushes the kneecap. Hence, the characteristics of the elastic brace can significantly influence the effectiveness of the assistance. To determine a suitable assistive component, an additional dynamic simulation was conducted comparing various kinds of assistive braces. This simulation was conducted with four different elastic components: a coil spring (maximum force 120 N), a coil spring (max 180 N), a gas spring (max 120 N), and a gas spring (max 180 N). The spring constants k were 600, 900, 100, and 150 N/m, in that order. The compression length was assumed to be 20 cm for every spring, and the springs were compressed at a constant speed (20 cm/s) in this simulation [see Fig. 5(d)].

A gas spring is a kind of gas cylinder that comprises a piston rod and a cylinder filled with gas. The reaction force of gas springs is consistently larger than that of coil springs [see Fig. 5(d)]. In addition, gas springs can be compressed more smoothly than coil springs. Due to their smoothness, they are highly adaptable to ergonomics [24] and have been widely used in worker assistance [25], [26].

In this article, the maximum force of the springs was designed to be under 200 N. Since we assumed that the body weight is equally distributed on the right and left sides, concentrating more than half the body weight on the right knee would make the right foot leave the ground and significantly increase the fall risk. Further, actual users' weights will range from 40 to 80 kg, while this simulation uses a 60-kg human model. For wearers whose weight is 40 kg, the maximum force should be designed to be less than 200 N (half of 400 N). In this study, 180 N was selected as the maximum assistive force from the low-cost, high-performance gas spring lineup (the options for the maximum force are 240, 180, and 120 N).
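The force-compression curves of the four candidates can be sketched as follows; the coil springs follow F = kx from zero, while the gas springs are modelled here as a preload plus a small linear term (an assumption, but one consistent with the 150-180 N output range quoted later for the actual part).

```python
import numpy as np

x = np.linspace(0.0, 0.20, 5)                 # compression (m)

# Force-compression curves of the four candidate braces (Section II-C).
candidates = {
    "coil 120 N (k=600 N/m)": 600.0 * x,
    "coil 180 N (k=900 N/m)": 900.0 * x,
    "gas  120 N (k=100 N/m)": 100.0 + 100.0 * x,   # assumed 100 N preload
    "gas  180 N (k=150 N/m)": 150.0 + 150.0 * x,   # assumed 150 N preload
}

for name, F in candidates.items():
    print(name, " ".join(f"{f:6.1f}" for f in F), "N")
```

The table this prints makes the selection argument visible: the gas springs already push with 100-150 N at zero compression, which is why they cut the peak ankle torque early in the transition.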
To reduce the required effort of the human model's knee joint, valid methods of assistance were discussed. The effort arises because the torso and right thigh are always positioned in front of the right knee. Since the CoG moves around the side of the support polygon, the right knee needs to carefully adjust its joint angle to prevent the thigh and torso from moving too far forward. Although the left leg grounds in front of the other body parts and supports the body weight, instability remains due to the limited support polygon. Therefore, it is effective to apply a torque to the right knee so that the thigh and torso do not move too far forward. In this simulation, this assistance was defined as a torque that compensates for the moment of the thigh and torso. Such a torque can be generated by mechanically constraining the relative position of the right thigh to the elastic brace during STK-KTS.

The human model was modified into a human-brace model [see Fig. 5(b)]. The weight of the exoskeleton (4.0 kg) was added to the weights of the human model's shin and thigh. In this simulation, damping elements of the springs were not taken into account, for simplicity [26], [27]. The Euler-Lagrange equation of motion for the human-brace model is

$M(\theta)\ddot{\theta} + C(\theta, \dot{\theta})\dot{\theta} + G(\theta) = U + N,$

where $N = [n_a\ n_k\ 0]^T$ denotes the vector of assistive torques applied by the elastic brace to the human model's ankle and knee. The torque n_a is generated by the reaction force of the elastic brace and mainly assists the plantar-flexion torque; it is determined by the assistive force F_eb of the elastic brace [see Fig. 5(d)], the shin length l_shin [21], and the angle θ_eb−g between the elastic brace and the ground [see Fig. 5(b)]. The torque n_k is produced as a reaction torque that prevents the thigh from moving too far forward; it compensates τ_k−HAT and τ_k−t, the torques acting on the knee joint generated by the weights of the HAT and the thigh, respectively, which make the thigh move forward. The assistive torque N is applied from 1 to 3 s. Regarding the gas springs, we defined the rise and fall times as 0.1 s, during 1.0-1.1 and 2.9-3.0 s, respectively, to make the force plot smooth. The time-series data of n_a and n_k are shown in Fig. 5(d).

Fig. 6(c) shows the reduction ratio of effort over time (EoT) in each condition and joint. EoT is defined as the total effort required during a time interval [28] and is obtained by integrating the absolute torque over time,

$\mathrm{EoT} = \int_{t_1}^{t_2} |\tau|\, dt,$

where τ is the torque required at the ankle, knee, or hip joint. In this simulation, t_1 and t_2 are 0 and 4 s, respectively.
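Numerically, the EoT defined above is just a time integral of |τ|. A small sketch follows; the torque trace is synthetic, shaped only to echo the 234 N·m peak at 1.2 s, not actual simulation output.

```python
import numpy as np

def effort_over_time(t, tau):
    """EoT = integral of |tau| dt over the interval, cf. the definition above."""
    return np.trapz(np.abs(tau), t)

# Illustrative torque trace (not simulation data): a 234 N*m peak at 1.2 s.
t = np.linspace(0.0, 4.0, 401)
tau = 234.0 * np.exp(-((t - 1.2) / 0.6) ** 2)
print(f"EoT = {effort_over_time(t, tau):.1f} N*m*s")
```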
Fig. 6(d)-(f) shows the time-series data of the torques required at the human model's joints. Regarding the ankle joint, the required torque was reduced in every assistance condition [see Fig. 6(d)]. The gas springs succeeded in reducing the peak torque thanks to their large initial force compared with the coil springs, and the gas spring (180 N) reduced the EoT the most (22.8%). Therefore, the gas spring (180 N) was expected to be the most suitable elastic brace for reducing the required effort of the ankle joint. There were no gaps between the conditions at 2 s when the maximum force of the springs was the same; however, this is not a focal point, because the springs play no role in reducing the torque in the static kneeling posture (at 2 s). Moreover, given that coil springs require linear guides for actual usage, gas springs have no disadvantages in terms of mass, cost, durability, or mechanism complexity [24]. Therefore, we designed the exoskeleton to assist the STK-KTS movement using a gas spring.

The flexion torque of the knee joint was reduced owing to the assistance, while the differences between the four assistive conditions were not significant. Hence, it was mainly the torque n_k that reduced the knee-flexion torque. To achieve this assistance, we need to design an exoskeleton that can constrain the relative position of the thigh to the gas spring.

III. MECHANICAL DESIGN AND ASSISTIVE STRATEGY

In this section, we introduce a prototype of the proposed exoskeleton and evaluate the strategy of the expanded support polygon.

A. Passive Lower Limb Exoskeleton

Based on the assistance requirements mentioned in Section II-C, a passive lower limb exoskeleton was developed, as shown in Fig. 7. This exoskeleton is referred to as the thigh brace exoskeleton (TBE) for convenience. TBE was attached to the right thigh and shin, and it mainly assisted the right leg. TBE comprised two main links, a gas spring, a wheel, a rotational joint, and an intermediate link. The gas spring was selected as the elastic brace and expanded the support polygon by putting its tip on the ground. The other parts each had their role and contributed to the novel strategy of the expanded support polygon.

The first link was attached to the thigh and is referred to as the thigh link [see Fig. 7(a)]; it was attached by two belts referred to as thigh belts [see Fig. 7(d)]. The position of the thigh belts was adjustable, so users could wear TBE according to their leg lengths. The second link was attached to the shin by a knee-shin guard and is referred to as the shin link [see Fig. 7(a)]. The links are cold-rolled commercial steel pipe. The gas spring was attached to the thigh link, which formed a pipe covering the gas spring [see Fig. 7(b)]. The range of output was from 150 to 180 N, linear in the compression (the spring constant is 150 N/m). The rotational joint connected the thigh link and the shin link [see Fig. 7(f)]; it had one degree of freedom and acted like a knee joint. The intermediate link was a simple, strong wire [see Fig. 7(g)] attached to the tip of the gas spring and to the shin link. The length of this link plays an important role in deciding the trajectory of the thigh link during the STK-KTS. The wheel was attached to the tip of the gas spring [see Fig. 7(e)], and a soft cushion was attached to the knee-shin guard to relieve the pressure applied to the knee.

B. Design for Assistance Requirements

There are three requirements for assisting the STK-KTS movement, as mentioned in Section II-C. To satisfy them, various parts were incorporated into TBE.
First, the gas spring was used to assist the plantar-flexion torque of the ankle joint. The thigh link was used to attach the gas spring to the thigh of the wearer, and the wheel helped the tip of the gas spring move smoothly on the ground. A pipe extending from the thigh link propped the kneecap to apply the assistive force to the lower limb [see Fig. 7(h)]. However, these parts alone could not assist the plantar-flexion torque, because the wearer could not compress the gas spring at all owing to the movement of the wheel, while the wheel was necessary for smooth grounding. Furthermore, if the grounding point moves freely, the range of the support polygon becomes unstable, and the wearer may lose balance. The main cause of this issue was that the movement of the wheel was not restricted. Hence, the issue was solved by constraining the postures of both the gas spring and the wheel.

To realize the constraint, a link mechanism was constructed using the shin link, the rotational joint, and the intermediate link, as shown in Fig. 8(a). We assumed that the tip of the shin link was fixed on the ground due to the friction force between the ground and the tip of the shin link; a rubber cap was attached to the tip. The rotation of the shin link around its tip led to the compression of the gas spring, and the intermediate link prevented the wheel from moving forward freely. The movement of the wheel was determined by the amount of compression of the gas spring, which means that the posture of TBE was uniquely determined by the rotation of the shin link. Due to this constraint, the trajectory of the grounding point was also determined uniquely, which means that the transformation of the support polygon was stable for every trial.

Second, the assistance requirement for the knee-flexion torque can be satisfied by constraining the relative position of the thigh to the gas spring: the gas spring prevents the thigh from moving too far forward. The experiment conducted in Section II-A showed that the right thigh stayed vertical to the ground from 1 to 3 s (see Fig. 3). It is considered that people can perform the STK-KTS comfortably if the thigh keeps this angle. Hence, the angle θ_eb−g between the gas spring and the ground should be designed to stay at about 90°; given the direction in which the wheel moves, the angle should be between 85° and 90°. The range of the angle θ_eb−g changes according to the links' lengths, particularly that of the intermediate link. We conducted a parametric search over the length of the intermediate link to keep θ_eb−g approximately between 85° and 90°. Fig. 8(b) shows the relationship between θ_eb−g, the compression ratio of the gas spring, and the length of the intermediate link. A length of 25 cm was selected because the range remains around 87°.

Finally, to satisfy the requirement on the timing of assistance, the length of the gas spring's rod was set to 20 cm. Although the length can be adjusted by the user, 20 cm was used in this article to prepare a unified experimental setup. We confirmed in Section IV that 20 cm was enough to reduce the effort of the lower limb.
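The parametric search over the intermediate-link length described above follows a simple sweep-and-filter pattern, sketched below. The function theta_eb_g() is a made-up placeholder standing in for the real linkage kinematics of Fig. 8 (the true mapping follows from TBE's geometry, which is not reproduced here); only the search logic reflects the procedure.

```python
import numpy as np

def theta_eb_g(c, L):
    """Hypothetical brace-ground angle (deg) versus gas-spring compression
    ratio c and intermediate-link length L; NOT the real TBE kinematics."""
    return 90.0 - 25.0 * c * abs(L - 0.25) / 0.05 - 3.0 * c

lengths = np.arange(0.15, 0.36, 0.01)      # candidate link lengths (m)
c = np.linspace(0.0, 1.0, 51)              # compression ratio of the gas spring

# Keep only lengths whose angle range stays inside the 85-90 deg design band.
for L in lengths:
    th = theta_eb_g(c, L)
    if th.min() >= 85.0 and th.max() <= 90.0:
        print(f"L = {L:.2f} m keeps theta in [{th.min():.1f}, {th.max():.1f}] deg")
```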
Supporting the static kneeling posture is another requirement for enhancing the effectiveness of TBE. To support the static kneeling posture, we focused on two points: postural oscillations and pressure. First, fast postural oscillations of the knee joint can be observed during static kneeling [29], so the knee joint requires constant fine-tuning. Mechanical constraints of the thigh can prevent these oscillations and relieve the effort of the lower limbs. Since the thigh link is in front of the thigh, forward falling is prevented. In addition, the thigh belts support the thigh from the back side, which constrains the angle of the knee joint. The effectiveness of relieving this effort is evaluated in Task 2 in Section IV. Second, the knee is exposed to considerable pressure from the ground [30]. The pressure can be relieved using a knee guard and a soft cushion [31], as shown in Fig. 7.

C. Assistive Strategy of Expanded Support Polygon

TBE follows the strategy of the expanded support polygon. The effectiveness of this strategy was analyzed through the same experiment conducted in Section II-A. The trajectory of the CoG and the support polygon were measured with the motion capture system, and the margin for stability was calculated.

Fig. 9 shows the margin with the real brace (TBE, green), with the simulated brace (result in Section II-A, yellow), and without any brace (blue). The margin with TBE was always more than 9 and 7 cm during motions (iii) and (v), respectively. It was confirmed that TBE expanded the support polygon during STK-KTS compared with the condition without any assistance; the average increments were 5.0 and 8.0 cm in motions (iii) and (v), respectively. Hence, this experiment confirmed that TBE successfully expanded the support polygon of the wearer during STK-KTS.

Compared with the result of the simulation conducted in Section II-A, the movement of the CoG with the real TBE was unstable during motion (iii). It is considered that the use of TBE might change the weight applied to the right leg, which shifted the position of the CoG and made the distance unstable. Regarding the result of motion (v), the margin in the case of TBE was 2.1 cm less than that of the simulated brace. The difference could arise because the simulation did not take into account the wearer's strategy to reduce the joint effort: it is considered that the wearer shifted his CoG toward the grounding point of the gas spring and applied his weight to the gas spring, which reduced the body weight applied to his leg and relieved the joint efforts. On the other hand, the margin was reduced because the CoG approached the side of the support polygon. To confirm whether the amount of expansion was enough to reduce the wearer's effort, this study conducted a further experiment on myoelectricity in Section IV.

IV. EXPERIMENTS

This section examines the basic performance of TBE based on measurements of myoelectricity. The experiment was divided into three categories: STK-KTS movements, static kneeling posture, and a task near the floor.

A. Participants

The participants were seven young men with no physical disabilities (24.4 ± 1.0 years, 172 ± 5.0 cm, 58 ± 9.5 kg). Consent was obtained from all participants before performing the experiments. This study was approved by the ethics committee of Nagoya University (No. 21-5).
B. Methods and Measurements of Myoelectric Potentials

The myoelectric potential was used to evaluate the effectiveness of the assistance. It is an electric signal that flows through muscle fibers when muscles contract [32], and it enables a quantitative evaluation of the performance of TBE. In this experiment, a noninvasive method was used to measure the myoelectric potential with wireless sensors (Cometa). The sampling frequency of the sensors was 2000 Hz. The acquired signals were passed through a 5-500 Hz band-pass filter and smoothed using the root mean square with a 0.1-s window. The total effort of the muscles over time was obtained by integrating the processed signals over time [28].

Further, the percentage of maximum voluntary contraction (%MVC) was used to normalize the measured myoelectric potential. MVC is a method for decreasing the influence of muscle size among subjects. The subjects performed MVC on each muscle for 3 s in specific postures, with intervals of 1 min; the postures were determined based on Konrad [32]. The data were assessed using a paired-sample t-test, and significant gaps are represented by * (p < 0.05) and ** (p < 0.01).
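The processing chain described above (band-pass filtering, RMS smoothing, %MVC normalization, and time integration) can be sketched as follows; the synthetic input and the MVC reference value are placeholders, not recorded data.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000.0  # sampling frequency (Hz)

def emg_effort(raw, mvc_rms, fs=FS):
    """Band-pass 5-500 Hz, RMS-smooth with a 0.1-s window, normalise to
    %MVC, then integrate over time (the muscle-effort measure used above)."""
    b, a = butter(4, [5.0 / (fs / 2), 500.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, raw)
    win = int(0.1 * fs)
    rms = np.sqrt(np.convolve(filtered**2, np.ones(win) / win, mode="same"))
    pct_mvc = 100.0 * rms / mvc_rms
    t = np.arange(len(raw)) / fs
    return np.trapz(pct_mvc, t)            # %MVC * s

# Demo with synthetic data (not recorded EMG): 4 s of modulated noise.
rng = np.random.default_rng(0)
n = int(4 * FS)
raw = rng.standard_normal(n) * (1 + np.sin(np.pi * np.arange(n) / FS) ** 2)
print(f"EoT = {emg_effort(raw, mvc_rms=5.0):.1f} %MVC*s")
```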
C. Procedures

Since the purpose of these experiments is to evaluate the basic performance of TBE, experiments on each function were conducted separately. First, Task 1 was conducted to confirm the effectiveness of the transition assistance. The subjects performed the STK-KTS movements under two conditions: with TBE [see Fig. 11(a)] and without TBE [see Fig. 11(b)]. The transitions of both STK and KTS required 2 s each. The subjects practiced completing this movement in exactly 2 s using a metronome. Since the total effort of the muscles was acquired by integrating over time for the same duration, a slight difference in speed could be minimized. The subjects repeated the movements five times. In the condition without TBE, a pad was set on the ground to match the height of TBE when the subjects were kneeling [see Fig. 11(b)].

Second, Task 2 was carried out to confirm that TBE reduced the effort of the lower limb in the static kneeling posture. The subjects maintained a kneeling posture with and without TBE for 30 s, as shown in Fig. 11(c) and (d). The same setup as in Task 1 was used to match the height of the knee. To investigate the effect of the thigh belts that constrained the knee joint angle, another condition was added: the subjects maintained the kneeling posture with TBE but without the thigh belts, as shown in Fig. 11(e). The results of this task indicate whether the wearer can complete tasks such as dressing/undressing a patient's upper-body clothes.

Finally, the subjects engaged in Task 3, which was conducted to confirm whether TBE can assist with dressing/undressing tasks done near the floor. The subjects performed a manual task near the floor for 30 s under two conditions: (f) kneeling with TBE and (g) kneeling without TBE, as shown in Fig. 11(f) and (g). In this task, the subjects were asked not to support their body with their upper limbs, so that the effort of the lower limb could be measured properly.

Each subject practiced the tasks for 30 min before starting the experiments, and all subjects were accustomed to TBE. The tasks were done in order, and the order of the conditions in each task was randomized.

D. Results

It was found that TBE reduced the myoelectric potential during the STK-KTS movements, particularly in the right leg, as shown in Fig. 12(i). The mean reduction was 2.2% MVC (p < 0.05), and the reduction in EoT was 13.6%. The reduction on the right side was 4.1% MVC (p < 0.05). Further, the largest decrease was 9.1% MVC in the right VL (p < 0.05), followed by 6.5% MVC in the right VM (p < 0.05) and 4.1% MVC in the right TA (p < 0.05). In contrast, the left-side muscles showed a 0.4% MVC reduction. No significant difference due to body size was observed in this task.

Fig. 12(ii) shows the myoelectric potential during the static kneeling posture. Regarding the comparison between cases (c) with TBE and (d) without TBE, the average reduction in myoelectric potential was 2.9% MVC (p < 0.01), and the reduction of EoT was 37.9%. In addition, the reduction in the right leg was 3.2% MVC (p < 0.05). The reduction in the right VL muscle was 8.2% MVC (p < 0.05), the largest of all the measured muscles in this task, followed by a reduction of 4.2% MVC in the right TA (p < 0.01). Regarding the difference between (c) with TBE and (e) with TBE but without thigh belts, it was observed that the no-belt condition increased the output of the right VL and VM muscles by 3.7% and 3.2% MVC, respectively.

V. DISCUSSION

A. STK-KTS Movements

The experiments confirmed that TBE reduced the myoelectric potential of the wearer during the STK-KTS movements [see Fig. 12(i)]. The reduction on the right side was greater than that on the left side because TBE was attached only to the right leg. Large reductions were observed in the right VL and VM [see Fig. 12(i), R-VL and R-VM]. Because the output of these muscles could be decreased by the improved stability of the CoG [34], the strategy of the expanded support polygon appears to be effective.

Furthermore, the reduction in the right BF muscle, which is responsible for the knee-flexion torque, was not significant. This could be because the force applied to the thigh to prevent a forward fall was too small to reduce the effort of the knee torque. This indicates that another elastic part is needed to reduce this muscle activity.

A significant reduction was observed in the right TA, which works for the dorsiflexion of the ankle joint. On the other hand, the activity of the right GC muscle, which works for the plantar flexion of the ankle joint, was not reduced significantly, although it was expected that the GC muscle activity would be reduced by TBE. Fig. 13 shows the time-series data of the myoelectricity in the right TA and GC muscles; the GC muscle did not work during the periods when TBE assisted (1-2 s during STK and 0-1 s during KTS). In addition, as shown in Fig. 11(a) and (b), the subjects performed STK-KTS by moving the knee and toe joints, not the ankle joint. We consider that the subjects enhanced motion stability by fixing one joint and thereby reducing the number of joints to be controlled. This indicates that the ankle joint was fixed during the assistance, so there was no difference in the activity of the GC muscle when TBE assisted the wearer.

On the other hand, the TA muscle worked hard when TBE was not worn, as shown in Fig. 13.
It is considered that the TA muscle, rather than the GC muscle, was required to fix the angle, because the angle was fixed at the maximum dorsiflexion position. When TBE assists the movement, it expands the support polygon of the wearer and enhances stability, which could relieve the effort of this fixation; we consider that this relief led to the reduction of the TA muscle activity.

The results indicate that the enhanced stability owing to the expanded support polygon reduced the muscle activities of the right VL, VM, and TA muscles, and that the amount of expansion investigated in Section III-C was sufficient. Furthermore, since the reduction in BF and GC muscle activities was limited, we should analyze the kinematics and biomechanics of STK-KTS in the future. For this analysis, it would be essential to use optical motion capture to measure the movement of the knee, ankle, and toe joints more accurately.

B. Static Kneeling

TBE reduced the myoelectric potential of the leg during the static kneeling posture. Large reductions were observed in the right VL and TA [see Fig. 12(ii), R-VL and R-TA]. We consider that these muscles were used to constrain the knee joint, and TBE could relieve this effort.

To confirm the effect of the constraint on the knee joint, the condition of the kneeling posture with TBE but without thigh belts [see Fig. 11(e)] was examined. It was observed that the myoelectricity in the right VL, VM, and TA muscles [see Fig. 12(ii-e), R-VL, R-VM, and R-TA] increased when the thigh belts were removed. This indicates that the constraint of the knee joint worked well and assisted the wearer in maintaining a static kneeling posture.

The activities of the left BF, VL, VM, and TA muscles were also reduced. Thus, owing to the assistance of the right leg, the right leg could afford to support the left leg. However, in contrast to the activity of the right muscles, the activity of the left muscles was not affected by the thigh belts [see Fig. 12(ii-c) and (ii-e)].

Through this task, it was confirmed that TBE could reduce the muscle activities of the lower limbs in the kneeling posture. These results confirmed that caregivers could dress/undress a patient's upper-body clothes in the kneeling posture with reduced muscle effort.

C. Task Near the Floor

In the experiment simulating a manual task near the floor, TBE reduced the myoelectric potential of the legs. Large reductions were observed in the right VL, VM, and TA muscles [see Fig. 12(iii), R-VL, R-VM, and R-TA]. As in Task 2, the thigh belts are expected to contribute to the reduction. Results similar to those of Task 2 were obtained for the left-muscle output.

In Section I, it was mentioned that an exoskeleton that can assist caregivers in dressing/undressing a patient's lower-limb clothes near the floor is required. The results of this experiment indicate that, owing to TBE, the wearer can complete tasks done near the floor without large muscle activities.

VI. CONCLUSION

This study developed a lower limb exoskeleton, TBE, which assists in the STK-KTS transition and the static kneeling posture. These movements require physical effort due to the shift of the CoG. Consequently, TBE assisted the wearer in the movements by employing a novel strategy of expanding the support polygon. Through analysis with Lagrange dynamics, it was confirmed that TBE reduced the required torque of the leg during STK-KTS movements.
Experiments were conducted to investigate the performance of TBE. The myoelectric potentials of the legs during the STK-KTS transition were reduced owing to the use of TBE. Since the appropriate assistive force can depend on the user's weight, an investigation into the relationship between assistive force and body weight will be conducted in the future. TBE also reduced muscle output in the static kneeling posture, and the constraint on the knee joint was confirmed to have contributed to this reduction. When subjects worked near the floor with TBE, the load on the leg was reduced. Therefore, these results imply that TBE assists caregivers in comfortably performing tasks done near the floor, such as dressing/undressing patients' clothes and shoes. Moreover, TBE has the potential to prevent caregivers from suffering injuries/disorders due to manual work. In the future, the effects of using TBE for long periods will be verified at real sites, such as nursing homes.

One limitation of TBE is its size. Although the wearer can walk while wearing TBE, passing through a narrow area may be difficult. Transformation mechanisms, such as folding, would be effective in avoiding conflicts between the exoskeleton and the environment during walking. We will conduct detailed stress analysis and mechanical optimization to reduce the size of TBE while maintaining durability.

Since lower back pain is another issue for caregivers, developing a function supporting the lower back would also be helpful. In the future, we will focus on a more comprehensive exoskeleton capable of reducing the load on the lower back in addition to the lower limbs.

Fig. 1. Four possible postures for working near the floor.
Fig. 2. Proposed novel assistive strategy "expanded support polygon" with an elastic brace (blue bar). The prototype realizing the assistive strategy is shown.
Fig. 4. Definition of the margin for stability and its results. (a) Margin without the brace. (b) Margin with the brace; the blue line refers to the expanded support polygon's sides. (c) Time-series data of the margin during motion (iii). (d) Time-series data of the margin during motion (v).
Fig. 5. Simulation setups. (a) Model of a human (wearer). (b) Model of a human and an elastic brace. (c) Model performing the STK movement in the first 2 s and the KTS movement in the following 2 s. (d) Spring force characteristics and assistive force (torque) generated by the elastic brace.
Fig. 6. Simulation results. (a) Time-series data of the angle of each joint of the human model during the transition; dotted lines represent the angle commands, and solid lines the output of the model. (b) Torque required at each joint during STK-KTS without any assistance. (c) Reduction ratio of effort over time at each joint in the four assistive conditions. (d)-(f) Time-series data of the torque required at each joint.
Fig. 7. Thigh brace exoskeleton (TBE). (a) Overview and the main parts. (b) Gas spring and its components. (c) Front view. (d) Two thigh belts. (e) Wheel attached to the tip of the gas spring. (f) Rotational joint. (g) Intermediate link. (h) Link propping the kneecap.
Fig. 8. (a) Link mechanism of TBE constraining the trajectory of the gas spring; θ_eb−g is the angle between the brace (gas spring) and the ground. (b) Relationship between θ_eb−g and the compression ratio of the gas spring for different lengths of the intermediate link.
Fig. 9. (a) and (b) Time-series data of the margin during motions (iii) and (v), respectively. The blue and yellow lines are the same as in Fig. 4, and the green line is the additional result obtained with TBE.
Fig. 10. Six kinds of muscles measured in the experiments.
Fig. 13. Myoelectric potential of the right TA and GC muscles of one subject. Yellow and blue lines represent values with and without TBE. (a) and (b) Time-series data of the muscle activities from standing to kneeling postures in the right TA and GC muscles, respectively. (c) and (d) Time-series data from kneeling to standing postures in the right TA and GC muscles, respectively.
Predictive Value of Combined LIPS and ANG-2 Level in Critically Ill Patients with ARDS Risk Factors

To investigate the predictive value of the acute physiology and chronic health evaluation 2 (APACHE2) score and the lung injury prediction score (LIPS) for acute respiratory distress syndrome (ARDS) when combined with biomarkers for this condition, 158 Han Chinese patients with ARDS risk factors were recruited from the Respiratory and Emergency Intensive Care Units. The LIPS, APACHE2 score, primary diagnosis at admission, and ARDS risk factors were determined within 6 h of admission, and PaO2/FiO2 was determined on the day of admission. Blood was collected within 24 h of admission for the measurement of angiopoietin-2 (ANG-2), sE-selectin, interleukin-6 (IL-6), and interleukin-8 (IL-8) levels. ARDS was monitored for the next 7 days. Univariate and multivariate analyses and receiver operating characteristic (ROC) analyses were employed to construct a model for ARDS prediction. Forty-eight patients developed ARDS within 7 days of admission. Plasma ANG-2 level, sE-selectin level, LIPS, and APACHE2 score in ARDS patients were significantly higher than those in non-ARDS patients. ANG-2 level, LIPS, and APACHE2 score were correlated with ARDS (P < 0.001, P < 0.006, and P < 0.042, resp.). When the APACHE2 score was used in combination with the LIPS and ANG-2 level to predict ARDS, the area under the ROC curve (AUC) was not significantly increased. Compared to the LIPS or ANG-2 alone, the LIPS in combination with ANG-2 had a significantly increased positive predictive value (PPV) and AUC for the prediction of ARDS. In conclusion, plasma ANG-2 level, LIPS, and APACHE2 score are correlated with ARDS, and the combination of LIPS and ANG-2 level displays favorable sensitivity, specificity, and AUC for the prediction of ARDS.

1. Introduction

Acute respiratory distress syndrome (ARDS) is a critical illness characterized by noncardiogenic pulmonary edema and refractory hypoxemia [1]. Although great progress has been made in the methods used to improve the clinical prognosis of ARDS (such as the use of protective mechanical ventilation [2-4] and fluid balance therapy [5]), the morbidity and mortality of ARDS remain largely unchanged. Thus, early prediction of and early therapy for ARDS would be helpful for reducing morbidity and mortality [6]. Unfortunately, although a variety of ARDS studies have been conducted, there is no satisfactory prediction model for ARDS. The multicenter study by Gajic et al. included more than 5000 cases, and the investigators constructed a predictor of ARDS: the lung injury prediction score (LIPS) [7, 8]. However, the positive predictive value (PPV) of the LIPS was only 0.18, thereby limiting its clinical application. Other predictors of ARDS (such as early acute lung injury (ALI) and surgical lung injury prediction models) have not been validated in clinical practice [9, 10]. We hypothesized that the combined use of two or more parameters would be better than using only one factor in predicting ARDS. Thus, in the present study, the predictive value for ARDS obtained by combining the LIPS with one or more of 4 biomarkers was investigated.

2.1. Study Population. In this prospective study, 254 Han Chinese patients with risk factors for ARDS were recruited from the Respiratory Intensive Care Unit (RICU) and Emergency Intensive Care Unit (EICU) of Xinqiao Hospital, Daping Hospital, and Southwest Hospital of the Third Military Medical University, between March 2013 and May 2016.
The inclusion criterion was one or more risk factors for ARDS [8]. The exclusion criteria were as follows: (1) patients who developed ARDS before initial evaluation or blood collection (n = 16); (2) patients who were rehospitalized (n = 4); (3) the hospital stay was shorter than 7 days, and it was unfeasible to determine the clinical outcome (n = 12); (4) patients who died within 6 h of admission (n = 1); (5) patients who had a history of chronic interstitial lung disease (n = 6) or were diagnosed with congestive heart failure (n = 5); (6) chest computed tomography (CT) or computed radiography (CR) was not performed within the prior 7 days (n = 21); and (7) sample collection was not performed within 24 h of admission (n = 31). Patients fulfilling one or more of the above conditions were excluded from the study. Finally, 158 patients were enrolled into our study (Figure 1). This study was approved by the Ethics Committee of the Third Military Medical University. Informed consent was obtained from each patient or the patient's relatives before the study.

Sample Collection. Blood was collected within 24 h of admission into the RICU or EICU, and plasma was separated and stored at −80°C.

Biomarker Measurements. Plasma concentrations of ANG-2, sE-selectin, IL-6, and IL-8 were measured by commercial ELISA kits (Cusabio, China) according to the manufacturer's instructions as follows: standards for ANG-2, sE-selectin, IL-6, and IL-8 were prepared for generating the corresponding standard curves. In each well, 100 μl of sample or standard was added, and the plate was sealed with a membrane for 90 min of reaction at 37°C. Then, 100 μl of biotin-labeled anti-rat antibodies was added for 60 min of reaction at 37°C. Subsequently, 300 μl of washing buffer was added, and after the mixture had soaked into the plate for 1 min, the buffer was discarded. In each well, 90 μl of color development solution was added, and the plate was sealed with a membrane and placed in the dark for 30 min of reaction at 37°C. Thereafter, 100 μl of termination solution was added, and the color of the solution turned from blue to yellow. Samples were read at 450 nm using a microplate reader. Values were calculated based on a standard curve constructed for each assay.
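As a concrete illustration of that last step, the sketch below shows one common way to back-calculate sample concentrations from an ELISA plate: fit a four-parameter logistic (4PL) curve to the standards and invert it for each sample's optical density. This is a minimal sketch under stated assumptions; the standard concentrations, OD readings, and starting parameters are illustrative placeholders, not the calibration data of the Cusabio kits used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    """Four-parameter logistic: a = lower asymptote, d = upper asymptote,
    c = inflection point (EC50), b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical ANG-2 standard series (ng/ml) and measured OD450 values.
std_conc = np.array([0.156, 0.312, 0.625, 1.25, 2.5, 5.0, 10.0])
std_od = np.array([0.09, 0.15, 0.27, 0.48, 0.83, 1.35, 1.95])

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 2.2, 2.0, 1.0], maxfev=10000)

def od_to_conc(od, a, d, c, b):
    """Invert the fitted 4PL curve to read a concentration off the standard curve."""
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

sample_od = 0.60
print(od_to_conc(sample_od, *params))  # interpolated ANG-2 concentration (ng/ml)
```

In practice each plate gets its own fitted curve, matching the per-assay standard curves described above.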
Clinical Data Collection. Baseline clinical information, including age, sex, admission source, primary diagnosis at admission, ARDS risk factors, ARDS risk modifiers, and other parameters, was collected within 6 h of admission into the RICU or EICU (Table 1). The LIPS was calculated within 6 h of admission as previously reported [8]. The LIPS has two indexes comprising 22 categories, such as shock, aspiration, and sepsis; scores range from 0 to 15.5. The acute physiology and chronic health evaluation 2 (APACHE2) score was calculated within 24 h of admission. The APACHE2 score has three categories, namely, the acute physiology score, age score, and chronic health score [21]. The scoring was performed by 2 investigators who were blinded to the measurement and expression of the biomarkers.

Primary Outcome and Definitions. The primary endpoints were ARDS onset within 7 days and clinical outcomes of ARDS within 60 days. The primary endpoints were determined by two experienced clinicians who were blinded to the expression of the plasma biomarkers. ARDS was diagnosed according to the Berlin definition of ARDS [1]. Sepsis, severe sepsis, and septic shock were diagnosed according to the criteria of the American College of Chest Physicians/Society of Critical Care Medicine Consensus Conference [22].

2.6. Statistical Analysis. Statistical analysis was performed with SPSS version 20.0. Continuous data were expressed as the mean ± standard deviation and categorical data as numbers. Comparisons of continuous data between the two groups (patients with and without ARDS) were performed with Student's t-test, and comparisons of categorical data with the chi-square test or Fisher's exact test. Univariate and multivariate logistic regression analyses were employed to identify factors associated with ARDS. For the establishment of the model with the LIPS and ANG-2, the probability value was obtained from logistic regression analysis and then used as a new indicator for the diagnosis of ARDS based on receiver operating characteristic (ROC) curve analysis. The accuracy of diagnosis was determined using the area under the ROC curve (AUC) with 95% confidence intervals (CIs). Linear regression analysis was used for determining correlations. A value of P < 0.05 was considered statistically significant.

Clinical Information and Patient Characteristics at Baseline

3.1.1. Baseline Characteristics of Patients. A total of 254 patients with risk factors for ARDS were recruited, and 158 patients were included in the final analysis. The incidence of ARDS was 28.5% within 7 days of admission (45/158). As shown in Table 1, there were no significant differences in age, sex, initial diagnosis, or risk factors between the ARDS and non-ARDS groups. However, the APACHE2 score, LIPS, use of invasive mechanical ventilation, and mortality within 60 days were significantly higher in the ARDS group than in the non-ARDS group, indicating that disease severity in the ARDS group was higher than that in the non-ARDS group.

Characteristics of Patients in Different Groups at Baseline. Preexisting medical interventions and therapies before evaluation are important factors affecting the accuracy of a prediction model. However, whether patients receive interventions or therapies before admission is an uncontrollable factor. Thus, a good prediction model requires the inclusion of such medical confounding factors. In the present study, patients were divided into two groups as follows: (group A) patients who had received vasopressors or different kinds of respiratory support, including oxygen inhalation through nasal tubes, noninvasive mechanical ventilation, and/or invasive mechanical ventilation, before admission and (group B) patients who received no therapy before admission. The APACHE2 score, use of invasive mechanical ventilation, and mortality within 60 days were comparable in the 2 groups, suggesting that disease severity was similar between them (Table 2).

Prediction and Regression Analysis of LIPS, APACHE2 Score, and ANG-2, sE-Selectin, IL-6, and IL-8 Levels for ARDS

3.2.1. LIPS, APACHE2 Score, and ANG-2, sE-Selectin, IL-6, and IL-8 Concentrations in the ARDS and Non-ARDS Groups.
Plasma ANG-2 level, sE-selectin level, LIPS, and APACHE2 score in the ARDS group were significantly higher than those in the non-ARDS group, but plasma IL-8 and IL-6 levels did not differ between the two groups (Table 3).

Univariate and Multivariate Regression Analyses of LIPS and Biomarkers for the Prediction of ARDS. Univariate analysis showed that ANG-2 level, sE-selectin level, APACHE2 score, LIPS, and septic shock were closely associated with ARDS (Table 4, univariate analysis). However, multivariable logistic regression analysis indicated that only ANG-2 level, LIPS, and APACHE2 score were correlated with ARDS (Table 4, multivariate regression analysis).

Prediction of ARDS with the APACHE2 Score Alone or in Combination with LIPS or ANG-2 Level. When the APACHE2 score, LIPS, and ANG-2 level were independently used to predict ARDS, the APACHE2 score had the lowest AUC (0.649). When the APACHE2 score was used in combination with LIPS or ANG-2 level for the prediction of ARDS, the AUC did not significantly increase (Tables 5 and 6). The APACHE2 score had a low AUC for the prediction of ARDS, and the APACHE2 score in combination with LIPS or ANG-2 level also failed to increase the AUC for the prediction of ARDS. Thus, the APACHE2 score was not included as a factor for the prediction of ARDS.

3.3. Prediction of ARDS with LIPS, ANG-2, and LIPS + ANG-2 Models. In subsequent analyses, we used LIPS, ANG-2 level, and LIPS + ANG-2 level to establish models for the prediction of ARDS. In the LIPS + ANG-2 model, the probability for LIPS and ANG-2 was obtained from logistic regression analysis (Y = −3.586 + 0.317 × LIPS + 0.232 × ANG-2) and then used to predict ARDS. When the cutoff value of ANG-2 level was 4.121 ng/ml, the sensitivity, specificity, and AUC were 66.67%, 75.22%, and 0.735, respectively, in predicting ARDS. The predictive value of the ANG-2 model was slightly better than that of the LIPS model. With a cutoff of 0.2821 for the LIPS + ANG-2 model, the sensitivity, specificity, and AUC were 71.11%, 79.65%, and 0.803, respectively, in predicting ARDS. The PPV and AUC for the LIPS + ANG-2 model were significantly higher than those for the LIPS or ANG-2 model, indicating that the LIPS in combination with ANG-2 level has a better capability to predict ARDS than either parameter used alone (Table 7 and Figure 2).
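The published combined score lends itself to a direct bedside calculation. The sketch below, a minimal illustration rather than the authors' SPSS workflow, plugs the reported logit (Y = −3.586 + 0.317 × LIPS + 0.232 × ANG-2) into the logistic function, applies the 0.2821 cutoff, and shows how an AUC would be computed from outcome labels; the patient values and the small label/probability arrays are invented for demonstration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Reported logistic-regression coefficients for the LIPS + ANG-2 model.
INTERCEPT, B_LIPS, B_ANG2 = -3.586, 0.317, 0.232
CUTOFF = 0.2821  # reported probability cutoff for calling ARDS

def ards_probability(lips: float, ang2_ng_ml: float) -> float:
    """Predicted ARDS probability from the published LIPS + ANG-2 logit."""
    y = INTERCEPT + B_LIPS * lips + B_ANG2 * ang2_ng_ml
    return 1.0 / (1.0 + np.exp(-y))

# Illustrative patient: LIPS = 6.5, ANG-2 = 5.0 ng/ml.
p = ards_probability(6.5, 5.0)
print(f"P(ARDS) = {p:.3f} ->", "positive" if p >= CUTOFF else "negative")

# Given outcomes and predicted probabilities for a cohort, the reported
# AUC corresponds to the following computation (toy arrays shown):
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1])
y_prob = np.array([0.10, 0.25, 0.45, 0.50, 0.60, 0.35, 0.15, 0.70])
print("AUC =", roc_auc_score(y_true, y_prob))
```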
Subgroup Analysis of the LIPS, ANG-2, and LIPS + ANG-2 Models. The major difference between group A and group B was the use of medical intervention or therapy before evaluation of the LIPS or measurement of biomarkers. However, prior medical interventions or therapies may affect the accuracy of prediction models. To evaluate the influence of medical intervention or therapy on the accuracy of the above models, we performed subgroup analysis. The results showed that the LIPS + ANG-2 model had the largest AUC (0.772) and the LIPS model had the smallest AUC (0.652) in group A; the LIPS + ANG-2 model had the largest AUC (0.847) and the ANG-2 model had the smallest AUC (0.720) in group B. These results suggest that the LIPS + ANG-2 model has a better predictive value for ARDS than the LIPS or ANG-2 model regardless of prior medical intervention or therapy. The AUCs for the LIPS + ANG-2 model and the LIPS model in group A were smaller than those in group B (0.772 versus 0.847 and 0.652 versus 0.788, respectively), but the AUC for the ANG-2 model in group A was larger than that in group B (0.749 versus 0.720). These findings indicate that although the prediction of ARDS with the LIPS + ANG-2 model is affected by prior medical intervention, the LIPS + ANG-2 model retains a better predictive capability for ARDS. Moreover, the prediction of ARDS with the LIPS model is also influenced by prior medical intervention. However, the prediction with the ANG-2 model does not seem to be affected by prior medical intervention, and its AUC was higher in group A (Table 8 and Figure 3).

Discussion

Our results showed that the LIPS, evaluated based on clinical information, could predict the occurrence of ARDS (AUC: 0.704, 95% CI: 0.618-0.789, P < 0.001). In addition, of the 4 investigated biomarkers of ARDS, only ANG-2 level displayed predictive value for ARDS (AUC: 0.735, 95% CI: 0.641-0.829, P < 0.001). The combined use of the LIPS and ANG-2 level increased the accuracy of prediction of ARDS (AUC: 0.803, 95% CI: 0.727-0.879, P < 0.001), and the PPV of the LIPS + ANG-2 model increased to 58.19%. The LIPS model was proposed in 2011 by Gajic and Trillo-Alvarez for the prediction of ALI/ARDS on the basis of their multicenter study with a large sample, and it has a good predictive value for ALI (AUC: 0.80-0.84). Our results showed that the LIPS was also correlated with ARDS (odds ratio (OR): 1.324, 95% CI: 1.083-1.618, P = 0.006). For patients with critical illness in the ICU, the APACHE2 score is a good parameter for predicting mortality [23]. However, no study has been conducted on the usefulness of the APACHE2 score in the prediction of ARDS. In our study, the APACHE2 score was closely correlated with ARDS (OR: 1.070, 95% CI: 1.003-1.141, P < 0.042). However, compared with the LIPS and ANG-2 level, the APACHE2 score displayed the smallest AUC for ARDS prediction. Moreover, when combined with the LIPS and ANG-2 level, the APACHE2 score failed to increase the predictive power of these two parameters. Therefore, the APACHE2 score was not included for further analysis, but the LIPS was retained.

In this study, 4 biomarkers related to the pathogenesis of ARDS, namely, ANG-2, sE-selectin, IL-8, and IL-6, were measured in blood. ANG-2 is a secreted endothelial cell-specific growth factor. It can improve the sensitivity of vascular endothelial cells to vascular endothelial growth factors (VEGFs) and enhance angiogenesis in the presence of VEGF. On the other hand, ANG-2 can cause endothelial apoptosis, leading to vascular degeneration. Therefore, ANG-2 is an important biomarker of endothelial activation/dysfunction [24]. ANG-2 demonstrates proinflammatory activity and can regulate endothelial permeability [17]. ARDS is an uncontrollable pulmonary inflammation characterized by neutrophil activation and endothelial injury [11-13]. Increased vascular permeability and pulmonary vascular leakage are extremely important pathophysiological indicators of ARDS. Studies have shown that ANG-2 level is significantly increased in ARDS patients [18,19]. In patients with severe sepsis, ANG-2 level is correlated with the clinical outcomes of ARDS at 28 days and can be used to predict the prognosis of ARDS [25]. IL-8 and IL-6 are important proinflammatory cytokines involved in the pathogenesis of ARDS [7,14,16,26].
sE-selectin is a proinflammatory molecule expressed on endothelial cells that can mediate adhesion and aggregation between white blood cells and endothelial cells [27]. sE-selectin may predict ARDS with a PPV of 68% and a negative predictive value (NPV) of 86% [28]. The activation and migration of neutrophils are important for the pathogenesis of ARDS. We found that plasma ANG-2 and sE-selectin levels in ARDS patients were dramatically higher than those in non-ARDS patients, but IL-8 and IL-6 levels displayed no difference between the ARDS and non-ARDS groups. Further multivariate analysis showed that only ANG-2 had a close correlation with ARDS (OR: 1.258, 95% CI: 1.137-1.392, P < 0.001). Thus, sE-selectin, IL-8, and IL-6 were not included in the model for the prediction of ARDS, and ANG-2 was employed to establish this model.

Although our findings showed that the LIPS had predictive capability for ARDS, its AUC was significantly lower than that reported by Gajic et al. and Trillo-Alvarez et al. [7,8]. This difference may be explained by the fact that some patients in the present study were transferred from other hospitals, and medical intervention before the evaluation of the LIPS may have biased the results. Nevertheless, the LIPS still had a high predictive value with an AUC of 0.704. Our results also revealed that ANG-2 level alone had favorable predictive capability for ARDS (AUC: 0.735). However, we attempted to identify a model with better predictive capability than LIPS or ANG-2 alone. Thus, we performed logistic regression analysis of the LIPS and ANG-2, and the probability value (Y = −3.586 + 0.317 × LIPS + 0.232 × ANG-2) was obtained and used to predict ARDS. The results showed that, with a cutoff value for this probability of 0.2821, the AUC for the LIPS + ANG-2 model was 0.803 in predicting ARDS, higher than that for the LIPS or ANG-2 model alone. In addition, the LIPS + ANG-2 model had higher PPV and NPV than did the LIPS or ANG-2 model.

Unlike the study by Agrawal et al. [29], we further investigated whether the LIPS in combination with ANG-2 level had different predictive capabilities in patients with or without medical intervention before admission. The results showed that the prediction of ARDS with the LIPS model, but not with the ANG-2 model, was affected by prior medical intervention. Moreover, the predictive capability of the LIPS + ANG-2 model for ARDS was better than that of the LIPS or ANG-2 model alone, regardless of prior medical intervention. However, prior medical interventions did affect the accuracy of the LIPS + ANG-2 model in the prediction of ARDS, and its AUC was reduced by 7%. Nevertheless, the LIPS + ANG-2 model had a good predictive capability for ARDS in group A. Thus, we speculate that the LIPS + ANG-2 model would be more suitable for predicting ARDS in complex clinical situations.

Figure 2: ROC curves of the ANG-2, LIPS, and LIPS + ANG-2 models for predicting ARDS. The AUC for the LIPS + ANG-2 model was significantly higher than that for the LIPS or ANG-2 model, indicating that the LIPS + ANG-2 model has a better predictive value for ARDS than the LIPS and ANG-2 models.

Although strict inclusion and exclusion criteria were used in the present study to establish a better prediction model than the LIPS or ANG-2 model, our study had several limitations.
(1) The volume of blood collected was relatively small, and thus it was impossible to detect all biomarkers for ARDS (such as biomarkers related to epithelial injury, endothelial injury, and other inflammatory factors). (2) The time and location of sample collection were limited, and we failed to dynamically observe changes in plasma biomarkers or compare plasma biomarkers with bronchoalveolar lavage fluid (BALF) biomarkers, which may have limited our understanding of these biomarkers. (3) The sample size was small; we need to expand the sample size in future studies. Nevertheless, our study had some advantages. This was a multicenter study in which patients were recruited from three general hospitals. In addition, the exclusion criteria were strict and excluded most clinical confounding factors, making our results reliable. Furthermore, patients who received medical intervention before admission were also recruited in the present study. These patients are special but are common in ICUs. Thus, our results are more likely to be widely applicable.

Conclusions

Taken together, our results demonstrate that the combined use of a clinical scoring system and biomarkers of ARDS is helpful for the early prediction of ARDS.
Mutations in the SARS-CoV-2 spike receptor binding domain and their delicate balance between ACE2 affinity and antibody evasion

Abstract

Intensive selection pressure constrains the evolutionary trajectory of SARS-CoV-2 genomes and results in various novel variants with distinct mutation profiles. Point mutations, particularly those within the receptor binding domain (RBD) of the SARS-CoV-2 spike (S) protein, lead to functional alteration in both receptor engagement and monoclonal antibody (mAb) recognition. Here, we review the data on the RBD point mutations possessed by major SARS-CoV-2 variants and discuss their individual effects on ACE2 affinity and immune evasion. Many single amino acid substitutions within RBD epitopes crucial for antibody evasion capacity may conversely weaken ACE2 binding affinity. However, this weakening effect can be largely compensated by specific epistatic mutations, such as N501Y, thus maintaining the overall ACE2 affinity of the spike protein for all major variants. The predominant direction of SARS-CoV-2 evolution lies neither in promoting ACE2 affinity nor in evading mAb neutralization but in maintaining a delicate balance between these two dimensions. Together, this review interprets how RBD mutations efficiently resist antibody neutralization while the affinity between ACE2 and the spike protein is maintained, emphasizing the significance of comprehensive assessment of spike mutations.

Introduction

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), accounting for the devastating COVID-19 pandemic that burst out worldwide at the end of 2019, is a positive-strand RNA virus that belongs to the coronavirus family. As of 12 October 2023, over 771 million confirmed cases with nearly 7 million deaths due to SARS-CoV-2 and its numerous variants have been reported (counts from the WHO website). Despite the 3' to 5' exoribonuclease proofreading activity of its RNA polymerase (Duffy et al., 2008; Moeller et al., 2022), SARS-CoV-2 exhibits a high error rate during replication, which offers great potential to gain mutations (Robson et al., 2020). Given the extremely extensive infected population and the chronic infection cases reported in immunodeficient individuals, there is sufficient space at both spatial and temporal scales for SARS-CoV-2 to continuously develop various mutations, resulting in a huge reservoir for viral selection and ultimately drastic changes in viral characteristics (McGrath et al., 2022; Qu et al., 2023a; Riddell and Cutino-Moguel, 2023; Telenti et al., 2021; Tian et al., 2022). In particular, the viral spike (S) protein, responsible for the virus-host interaction, undergoes extensive amino acid substitution under the pressure of strong immune surveillance generated by natural infection, vaccination, or antibody therapy (Cao et al., 2022b; Carabelli et al., 2023; Focosi et al., 2022; Shrestha et al., 2022; Thorne et al., 2022). Myriads of SARS-CoV-2 variants with various S protein mutation profiles have emerged, reflecting their distinct evolutionary trajectories.
Functional and structural studies have revealed that the S protein consists of two functional domains, named S1 (amino acid residues 1-686) and S2 (amino acid residues 687-1,273), both of which are of great significance for viral entry (Ke et al., 2020; Mannar et al., 2022; Walls et al., 2020; Wrapp et al., 2020; Zhang et al., 2021b). The trimeric S protein forms a spike protruding from the virion surface, while its S1 domain is exposed to recognize and bind to the host receptor, angiotensin-converting enzyme 2 (ACE2) (Scialo et al., 2020). The S-ACE2 interaction occurs through the receptor binding domain (RBD, amino acid residues 306-534) in the S1 domain (Shang et al., 2020). After receptor recognition and viral attachment, several proteolytic cleavage steps occur to remove the S1 domain (Walls et al., 2020). Meanwhile, the S2 domain gets exposed, facilitating the following membrane fusion and the ultimate viral entry (Jackson et al., 2022; Kumar et al., 2022; Meng et al., 2022).

In this review, we discuss the effect of variant of concern (VOC) mutations on both ACE2 affinity and antibody resistance, aiming to reveal how RBD point mutations coordinate and achieve a balance between these two viral characteristics. We review how specific RBD mutations enable the further emergence of escape mutations by compensating for their deleterious effect on ACE2 affinity (e.g., N501Y), and how they also restrict overall sequence variation (e.g., almost all Omicron sublineages contain the Q498R-N501Y double mutation).

Methods to characterize various RBD mutations

To measure the ACE2 binding affinity of S proteins of distinct VOCs in vitro, surface plasmon resonance (SPR) (Piliarik et al., 2009) and biolayer interferometry (BLI) (Kumaraswamy and Tobias, 2015) techniques have been widely used. Both of these techniques utilize sensitive optical biosensors to capture minor signals generated by the association and dissociation of target molecules in real time and calculate the affinity constant (KD) to represent the S-ACE2 binding affinity (Han et al., 2022; Li et al., 2022; McCallum et al., 2022; Zhang et al., 2021a). Therefore, by comparing the KD value of a mutated S protein with that of a wildtype S protein under the same experimental condition, the effect of mutations on ACE2 binding affinity can be precisely measured and presented as the change in the negative logarithm of KD [Δ−log(KD)]. However, it is worth noting that the native interaction between trimeric S proteins and dimeric ACE2 receptors in vivo, such as the RBD open/closed conformations, cannot be fully and precisely represented by the SPR/BLI assays, which usually utilize only recombinant RBD protein and monomeric ACE2 (Walls et al., 2020; Yan et al., 2020). Besides, even when utilizing S and ACE2 proteins in the same form, the absolute KD value varies among different assays and studies (Dejnirattisai et al., 2021; Li et al., 2022; Zhang et al., 2021a).
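As a small numerical illustration of the Δ−log(KD) metric, the snippet below computes it, together with the corresponding fold change, from a pair of dissociation constants. The KD values are invented, chosen only so that the output matches the roughly 0.64 shift (about 4.3-fold) quoted for Alpha later in this review.

```python
import math

def delta_neg_log_kd(kd_wildtype: float, kd_mutant: float) -> float:
    """Change in -log10(KD) relative to wildtype (matching units);
    a positive value means the mutant binds ACE2 more tightly."""
    return math.log10(kd_wildtype) - math.log10(kd_mutant)

# Illustrative KDs (nM): wildtype ~20 nM vs. an N501Y-like mutant ~4.6 nM.
d = delta_neg_log_kd(20.0, 4.6)
print(f"delta -log(KD) = {d:.2f}, fold change = {10 ** d:.1f}x")
```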
To evaluate the antibody evasion capacity, in vitro neutralization assays are usually performed in the presence of different concentrations of mAbs (Muruato et al., 2020; Nie et al., 2020; Zhou et al., 2022). Based on the neutralization curve, an IC50 value (the concentration of a mAb needed to neutralize 50% of the virus in the neutralization assay) can be calculated to show the absolute neutralization potency for each mAb-variant pair (Cox et al., 2023). Moreover, by comparing the IC50 value of a mAb against a variant pseudovirus with the IC50 value for the wildtype pseudovirus as a reference, the fold reduction in neutralization (FRN) can be calculated (Cox et al., 2023). Thus, the geometric mean IC50 and the geometric mean FRN (mFRN) values for a given mAb-variant pair reflect the absolute neutralizing activity and the relative reduction in neutralization, respectively.

Although SPR/BLI analysis and in vitro neutralization assays are performed to reveal the virus characteristics of different SARS-CoV-2 variants, these assessments are limited in that the experimental scale is small. Fully mutated S proteins only reveal the combined effect of single mutations but cannot elaborate the extent of the effect of each individual RBD mutation or their various combinations on viral characteristics. Thus, it would be very meaningful and crucial to dissect out the effect of each individual mutation or their various combinations.

To address this need, deep mutational scanning (DMS) approaches, which conventionally integrate a yeast-surface display platform using a mutagenesis library of the spike RBD with fluorescence-activated cell sorting and deep sequencing techniques, have been developed (Cao et al., 2023; Frank et al., 2022; Greaney et al., 2021c; Starr et al., 2020; Taylor and Starr, 2023). Very recently, a novel DMS platform utilizing a pseudovirus mutagenesis library instead of yeast has also been established to examine the immune evasion capacity of the full spike (Dadonaite et al., 2023). Various strategies for library construction facilitate high-throughput investigation to reveal the effect of point or combined mutations, even including all possible RBD substitutions (Greaney et al., 2021c; Moulana et al., 2022; Starr et al., 2022a). Moreover, DMS techniques exhibit reliable reproducibility and output results consistent with conventional affinity and neutralization assays (Greaney et al., 2021b; Tan et al., 2023). Therefore, the established DMS systems enable high-throughput screening of the functional alterations induced by S protein mutations and can greatly contribute to the understanding of evolving SARS-CoV-2 mutants (Cao et al., 2022b).

Together, SPR/BLI analysis using recombinant S proteins can be performed to measure the S-ACE2 binding affinity [KD value and Δ−log(KD)], while pseudovirus-based in vitro neutralization assays can be carried out to evaluate antibody neutralization potency and breadth (IC50 and mFRN values). Combined with DMS techniques, the impacts of single RBD mutations (in the wildtype background) on ACE2 affinity and mAb resistance can be evaluated in a high-throughput, more precise, and less-biased way. With all these data and tools, we can characterize and compare the survival advantages of each different SARS-CoV-2 variant (Fig. 1B and 1C).
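To make the FRN and mFRN definitions concrete, here is a minimal sketch of how they would be computed from IC50 values pooled across studies; all IC50 numbers below are hypothetical.

```python
import numpy as np

def frn(ic50_variant: float, ic50_wildtype: float) -> float:
    """Fold reduction in neutralization for one mAb-variant measurement."""
    return ic50_variant / ic50_wildtype

def geometric_mean(values) -> float:
    """Geometric mean, as used for both mFRN and pooled IC50s."""
    return float(np.exp(np.mean(np.log(values))))

# Hypothetical IC50s (ng/mL) for one mAb measured in three studies.
ic50_wt = [12.0, 15.0, 9.0]
ic50_variant = [60.0, 40.0, 45.0]

mfrn = geometric_mean([frn(v, w) for v, w in zip(ic50_variant, ic50_wt)])
pooled_ic50 = geometric_mean(ic50_variant)
print(f"mFRN = {mfrn:.2f}, geometric-mean IC50 = {pooled_ic50:.1f} ng/mL")
```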
ACE2 affinity and antibody evasion capacity of distinct VOCs

We compiled 36 studies (up to 10 June 2023) on SARS-CoV-2 VOCs and other newly emerging variants with regard to their ACE2 binding affinity (Supplementary Data File 1). We further compiled 132 studies (118 studies covered in one review article (Cox et al., 2023) and 14 additional recent individual studies) related to the neutralizing activity of RBD-targeting mAbs (Supplementary Data File 1).

In this review, we mainly discuss 11 mAbs, including casirivimab, etesevimab, and tixagevimab in the Class-1 RBD epitope; bamlanivimab and cilgavimab in the Class-2 RBD epitope; bebtelovimab, imdevimab, and sotrovimab in the Class-3 RBD epitope; the cocktail of bamlanivimab and etesevimab; Evusheld (cocktail of tixagevimab and cilgavimab); and REGEN-COV (cocktail of casirivimab and imdevimab) (Fig. 1C). Of note, there are other groupings of mAbs characterized by their distinct RBD epitopes, for instance, seven RBD epitopes in (Hastie et al., 2021) and 12 distinct RBD epitopes in (Cao et al., 2022c). However, the 11 mAbs chosen for discussion in this review have been authorized by the U.S. Food and Drug Administration (FDA) for emergency use authorization (EUA) as therapeutic drugs, and they have been very well studied and characterized, including their neutralizing activity against various SARS-CoV-2 variants and the structural details of their antibody-antigen interfaces.

In SARS-CoV-2 VOC Alpha, the S protein, with only one RBD mutation, N501Y, showed significant enhancement in ACE2 affinity relative to the wildtype S protein [Δ−log(KD) = 0.64, 4.3-fold enhancement] (Fig. 1B). This observation explains the enhanced SARS-CoV-2 infection and transmission induced by the N501Y substitution (Liu et al., 2022). The Alpha variant showed a very low level of antibody evasion, with only a few mAbs losing neutralizing activity against it. For example, only etesevimab, which directly contacts the RBD mutation site in the S protein of Alpha, showed a mild reduction in neutralizing Alpha (mFRN = 10.2; IC50 = 150 ng/mL) (Fig. 1C).

In SARS-CoV-2 VOCs Beta and Gamma, the S proteins both possess mutations at the K417, E484, and N501 amino acid positions in the RBD and thus exhibit a similar mAb evasion pattern. Specifically, among the 11 FDA-authorized EUA antibodies, the same five mAbs, including casirivimab, etesevimab, tixagevimab, and bamlanivimab, showed reduced neutralization against both variants.

Of note, a higher mFRN value only indicates a greater reduction of neutralizing efficiency for the given mAb-variant pair relative to the wildtype control and should not be interpreted as an absolutely low level of neutralizing activity. For instance, two mAbs, cilgavimab (mFRN = 3) and imdevimab (mFRN = 1.98), had higher mFRN values against Delta than the mAb sotrovimab (mFRN = 1.3), but both cilgavimab (IC50 = 17.86 ng/mL) and imdevimab (IC50 = 19.71 ng/mL) exhibited much more potent neutralizing activity against Delta than sotrovimab (IC50 = 106.68 ng/mL) (Fig. 1C). Therefore, IC50 and mFRN values should be combined to analyze mAb neutralizing potency and to evaluate the evasion capacity of different variants.
The XBB.1.5 subvariant attained dominant global prevalence in early 2023. Preliminary data showed that, compared with its parental strain XBB.1, XBB.1.5 has achieved comparable mAb resistance and much stronger ACE2 affinity [Δ−log(KD) = 0.84]. However, many virological characteristics of XBB.1.5 have not been understood very well, and more detailed studies are still required to comprehensively assess its ACE2 affinity and antibody evasion level. For example, since the F486P mutation constitutes the only specific alteration in the XBB.1.5 RBD relative to XBB.1, the mechanism underlying the F486P-induced enhancement of ACE2 receptor binding needs to be further revealed structurally (Uriu et al., 2023; Yue et al., 2023).

Together, on the one hand, all VOCs discussed here strengthen, to varying degrees, their S-ACE2 binding affinity relative to the prototype, while on the other hand, the general trend of SARS-CoV-2 viral evolution to efficiently evade neutralization by antibodies elicited upon infection or vaccination is expected to persist. Accordingly, the assessment of both receptor affinity and antibody resistance capacity is crucial for the comprehensive analysis of newly emerged SARS-CoV-2 variants. In the future, the SARS-CoV-2 virus will keep mutating, leading to a shortage of antibodies for clinical treatment. Meanwhile, it is very possible that a novel SARS-CoV-2 variant with a superior ability to recognize ACE2 and evade antibody neutralization could trigger a new round of pandemic. Therefore, it is crucial to maintain vigilant surveillance of viral evolution and to keep developing broad neutralizing antibodies targeting conserved epitopes and neutralizing all known variants.

Key mutations for antibody resistance

Despite their distinct RBD mutation profiles, the newly emerging Omicron variants undergo convergent evolution, accumulating mutations at several hotspots (Addetia et al., 2023; Bouhaddou et al., 2023; Cao et al., 2023; Focosi et al., 2023; Martin et al., 2021; Qu et al., 2023b). This signature of SARS-CoV-2 evolution is mainly attributed to the pressure of the humoral immune response; thus, many point mutations induce viral resistance to mAbs (Chen et al., 2021; Greaney et al., 2021a, 2021b; McCallum et al., 2022; Planas et al., 2023; Tuekprakhon et al., 2022). To better analyze the impact of individual RBD point mutations on antibody evasion, we collected the published FRN values for a series of single mutants in the wildtype background and overlaid them onto plots containing VOC mutation positions (Fig. 2A). Because Beta and Gamma share an escape mutation at the E484 position within the Class-2 epitope, bamlanivimab showed a reduced neutralization efficacy against both variants.

Correspondingly, a mAb is expected to maintain its neutralization efficacy when the target variant possesses no individual mutation that resists the neutralizing activity of this mAb. For example, in the Delta variant, none of the RBD mutations showed significant resistance to sotrovimab, imdevimab, or bebtelovimab, leaving all three of these Class-3 mAbs still neutralizing against Delta (mFRN = 1.3, 2, and 1.4, respectively).
However, some escape mutations are outside of the antibody-antigen interface. For instance, the S371F substitution, located outside of the binding epitopes of these mAbs, exhibited moderate evasion of sotrovimab and imdevimab (mFRN = 12 and 50, respectively). Structural studies provided an explanation: this mutation results in the rearrangement of the RBD helix comprising residues 364 to 372, which adopts a distinct conformation (Park et al., 2022). Similarly, the substitutions S373P and S375F, outside of the interface of imdevimab, also induced conformational changes that alter antigenic characteristics, so recognition by imdevimab is hampered (mFRN = 4.4 and 4, respectively) (Cui et al., 2022). Thus, besides interface mutations, non-interface mutations might affect the global spike conformation to disrupt antibody neutralization.

Collectively, SARS-CoV-2 variants could exhibit resistance to a specific mAb only when they have acquired certain point escape mutations, and these point mutations individually contribute to the antibody evasion, irrespective of their S protein backbone. Therefore, it is concluded that a certain point mutation has comparable antibody evasion capacity regardless of the variant background. This phenomenon contrasts with the notion of "epistasis" observed for the mutations affecting ACE2 affinity, which will be discussed in the following section of this review.

Epistatic mutations for rescuing ACE2 affinity

The ACE2 affinity of the S protein is of primary importance for SARS-CoV-2 infection; thus, distinct VOC mutations work together to maintain the ACE2 affinity and even enhance it to varying degrees (Cao et al., 2022a; Chakraborty, 2022; Liu et al., 2022; Ma et al., 2023; McCallum et al., 2022; Ozono et al., 2021). We compared the Δ−log(KD) values of VOCs and examined the summed effects of each of their individual constituent mutations in the wildtype backbone (Fig. 3A).

The DMS profile showed that the N501Y substitution (the only mutation harbored in the Alpha RBD) conferred an 8.3-fold enhancement in ACE2 affinity [Δ−log(KD) = 0.92] (Fig. 3A).
This was consistent with the result of an individual SPR assay (Barton et al., 2021). However, the actual ACE2 affinity of the Alpha S protein was only 4.4-fold stronger than that of the wildtype S protein [Δ−log(KD) = 0.64]. This inconsistency could probably be explained by mutations outside of the RBD, which inhibit ACE2 binding capacity and lead to an overall decrease of ACE2 affinity.

Despite this, the cumulative impact of the RBD mutations in the Beta, Gamma, and Delta variants [Δ−log(KD) = 0.43, 0.58, and 0.14, respectively] aligned closely with the ACE2 affinities observed for their full-length complete spike proteins [Δ−log(KD) = 0.46, 0.68, and 0.04, respectively] (Fig. 3A). This observation explains the stronger ACE2 affinity of Gamma compared with Beta and Delta. Since the only difference in RBD sequence between Beta and Gamma is the K417 substitution (K417N harbored by Beta, K417T by Gamma), K417N in Beta led to a greater reduction in ACE2 affinity than K417T [Δ−log(KD) = −0.61 for K417N and −0.46 for K417T, respectively].

Together, among the RBD mutations in Alpha, Beta, Gamma, and Delta, N501Y exhibited the most significant enhancement in ACE2 affinity, while the K417N substitution led to the greatest reduction (Barton et al., 2021; Han et al., 2022; Mannar et al., 2021). For these SARS-CoV-2 variants, the mutations independently influence ACE2 affinity, and the overall ACE2 binding affinity is the combined effect of all individual mutations.

However, surprisingly, the fully mutated spike proteins of the Omicron subvariants managed to maintain or even strengthen their ACE2 affinity compared with the wildtype control, completely deviating from the summed effect of their constituent mutations (Fig. 3A). This obvious and typical "epistasis" phenomenon indicates complex interactions among the Omicron RBD mutations (Starr and Thornton, 2016). In other words, the actual effect of a single point mutation on ACE2 binding depends on the specific amino acid sequence of the whole spike protein. Taking the N501Y mutation as an example, it exerted an enhancement effect on ACE2 affinity for various SARS-CoV-2 variants but severely impaired ACE2 binding in SARS-CoV-1 or other sarbecoviruses (Starr et al., 2022c). Therefore, N501Y, as an RBD mutation, exhibited a distinct effect on ACE2 binding in different S protein backgrounds.
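The additive picture described for the pre-Omicron variants versus the epistatic picture for Omicron can be expressed as a one-line check: sum the single-mutation Δ−log(KD) values and compare the total with the value measured for the full spike. In the sketch below, the K417N and N501Y effects and the Beta full-spike value are quoted from the text, while the E484K effect is an inferred placeholder chosen only to reproduce the quoted Beta RBD sum of 0.43.

```python
# Additive expectation vs. measured ACE2 affinity for the Beta spike.
single_effects = {
    "K417N": -0.61,  # quoted above
    "E484K": +0.12,  # placeholder inferred from the quoted sum of 0.43
    "N501Y": +0.92,  # quoted DMS value
}

additive = sum(single_effects.values())
measured_full_spike = 0.46  # Beta full-length spike, quoted above

epistasis = measured_full_spike - additive
print(f"additive prediction : {additive:+.2f}")
print(f"measured full spike : {measured_full_spike:+.2f}")
print(f"epistatic deviation : {epistasis:+.2f}")  # near zero -> additive
```

For Beta the deviation is close to zero, matching the additive behavior described above; running the same check with an Omicron mutation set would yield a large positive deviation, which is exactly the epistasis signature.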
On the other hand, certain mutations could also affect ACE2 affinity very differently in an N501Y-positive background versus an N501Y-negative background. In a recent study, DMS analysis examined the effects of a series of single mutations in different VOC backgrounds and revealed that the epistatic effects on ACE2 binding are mainly attributable to the N501Y substitution (Starr et al., 2022a). Specifically, in the N501Y-containing Alpha (N501Y only in the RBD) and Beta (K417N, E484K, and N501Y in the RBD) backgrounds, several RBD mutations, such as Q498R, one of the RBD mutations in Omicron, showed significantly stronger ACE2 binding than in the N501Y-absent wildtype background, suggesting that these RBD sites exhibit positive epistatic shifts in the presence of N501Y (Starr et al., 2022a). Conversely, RBD mutations in the Delta (L452R and T478K in the RBD) background barely altered the ACE2-binding affinity compared with those in the wildtype RBD background, suggesting that no such epistatic shift occurs for RBD mutations in the absence of the N501Y substitution (Starr et al., 2022a). Since most of the prevalent Omicron sublineages possess the N501Y mutation, its strong epistatic effect might reverse the negative effect on ACE2 binding of other RBD mutations and be of great significance in maintaining the overall ACE2 affinity (Fig. 3A).

Systematic evaluation of the effect of point mutations on ACE2 affinity in different variant contexts, even before their actual appearance, successfully unravels these epistasis phenomena. However, as new point mutations emerge and accumulate in newly emerged SARS-CoV-2 variants, it becomes crucial to understand the effect of not only single but also combinational mutations.

A recent study constructed a mutagenesis library containing all possible combinations of the 15 mutations in the RBD of the BA.1 variant (a total of 2^15 = 32,768 genotypes) and measured their ACE2 binding affinity to capture the epistatic interactions among these mutations (Moulana et al., 2022). Although all these mutational variants exhibited ACE2 binding, only a small proportion showed an enhanced ACE2 binding affinity compared with the wildtype RBD (Fig. 3B, left). Interestingly, among the 321 (the top 1%) genotypes with extraordinarily strong ACE2 binding capacity, the relative mutation compositions (%) at the 15 BA.1 mutation sites are radically different (Fig. 3B, right). For example, all 321 sequences had Q498R and N501Y, indicating that the top 1% of variants with a superior ACE2 affinity exhibited a strong preference for the Q498R-N501Y double mutation (Moulana et al., 2022) (Fig. 3B). This observation is consistent with a 387-fold enhancement of ACE2 affinity for the RBD containing this double mutation (Starr et al., 2022a).
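The logic of that combinatorial-library analysis is easy to emulate. The sketch below enumerates all 2^15 genotypes over the 15 BA.1 RBD mutations, scores each with a toy additive model plus a single positive epistatic term for the Q498R-N501Y pair, and reports how often each mutation appears among the top 1% of binders. Every effect size here is an invented placeholder rather than a measured value, and this top 1% slice contains 327 genotypes rather than the 321 reported in the study.

```python
from itertools import product

MUTATIONS = ["G339D", "S371L", "S373P", "S375F", "K417N", "N440K", "G446S",
             "S477N", "T478K", "E484A", "Q493R", "G496S", "Q498R", "N501Y",
             "Y505H"]
EFFECTS = [0.1, -0.2, 0.0, -0.5, -0.6, 0.1, -0.1,
           0.0, 0.0, -0.2, -0.1, -0.2, -0.2, 0.9, -0.3]
effect = dict(zip(MUTATIONS, EFFECTS))
I_Q498R, I_N501Y = MUTATIONS.index("Q498R"), MUTATIONS.index("N501Y")

def score(genotype):
    """Toy additive delta-log(KD) plus one positive epistatic term."""
    s = sum(effect[m] for m, on in zip(MUTATIONS, genotype) if on)
    if genotype[I_Q498R] and genotype[I_N501Y]:
        s += 1.0  # placeholder epistasis for the Q498R-N501Y pair
    return s

library = list(product((0, 1), repeat=len(MUTATIONS)))  # 32,768 genotypes
top = sorted(library, key=score, reverse=True)[: len(library) // 100]

for i, m in enumerate(MUTATIONS):
    freq = sum(g[i] for g in top) / len(top)
    print(f"{m}: {freq:.0%} of top binders")
```

Even with these made-up numbers, the Q498R-N501Y pair dominates the top slice whenever its joint bonus outweighs the individual penalties, which mirrors the selection preference described above.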
We further analyzed these mutation biases (ratio values in Fig. 3B) in combination with the effect of these individual mutations in the wildtype background (Fig. 3A) and found two scenarios (Fig. 3C). (1) The effect of the individual mutation on ACE2 affinity is consistent with the selection preference observed among the top 1% ACE2-binding variants. For example, a K478 substitution showed no alteration in ACE2 binding when individually introduced into the wildtype backbone [Δ−log(KD) = 0]. Consistently, no apparent preference was observed between T478 and K478 among the 321 genotypes of the top 1% ACE2-binding variants. Another example is K417. Among the 321 genotypes of the top 1% ACE2-binding variants, K417 accounted for 97%. Correspondingly, compared with the N417 substitution in a wildtype background [Δ−log(KD) = −0.61], K417 showed stronger ACE2 binding. (2) An inconsistency was observed between the individual effect and the selection preference. For example, a F375 substitution largely dampened ACE2 binding when individually introduced into the wildtype backbone [Δ−log(KD) = −0.53]. However, among the 321 genotypes of the top 1% ACE2-binding variants, F375 accounted for 99% in the presence of the Q498R-N501Y double mutation, suggesting a beneficial effect of F375 on ACE2 affinity in this context. Another example is D339, which boosted ACE2 binding in the wildtype backbone but showed no preference compared with G339 among the 321 top 1% variants.

Together, these results suggest that a reduction effect on ACE2 binding could be compensated or reversed by the presence of Q498R-N501Y (scenario 2 in Fig. 3C), emphasizing the positive epistasis effect induced by the Q498R-N501Y double mutation, although several mutations showed a similar impact to that seen in the wildtype (scenario 1 in Fig. 3C).

Q498R and N501Y collaborate to exert a potent epistasis effect on maintaining ACE2 affinity, and they also confer resistance to etesevimab (mFRN = 4 and 6, respectively), suggesting that some mutations could simultaneously contribute to ACE2 binding and antibody evasion. Apparently, combinatorial assembly studies assessing the ACE2 binding affinity of all possible mutation combinations are a valuable tool to discover the variants with the utmost ACE2 affinity and to pinpoint mutations inducing positive epistatic effects (Fig. 3D).
It is worth noting that certain BA.1 mutations, such as K417N (for antibody evasion) and Y505H (for mouse ACE2 adaptation), have no enhancement effect on ACE2 affinity or even harm the ACE2-S binding affinity. However, these mutations are consistently preserved in recently emerged SARS-CoV-2 variants (Yuan et al., 2021; Zhang et al., 2022), suggesting that enhancing ACE2 affinity is not the only direction of SARS-CoV-2 evolution. Considering the many mutations formed for antibody evasion or host adaptation, the appearance of positive epistatic mutations could effectively mitigate the harmful effect of these mutations on ACE2 affinity; as a result, maintenance of sufficient ACE2 binding affinity provides more potential for various S protein mutations to confer more significant survival advantages (Javanmardi et al., 2022; Zhang et al., 2022).

Conclusions and perspectives

Random sequence alterations introduce amino acid mutations into the spike protein of SARS-CoV-2. Combined with the selective pressure from host humoral and cellular immune responses, the SARS-CoV-2 virus continues to evolve, and novel variants with specific mutation profiles keep emerging. A series of RBD mutations display a significant antibody evasion capacity, and some of these mutations meanwhile dramatically weaken the ACE2 binding activity (Fig. 4). However, for the full-length spike of various variants, this weakening effect is counterbalanced by the epistatic effect of specific mutations, such as the Q498R-N501Y double mutation (Fig. 4B). This highlights the significant role of epistasis in mitigating the deleterious effects against ACE2 affinity and maintaining sufficient ACE2-binding capacity for viral fitness.

Together, such phenomena emphasize that the presence of these epistatic mutations, due to their beneficial effect in maintaining ACE2 affinity, could simultaneously restrict the potential trajectories of future viral evolution. The complicated interactions among mutations call for more intensive investigation so that we can better understand their combinational effects on overall viral characteristics.
As SARS-CoV-2 continues to evolve, it is worth noting that the overall pattern of epistasis may also drift due to sequence variation. A recent DMS assay evaluated the epistatic shift between the BA.2 and XBB.1.5 RBD backbones and identified three sites (453, 455, and 456) with a significant functional alteration under an epistatic effect (Taylor and Starr, 2023). Specifically, the same F456L mutation decreased ACE2 affinity in the BA.2 background but enhanced ACE2 binding in the XBB.1.5 backbone. Combined with the fact that both the single F456L mutation and the "Flip" mutations (L455F-F456L double mutation) render extraordinary immune evasion capacity, it is not surprising to see these mutations frequently detected in recently emerged XBB sublineages, like EG.5 and HK.3 (Dyer, 2023; Faraone et al., 2023; Kaku et al., 2023; Qu et al., 2023c; Wang et al., 2023a; Zhang et al., 2023). In conclusion, the deleterious effect of escape mutations on ACE2 binding might be reversed by a specific epistatic background, which again underscores an evolutionary trajectory that keeps a balance between ACE2 affinity and immune evasion capacity (Fig. 4).

Furthermore, as data accumulate from standardized experiments, a more comprehensive assessment of viral characteristics could be accomplished by big-data analysis. Recently, a generalizable modular framework named EVEscape has shown satisfying and promising results in predicting escape mutations (Rochman and Koonin, 2023). This framework combines a deep generative model trained on historical viral sequences with structural and biophysical information to evaluate the escape potential of specific mutations (Thadani et al., 2023). Similarly, high-quality datasets reflecting the ACE2 affinity as well as the mAb evasion capacity of distinct spike sequences may also serve as vital resources for training deep learning models to more precisely predict the future trajectory of viral evolution.

Balance between ACE2 affinity and mAb evasion

Nevertheless, in the measurement of ACE2 affinity and antibody evasion capacity, one limitation should be taken into consideration. For KD values, distinct assays (SPR, BLI, or DMS), recombinant proteins (full-length trimeric spike protein, spike ECD protein, or recombinant RBD protein), and forms of the ACE2 ligand (dimeric or monomeric) are all parameters that could lead to data variability for the same variant (Barton et al., 2021; Javanmardi et al., 2021; Liu et al., 2022; Ramanathan et al., 2021). Similarly, although neutralization results from different systems tend to correlate highly (Cox et al., 2023; Riepler et al., 2020), various experimental conditions (pseudovirus concentration, host cell type, experimental output, and so on) during neutralization assays could also generate considerable variability in FRN values (Chen et al., 2021; Schmidt et al., 2020; Wang et al., 2022). Although we tried to calibrate these values by utilizing the data of a control virus strain (usually the Wuhan-Hu-1 or B.1 sequence), a standardized affinity measurement and neutralization assay still need to be established for better comparison in the future (Knezevic et al., 2022).

In this review, we focus on RBD mutations, as this region accounts for direct contact with ACE2 and is targeted by the majority of therapeutic mAbs, and we mainly discuss how the RBD mutations achieve the delicate balance between ACE2 affinity and antibody evasion (Fig. 4B).
Besides, mutations within the RBD could affect other spike characteristics, such as protein expression and spike stability (Gupta et al., 2021; Kemp et al., 2021; Kumar et al., 2021). Moreover, mutations outside of the RBD could also confer functional alterations and ultimately promote viral survival (Iketani et al., 2023; Kabinger et al., 2021; Liu et al., 2022; Saito et al., 2022). For instance, mutations in the amino-terminal domain (NTD) of the spike protein might disturb antibody recognition (Chi et al., 2020; Ray et al., 2021), and mutations in the spike S2 domain, such as A942S, promote virus-host membrane fusion (Yang et al., 2022). D614G, as the most prevalent mutation of SARS-CoV-2, efficiently promotes S-ACE2 affinity, increases the RBD "up" (open) state, and enhances S1/S2 junction proteolysis, thereby contributing to SARS-CoV-2 fitness (Gobeil et al., 2021; Korber et al., 2020; Ozono et al., 2021; Plante et al., 2021). Obviously, a comprehensive assessment of ACE2 affinity and antibody evasion during viral evolution cannot be accomplished by analyzing RBD mutations alone, since non-RBD mutations can also induce diverse effects. Therefore, it remains necessary to routinely monitor viral prevalence to timely identify dominant variants and to thoroughly assess the biological effects of newly emerging point mutations and mutational combinations.

Figure 2. Epitope mutations drive resistance to antibody neutralization. (A) Geometric mean fold reduction in neutralization (mFRN) data for each class of monoclonal antibody (mAb). The mFRN values (y-axis) were determined using wildtype pseudoviruses containing only one single mutation in the RBD of the S protein (positions 305-534, x-axis) (Shang et al., 2020), with the wildtype pseudovirus as the reference control. Each dot represents the mFRN value of one mutation-mAb pair (mutation sites refer to Fig. 1A). The colors of the dots represent the corresponding tested mAbs. The horizontal dashed line shows the mFRN = 3 threshold. Mutation sites of the representative VOCs Beta, Gamma, Delta, and Omicron XBB.1 (see Fig. 1A) are indicated by vertical red lines. The full dataset of mFRN values for each mAb in the presence of single RBD mutations is summarized in Supplementary Data File 3. (B) Total escape scores of bebtelovimab (LY-CoV1404) determined by a full spike deep mutational scanning system at each site in the BA.1 RBD. A detailed explanation of the escape score can be found in the reference (Yu et al., 2022). An interactive data set is available at Github. (C) X-ray crystal structure of the bebtelovimab Fab bound to the S protein RBD (PDB 7MMO). The rectangular region indicates the Wuhan-Hu-1 RBD epitope recognized by bebtelovimab and is shown as ribbons in the zoomed view. The key escape sites, such as N450 and P499, correspond to the region (sites 444-450 and 499) with escape score > 10 (see Fig. 2B). (D) Key escape sites in (C) are shown as atoms.
Figure 3. Potent epistatic mutations reverse the deleterious summed effect on ACE2 affinity. (A) Heat map depicting the relative enhancement in ACE2 affinity for variants of concern (VOCs) (top row) and individual mutations (other rows). Each column shows data for one variant: Alpha, Beta, Gamma, Delta, BA.1, BA.2, BA.4/5, BA.2.75, BQ.1.1, XBB.1, and XBB.1.5. The data in each row are the calculated change in the negative logarithm of KD [Δ−log(KD)], presenting the effect of mutations on ACE2 binding affinity. These values are calculated by comparing the KD value of a mutated S protein with that of a wildtype S protein under the same experimental condition. The effect of each constituent VOC mutation individually on a wildtype background, together with their summed effect, is shown. In the left-most column of the table, the original amino acids of the S protein and their corresponding positions are shown; the amino acids directly contacting ACE2 (K417, F486, Q493, G496, Q498, N501, and Y505) are highlighted. Different colors represent the levels of change in ACE2 affinity: strong enhancement [Δ−log(KD) > 0.8]; moderate enhancement [Δ−log(KD) = 0.5-0.8]; mild enhancement [Δ−log(KD) = 0.3-0.5]; slight enhancement [Δ−log(KD) < 0.3]; slightly decreased affinity [Δ−log(KD) = −0.5 to 0]; moderately decreased affinity [Δ−log(KD) = −0.5 to −0.8]; strongly decreased affinity [Δ−log(KD) < −0.8]. "−" indicates no mutation at this position for the corresponding variant/column. (B) Systematic analysis of ACE2 binding affinity for RBD proteins containing all possible combinations of the 15 mutations in the RBD of the BA.1 variant. Left: distribution of ACE2 binding affinity across all possible mutational intermediates of BA.1 (N = 2^15 = 32,768 RBD genotypes tested). Binding affinity is shown as −log(KD). The vertical lines indicate the −log(KD) for the wildtype Wuhan-Hu-1 strain and the Omicron BA.1 variant, respectively. An interactive data browser is available at Github. Right: among the top 1% of intermediates with a superior ACE2 affinity (a total of 321 genotypes), the relative proportions (%) of each amino acid are shown at all 15 mutation sites. At these RBD positions, the amino acids for the BA.1 variant and the corresponding amino acids in the wildtype Wuhan-Hu-1 strain are also shown. Colors depict the genotype preference at each position: preferred (genotype proportion > 60%); no apparent preference (genotype proportion = 40%-60%); unappreciated (genotype proportion < 40%). (C) Two scenarios when comparing the effect of individual BA.1 constituent mutations in the wildtype backbone [Δ−log(KD) values from Fig. 3A] with their proportions among the top 1% of variants with the highest ACE2 affinity (genotype proportions from Fig. 3B). (D) Co-crystal structure of the Omicron BA.1 RBD and the ACE2 receptor (PDB ID 7WPB). Mutated residues are shown, and their surfaces are colored as the corresponding residues in Fig. 3B.

Figure 4. Maintaining the balance between ACE2 affinity and mAb evasion for viral fitness. (A) To inhibit infection, a neutralizing antibody competitively binds to the SARS-CoV-2 RBD, abolishing the interaction between the viral S protein and the host receptor ACE2. Created with Biorender. (B) Upper left: in the presence of a certain escape mutation, a neutralizing antibody might lose its neutralizing capacity. However, some escape mutations could also decrease the ACE2-S binding affinity, reducing viral infectivity and fitness. Upper right: some RBD mutations might exert epistatic effects to enhance ACE2 binding. Bottom: for each SARS-CoV-2 variant, a delicate balance needs to be kept between its immune evasion capacity (achieved through escape mutations) and sufficient ACE2 binding affinity (maintained by epistatic mutations). Created with Biorender.
Findings from a Qualitative Survey in the Asia-Pacific Region on Maternal and Perinatal Death Surveillance and Response (MPDSR) and Maternal Health Service Disruptions During the COVID-19 Pandemic 2020-2021

The global maternal mortality ratio (MMR) decreased by approximately 38% in two decades, with reductions accelerating prior to 2020. The COVID-19 pandemic has caused major health system interruptions, and the direct and indirect consequences of this have worsened maternal and neonatal outcomes. The Maternal and Perinatal Death Surveillance and Response (MPDSR) system, used in countries with a high maternal mortality burden, and the "Near Miss" system, for countries such as Australia with very low maternal mortality, have been identified by the World Health Organisation (WHO) as essential interventions to mitigate against the indirect effects of COVID-19 on maternal and perinatal outcomes. We undertook a rapid stocktake process to understand the impact of COVID-19 on service provision and MPDSR in the Asia-Pacific Region, where the majority of countries experience high maternal mortality. Data were collected by a survey of 22 countries utilising a Likert scale measuring respondents' agreement with statements regarding MPDSR practices and health service disruptions. The most frequently reported disruptions to MPDSR systems were lack of completion or delay of death reviews at both facility and country level and decreases in the number of community death notifications. Redeployment of both midwives and those responsible for MPDSR activities were identified as key issues. Other COVID-19-related service disruptions included reduced attendance at facilities for birthing, shortages of life-saving drugs, reduced operating theatre availability, and difficulty accessing emergency transport. Alongside evidence from other epidemics and emerging evidence about the global impact of the COVID-19 pandemic on maternal and newborn outcomes, the survey results indicate continued disruptions to essential maternal and newborn services. Midwives working together as part of a team, and supported to provide clinical care and roles in health improvement systems such as MPDSR, have the capacity to ensure that the gains made in tackling maternal and perinatal death will not be undone.

O10. Ms Kara Blackburn 1, Ms Rachel Smith 1, Caroline Homer 1, Ms Catherine Breen Kamkong 2, Animesh Biswas 3. 1 Burnet Institute, Melbourne, Australia; 2 UNFPA Asia Pacific Regional Office, Bangkok,
http://dx.doi.org/10.1016/j.wombi.2022.07.016

RHW, Hoxton Park, Australia. Aim: To provide a contemporary, evidence-based and consumer-driven maternity service that ensures each woman achieves her optimal outcome. Description: Australian healthcare services measure the quality of their maternity care via clinical outcomes, e.g., perinatal statistics, Healthcare Acquired Conditions (HACs), and clinical indicators. The increasingly broad variation in maternal demographics between health services Australia-wide makes accurate benchmarking difficult, e.g., rates of obesity, assisted fertility, maternal age >35 years, and poor education and socioeconomic status. A tertiary maternity service in NSW reviewed how women Australia-wide measure their healthcare during pregnancy, labour and birth. The search identified that the factors that are important to women do not correlate with the priorities of healthcare services. The maternity service responded by reviewing how it would provide maternity care through a different lens. Core business was no longer '4000 births' per annum but became the creation of '4000 mothers' per annum. The team ensured that the evidence-based strategies that impact physical birth outcomes were in place and aligned with evidence, e.g., models of care, medical and non-medical interventions, and provision of information. However, a new approach saw the hospital team aligned in their language, energy and actions when interacting with the woman. This approach brought a new philosophy and energy to the team, impacting the emotional outcome for each woman following her birth experience. Rationale: Obstetric interventions have been increasing nationwide for a number of decades with no demonstrated improvement in outcomes and, in some cases, a deterioration, e.g., increasing rates of postpartum haemorrhage. An alternate strategy was required that would engage women and medical and midwifery clinicians and optimise each woman's pregnancy and birth outcome. Implications: A multidisciplinary hospital team aligned in their vision and belief resulted in the creation of strong and confident mothers… 'this is the moment I dreamed about'.
Background: While there is awareness that multiple birth pregnancies and postnatal experiences are generally more challenging, little is known of the mental health impacts. Aim: To explore multiple birth mothers' pregnancy experiences and mental health outcomes during pregnancy, following delivery, and postnatally. Methods: An open online anonymous survey was used to collect data from multiple birth parents; 1006 responses were collected. Of these, 713 completed the survey fully, providing very detailed responses to open-ended questions, whilst 293 provided high-level responses only. Findings: The challenges of a multiple birth pregnancy were associated with high levels of mental distress and mental health problems. 73.3% of respondents noted that they experienced challenges during their pregnancy, and of these, 84.7% cited these challenges as directly impacting their emotional or mental health. Despite the challenges, 70% of these respondents did not seek treatment or a diagnosis. At birth, 73.7% of those surveyed had a caesarean delivery and another 2.3% had at least one baby delivered via caesarean. Almost 28% of respondents reported experiencing a traumatic birth, with over 60% not seeking support or treatment.
Polyphasic characterisation of three new Phyllosticta spp. Three new species of Phyllosticta, P. hostae on Hosta plantaginea (China), P. schimae on Schima superba (China), and P. ilicis-aquifolii on Ilex aquifolium (UK), are described and illustrated in this study. They are compared with morphologically similar and phylogenetically closely related species. A polyphasic approach using phylogeny, host association, disease symptoms, colony and morphological characteristics, is employed to justify the introduction of the new taxa. Phylogenetic relationships of the new species with other Phyllosticta species are revealed by DNA sequence analyses based on the nrDNA-internal transcribed spacer (ITS) regions and a combined multilocus alignment of the ITS, partial translation elongation factor 1-alpha (TEF1), actin (ACT), and glyceraldehyde 3-phosphate dehydrogenase (GPDH) gene regions.

INTRODUCTION
Many Phyllosticta (teleomorph Guignardia) species cause plant diseases such as leaf spots, leaf blotch, as well as black spots and lesions on fruits of various plants (van der Aa & Vanev 2002). These plant pathogenic fungi may cause serious damage to the host plant through reduced photosynthetic ability and premature leaf or fruit fall (Glienke-Blanco et al. 2002, Baldassari et al. 2008). Phyllosticta species have also been recorded as endophytes and saprobes on a wide range of host plants (Baayen et al. 2002, van der Aa & Vanev 2002, Okane et al. 2003, Wulandari et al. 2009, Glienke et al. 2011). The generic circumscription of Phyllosticta as defined by van der Aa (1973) has been widely accepted (Bissett 1979, 1986, Yip 1989, Crous et al. 2006, Motohashi et al. 2008, Glienke et al. 2011, Wikee et al. 2011). The main characters are: pycnidial conidiomata, holoblastic conidiogenous cells with percurrent proliferation, conidia aseptate, surrounded by a mucilaginous sheath, and provided with an apical extracellular appendage, a Guignardia sexual state, and a Leptodothiorella spermatial state (van der Aa 1973, van der Aa & Vanev 2002). According to these criteria, van der Aa & Vanev (2002) reconsidered 2 936 names in Phyllosticta, accepting 141 species based on original literature and a re-examination of herbarium specimens. About 50 % of the species were reclassified in Phoma, 20 % in Asteromella, 5 % in Phomopsis, and c. 18 % in other coelomycetous genera or other taxonomic groups. Some Phyllosticta species have been linked to their teleomorph states; for example, P. ampelicida is the anamorph of G. bidwellii (van der Aa 1973), but most appear to be asexual. Recent changes to the rules that govern fungal nomenclature require that only one name for a single biological species should be used instead of different names for different morphs (Hawksworth et al. 2011, Wingfield et al. 2012). The earlier and well-known generic name Phyllosticta (Persoon 1818) thus has priority over Guignardia (Viala & Ravaz 1892), as followed by Glienke et al. (2011). The systematics of Phyllosticta species has long been problematic because of the limited morphological characters and the unreliable use of host-association-based nomenclature. Polyphasic approaches combining morphological characters and phylogenetic relationships can resolve species relationships, based on which a natural classification could be established (Wulandari et al. 2009, Glienke et al. 2011).
Although the rDNA internal transcribed spacer (ITS) locus has some resolution at species level, it is insufficient for separating cryptic species in Phyllosticta (Wulandari et al. 2009, Glienke et al. 2011). Therefore, multilocus phylogenetic analyses have been increasingly used for species discrimination in this genus (Wulandari et al. 2009, Glienke et al. 2011, Wang et al. 2012). For example, it was shown that G. mangiferae is a distinct taxon from P. capitalensis, which is a species complex awaiting more detailed phylogenetic study (Glienke et al. 2011). In the present study, three new species of Phyllosticta are described based on morphological characters and phylogenies derived from ITS and combined multilocus gene sequences.

Isolates
Phyllosticta species were isolated from diseased leaves of ornamental or forest plant species from China and the United Kingdom. Infected leaves were incubated in moist chambers at room temperature to induce sporulation. Pure cultures were obtained by single spore isolation as described by Choi et al. (1999). Alternatively, 5 × 5 mm pieces of surface-sterilised tissue were taken from the margin of leaf lesions and were consecutively immersed in 70 % ethanol solution for 1 min, sodium hypochlorite solution with 3 % available chlorine for 2 min, rinsed in sterile distilled water, blotted dry in sterile paper towels and incubated on 2 % potato-dextrose agar (PDA) (Cai et al. 2009).

Morphology
Cultures were grown on PDA for microscopic examination. Fungal structures were mounted on glass slides in clear lactic acid, and studied by means of a light microscope. Colony morphologies were assessed after 7 d growth on PDA, and colours rated according to the colour charts of Rayner (1970).

DNA extraction, PCR amplification and sequencing
Mycelial discs were taken from actively sporulating areas near the growing edge of 10 d old cultures and transferred to PDA. Genomic DNA was extracted with a Biospin Fungus Genomic DNA Extraction Kit (Bioer Technology Co., Ltd., Hangzhou, P.R. China) according to the manufacturer's protocol. Quality and quantity of DNA were estimated visually by staining with GelRed after 1 % agarose gel electrophoresis. The ITS1 and ITS4 primer pair (White et al. 1990) was used to amplify the ITS region following the procedure described by White et al. (1990). The primers EF1-728F and EF1-986R (Carbone & Kohn 1999) were used to amplify a partial fragment of the translation elongation factor 1-α gene (TEF1); the primers ACT-512F and ACT-783R (Carbone & Kohn 1999) were used to amplify a partial fragment of the actin gene (ACT); the primers GDF1 (Guerber et al. 2003) and Gpd2-LM (Myllys et al. 2002) or GDR1 (Guerber et al. 2003) were used to amplify a partial fragment of the glyceraldehyde 3-phosphate dehydrogenase gene (GPDH). Amplification conditions followed Arzanlou et al. (2008). DNA sequencing was performed at the SinoGenoMax Company Limited, Beijing.

Sequence alignment and phylogenetic analyses
Sequences from forward and reverse primers were aligned to obtain a consensus sequence. Sequences of our isolates, together with reference sequences obtained from GenBank (Table 1), were aligned using Clustal X (Thompson et al. 1997). The separate ITS and the combined multilocus alignments were manually optimised in BioEdit 7.0.9.0 for maximum alignment and minimum gaps (Hall 1999). Both these alignments were subjected to phylogenetic analyses. Phylogenetic analyses were performed using PAUP v. 4.0b10 (Swofford 2003).
Ambiguously aligned regions were excluded from all analyses. An unweighted parsimony (UP) analysis was performed. Trees were inferred using the heuristic search option with TBR branch swapping and 1 000 random sequence additions; branches of zero length were collapsed and all equally most parsimonious trees were saved. Descriptive tree statistics such as tree length [TL], consistency index [CI], retention index [RI], rescaled consistency index [RC], and homoplasy index [HI] were calculated for the trees generated. Clade stability was assessed in a bootstrap analysis with 1 000 replicates, each with 10 replicates of random stepwise addition of taxa. A Shimodaira-Hasegawa test (SH test) (Shimodaira & Hasegawa 1999) was performed in order to determine whether trees were significantly different. Trees were visualised in TreeView v. 1.6.6 (Page 1996). For the Bayesian analyses, the models of evolution were estimated by using MrModeltest v. 2.3 (Nylander 2004). Posterior probabilities (PP) (Rannala & Yang 1996, Zhaxybayeva & Gogarten 2002) were determined by Markov Chain Monte Carlo sampling (BMCMC) in MrBayes v. 3.0b4 (Huelsenbeck & Ronquist 2001), under the estimated model of evolution. Six simultaneous Markov chains were run for 1 000 000 generations and trees were sampled every 100th generation (resulting in 10 000 total trees). The first 2 000 trees, representing the burn-in phase of the analyses, were discarded and the remaining 8 000 trees were used for calculating posterior probabilities (PP) in the majority rule consensus tree. Novel sequence data were deposited in GenBank (Table 1), alignments in TreeBASE (www.treebase.org, submission no.: 12430), and taxonomic novelties in MycoBank (Crous et al. 2004).

RESULTS
Phylogenetic relationships were inferred using the ITS alignment, and the combined ITS, TEF1, GPDH, and ACT sequence alignment. The 67 ITS sequence dataset from 52 taxa comprised 517 characters after alignment. Of these, 252 characters were parsimony informative, 47 were variable and parsimony-uninformative, and 218 were constant. Parsimony analysis generated two trees, and one of the equally most parsimonious trees with shorter tree length (TL = 935, CI = 0.539, RI = 0.827, RC = 0.446, HI = 0.461) was selected and shown in Fig. 1. For the Bayesian analyses, model (GTR+I+G) was selected in MrModeltest 2.3. The branches with significant Bayesian posterior probability (≥ 95 %) were thickened in the phylogenetic tree. All three species described as new in this manuscript appear in distinct lineages (Fig. 1). The combined datasets of ITS, TEF1, GPDH, and ACT contained 32 combined sequences from 18 taxa and comprised 1 791 characters after alignment. Of these, 407 characters were parsimony informative; 129 were variable and parsimony-uninformative, and 1 255 were constant. The parsimony analysis generated three equally most parsimonious trees and the tree with shortest tree length (TL = 1051, CI = 0.669, RI = 0.841, RC = 0.562, HI = 0.331) was selected and shown in Fig. 2. For the Bayesian analyses, the best-fit model (GTR+I+G) was selected in MrModeltest 2.3. The branches with significant Bayesian posterior probability (≥ 95 %) were thickened in the phylogenetic tree. Similarly, all three species appear in distinct lineages (Fig. 2). Etymology. Named after its host, Hosta plantaginea. The phylogenetic tree generated from a multilocus sequence alignment showed that the three strains of P. hostae constituted a distinct lineage with 100 % bootstrap support (Fig. 2).
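The descriptive tree statistics quoted above follow standard parsimony definitions: CI = minimum possible steps / observed steps, HI = 1 − CI, RI = (maximum steps − observed steps)/(maximum steps − minimum steps), and RC = CI × RI. The Python sketch below reproduces the published ITS values; the minimum and maximum step counts used here are back-calculated for illustration and are assumptions, not numbers reported in the study. The burn-in arithmetic of the Bayesian run is included as well.

```python
def tree_stats(obs_steps: int, min_steps: int, max_steps: int) -> dict:
    """Standard parsimony tree statistics:
    CI = min/obs, HI = 1 - CI, RI = (max - obs)/(max - min), RC = CI * RI."""
    ci = min_steps / obs_steps
    ri = (max_steps - obs_steps) / (max_steps - min_steps)
    return {"TL": obs_steps, "CI": round(ci, 3), "RI": round(ri, 3),
            "RC": round(ci * ri, 3), "HI": round(1 - ci, 3)}

# min_steps and max_steps are back-calculated (assumed) so the output
# matches the published ITS tree: TL = 935, CI = 0.539, RI = 0.827.
print(tree_stats(obs_steps=935, min_steps=504, max_steps=2995))

# Bayesian burn-in arithmetic as described in the Methods:
# 1,000,000 generations sampled every 100 gives 10,000 trees;
# discarding the first 2,000 leaves 8,000 for posterior probabilities.
sampled = 1_000_000 // 100
retained = sampled - 2_000
print(sampled, retained)  # 10000 8000
```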
DNA sequence analysis showed that P. hostae was most closely related to P. citribraziliensis, P. cussonia, P. hypoglossi, P. spinarum, and P. vaccinii (teleomorph Guignardia vaccinii). Of these species, P. vaccinii is morphologically most similar, but the ex-type strain (CBS 126.22) shares only 94 % identity with P. hostae in ITS sequence. Phyllosticta vaccinii was isolated from the leaves of Vaccinium arboretum. The pycnidia of P. vaccinii are larger (80-175 μm vs 40-150 μm) than those of P. hostae, and the conidia are slightly smaller (8-12 × 5-8 μm vs 8-15 × 5-9 μm). In addition, the appendages of P. vaccinii can be up to 17 μm long, while those of P. hostae are less than 8 μm (van der Aa 1973).

DISCUSSION
In this paper we have described and named three new Phyllosticta species based on morphological and molecular characters. Each has morphological characters typical for Phyllosticta, i.e., stromatic conidiomata, holoblastic conidiogenesis, one-celled conidia provided with a surrounding mucoid layer and an apical appendage (Motohashi et al. 2008, Wulandari et al. 2009, Glienke et al. 2011, Wang et al. 2012). In our study, the new species were compared with other species reported from the same host family and species that are morphologically and phylogenetically closely related. These results showed that the three species are distinct, representing novel taxa. Jin (2011) reported that the conidial appendages of some Phyllosticta species might disappear with time or elongate when mounted in water. Therefore, fresh cultures were used for morphological observations, and the conidial appendages were not given undue significance in species delimitation. In this study, the morphological comparisons were made mainly based on other characters, e.g., the shape and size of conidia, pycnidia, and conidiogenous cells. Although the generic concept of Phyllosticta as defined by van der Aa (1973) is extensively accepted, the identification of species is still difficult due to the limited morphological characters that can be used for comparison. Recent molecular studies have revealed the ambiguity of taxonomy based on morphological characters and host associations (Wulandari et al. 2009, Glienke et al. 2011). Multilocus phylogenetic analysis has been shown to be more useful in predicting natural species relationships in the genus (Motohashi et al. 2009, Wulandari et al. 2009, Glienke et al. 2011). Traditionally applied phenotypic characters (host, symptom, colony characteristics, and morphology) should therefore be re-evaluated for their taxonomic usefulness in light of phylogenetic relationships (Hyde et al. 2010).
Theranostic PSMA ligands with optimized backbones for intraoperative multimodal imaging and photodynamic therapy of prostate cancer

Abstract
Introduction: The first-generation ligands for prostate-specific membrane antigen (PSMA)-targeted radio- and fluorescence-guided surgery followed by adjuvant photodynamic therapy (PDT) have already shown the potential of this approach. Here, we developed three new photosensitizer-based dual-labeled PSMA ligands by crucial modification of existing PSMA ligand backbone structures (PSMA-1007/PSMA-617) for multimodal imaging and targeted PDT of PCa.
Methods: Various new PSMA ligands were synthesized using solid-phase chemistry and provided with a DOTA chelator for 111In labeling and the fluorophore/photosensitizer IRDye700DX. The performance of three new dual-labeled ligands was compared with a previously published first-generation ligand (PSMA-N064) and a control ligand with an incomplete PSMA-binding motif. PSMA specificity, affinity, and PDT efficacy of these ligands were determined in LS174T-PSMA cells and control LS174T wildtype cells. Tumor-targeting properties were evaluated in BALB/c nude mice with subcutaneous LS174T-PSMA and LS174T wildtype tumors using µSPECT/CT imaging, fluorescence imaging, and biodistribution studies after dissection.
Results: In order to synthesize the new dual-labeled ligands, we modified the PSMA peptide linker by substitution of a glutamic acid with a lysine residue, providing a handle for conjugation of multiple functional moieties. Ligand optimization showed that the new backbone structure leads to high-affinity PSMA ligands (all IC50 < 50 nM). Moreover, ligand-mediated PDT led to a PSMA-specific decrease in cell viability in vitro (P < 0.001). Linker modification significantly improved tumor targeting compared to the previously developed PSMA-N064 ligand (≥ 20 ± 3 %ID/g vs 14 ± 2 %ID/g, P < 0.01) and enabled specific visualization of PSMA-positive tumors using both radionuclide and fluorescence imaging in mice.
Conclusion: The new high-affinity dual-labeled PSMA-targeting ligands with optimized backbone compositions showed increased tumor targeting and enabled multimodal image-guided PCa surgery combined with targeted photodynamic therapy.
Supplementary Information: The online version contains supplementary material available at 10.1007/s00259-022-05685-0.

Introduction
Despite recent advances in imaging, staging, and therapy, prostate cancer (PCa) remains a significant health problem with substantial morbidity and mortality [1]. First-line PCa treatment often consists of the surgical removal of the prostate [2]. Unfortunately, the narrow tumor resections performed to prevent comorbidities lead to positive surgical margins in 5-30% of patients, which can even increase up to 65% of patients in case of extra-prostatic extension of the tumor (pT3-pT4) [3][4][5]. Moreover, metastatic lymph nodes embedded in highly vascularized abdominal lipid tissue can easily be missed by the surgeon, leading to biochemical recurrences in up to 35% of these patients [6,7]. The challenges mentioned above stress the importance of improved intraoperative visualization of tumor margins, adjuvant ablative procedures for the primary tumor, and improved tumor detection in (metastatic) lymph nodes. A promising strategy to achieve these goals is combined radio- and fluorescence-guided surgery followed by intraoperative photodynamic therapy (PDT) [8][9][10].
PDT is a method to induce cellular damage through the administration and subsequent selective activation of a photosensitizer. Excitation of the photosensitizer induces fluorescence for intraoperative fluorescence imaging [8,9], but it also leads to the production of highly toxic singlet oxygen (1O2) and reactive oxygen species (ROS) [11][12][13]. ROS and 1O2 can cause immunogenic, necrotic, and apoptotic cell death [11,[14][15][16]. A highly suitable target for imaging and therapy in PCa is the prostate-specific membrane antigen (PSMA) [17,18]. In the past decade, characterization of the active substrate recognition site of PSMA has allowed for the development of numerous highly specific small-molecule PSMA-targeting ligands [19][20][21][22]. Previously, our group developed a first-generation photosensitizer-based dual-labeled PSMA ligand for intraoperative imaging and therapy of PCa called PSMA-N064, which showed the potential of this approach in PSMA-positive xenografts [23]. Nonetheless, achieving the highest possible tumor uptake is essential for fluorescence imaging and PDT, warranting further ligand optimization. Well-known high-affinity PSMA-targeting tracers with excellent tumor uptake that are currently used in clinical trials include PSMA-617 and PSMA-1007 [19][20][21]. These ligands precisely fit both the active site and the entrance funnel of PSMA [19,20,24,25]. Since PSMA-617 and PSMA-1007 are not dual-labeled and lack a photosensitizer, they are not suited for multimodal intraoperative imaging and PDT of PCa. However, backbone modification of these high-affinity ligands to provide a handle for multiple functional moieties could lead to dual-labeled ligands while preserving excellent tumor uptake. Therefore, we made a crucial modification to the backbone of PSMA-1007 by incorporating a lysine side residue. Based on the crystal structure of PSMA-1007 in the active site of PSMA (Supplementary Fig. 1), the side chain of this lysine residue is oriented towards the exterior of PSMA, providing ample space for (multiple) functional elements [20,25]. Using these new backbones, we synthesized three dual-labeled PSMA ligands consisting of both the photosensitizer/fluorophore IRDye700DX and a DOTA chelator for indium-111 (111In) labeling (Fig. 1). Affinity, PSMA-targeted PDT potential, and tumor uptake of the new dual-labeled ligands were determined using PSMA-expressing tumor cells and PSMA-positive xenograft models. Moreover, we directly compared ligand performance with our previously published first-generation ligand (PSMA-N064) and a control ligand with an incomplete PSMA-binding motif (PSMA-N064inc) [23].

Synthesis of dual-labeled ligands
The new PSMA-binding ligands (PSMA-N01, PSMA-N02, and PSMA-N03) were synthesized using solid-phase chemistry. After cleavage from the resin, the ligands were conjugated with IRDye700DX in solution using N-hydroxysuccinimide chemistry. Full synthetic procedures and results can be found in the supplementary data (Pages 1-4, Supplementary Fig. 2). Regarding PSMA-N064, we previously published a detailed description of the synthetic procedures and chemical analyses (reverse-phase high-performance liquid chromatography (RP-HPLC), matrix-assisted laser desorption ionization time-of-flight (MALDI-ToF)) [23]. As a control, a ligand similar to PSMA-N064 that lacks the glutamic acid in the PSMA-binding motif was included, referred to as PSMA-N064-incomplete (PSMA-N064inc).
Cell culture
The LS174T cell line was acquired from the American Type Culture Collection. LS174T colon carcinoma cells were stably transfected with DNA encoding human PSMA using the plasmid pcDNA3.1-hPSMA as described before [3]. Cells were cultured in RPMI 1640 medium supplemented with 10% FCS and 2 mM glutamine (5% CO2, 37 °C). LS174T-PSMA cells were cultured in the presence of 0.3 mg/ml G418 geneticin as well.

Radiolabeling and RP-HPLC
Peptides were labeled under metal-free conditions with 111InCl3 (Curium) in 0.5 M 2-(N-morpholino)ethanesulfonic acid (MES) buffer (pH 5.5, twice the volume of 111InCl3) or sodium acetate buffer (NaOAc in 0.04 M acetic acid solution, pH 4.5). Labeling was performed at 45 °C for 10 min [26]. To chelate unincorporated 111InCl3, ethylenediaminetetraacetic acid (EDTA, 50 mM) was added to a final concentration of 5 mM after the incubation. Radiochemical yield (RCY) was determined by instant thin-layer chromatography (ITLC) using silica gel-coated paper (Agilent Technologies) and 0.1 M ammonium acetate containing 0.1 M EDTA, pH 5.5, as the mobile phase. In addition, RCY was determined using RP-HPLC on an Agilent 1200 system (Agilent Technologies) with an in-line radiodetector (Elysia-Raytest). A reversed-phase C18 column (5 µm, 4.6 × 250 mm; HiChrom) was used at a flow rate of 1 ml/min. We used the following buffer system: buffer A, triethylammonium acetate (TEAA, 10 mM, pH 7); buffer B, 100% methanol; and a gradient of 97 to 0% buffer A (35 min). Peptides were purified on a Sep-Pak C18 light cartridge (Waters) and eluted from the cartridge with 50% ethanol in water.

In vitro internalization assay
The binding and internalization characteristics of the ligands were determined using LS174T-PSMA cells. Cells were cultured to confluency in 6-well plates, followed by incubation with 50,000 counts per minute of 111In-labeled ligand (0.1-0.25 pmol/well) in 1 ml RPMI + 0.5% BSA (37 °C, 2 h). Non-specific binding was determined by co-incubation with 2-(phosphonomethyl)pentane-1,5-dioic acid (2-PMPA, 21.57 µM). PSMA-specific binding was defined as total binding minus the non-specific binding. To retrieve the membrane-bound fraction, cells were washed twice with PBS and incubated for 10 min at 0 °C with acid buffer (154 mM NaCl, 0.1 M acetic acid, pH 2.6). After incubation, the membrane-bound fraction was collected. Then, cells were washed and lysed with 0.1 M NaOH, and the cell lysate (intracellular activity) was collected. Intracellular and membrane-bound activity fractions were measured in a gamma counter (2480 WIZARD 2 Automatic Gamma Counter, PerkinElmer) [3,27].

In vitro targeted PDT assays
LS174T wildtype (LS174T-WT) and LS174T-PSMA cells were cultured to confluency in 48-well plates. Cells were incubated (2 h, 5% CO2, 37 °C) with 30 nM PSMA ligand in binding buffer (RPMI 1640 + 0.5% BSA) in triplicate. After washing with PBS, 0.5 ml binding buffer was added to each well, and cells were irradiated with a NIR light-emitting diode (690 ± 20 nm) [28]. The typical forward voltage was 2.6 V, creating a power output of 490 mW using 126 individual LED bulbs to ensure homogeneous illumination of the area of interest, predefined as 5 × 3 cm. The cells were irradiated at a NIR radiant exposure of 100 J/cm2 (450 mW/cm2) and subsequently incubated for 1 h at 37 °C. Cells that received only the PSMA ligand, only the NIR light, or neither the ligand nor the light were included as controls.
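The readout of the internalization assay described above reduces to simple arithmetic on gamma-counter counts: PSMA-specific binding is the total signal minus the 2-PMPA-blocked signal, and the internalized fraction is the acid-resistant (intracellular) activity over the total cell-associated activity. A minimal Python sketch follows; all counts are hypothetical and for illustration only.

```python
def specific_binding(total_cpm: float, blocked_cpm: float) -> float:
    """PSMA-specific binding = total binding minus the non-specific
    binding measured under 2-PMPA co-incubation, as defined in the text."""
    return total_cpm - blocked_cpm

def internalized_fraction(intracellular_cpm: float, membrane_cpm: float) -> float:
    """Fraction of cell-associated ligand that was internalized:
    acid-resistant (intracellular) activity over total cell-associated activity."""
    return intracellular_cpm / (intracellular_cpm + membrane_cpm)

# Hypothetical gamma-counter readings (cpm) for one well -- illustration only.
total, blocked = 12_000.0, 900.0
intracellular, membrane = 8_100.0, 3_000.0
print(f"specific binding: {specific_binding(total, blocked):.0f} cpm")
print(f"internalized: {internalized_fraction(intracellular, membrane):.0%}")
```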
Cytotoxic effects of PDT with PSMA ligands were determined with a CellTiter-Glo™ assay (Promega Benelux) according to the manufacturer's instructions. The binding buffer was replaced with 100 µl fresh binding buffer and 100 µl CellTiter-Glo® 2.0 Assay reagent. Plates were shaken (2 min) and incubated for 10 min at room temperature. Next, luminescence was measured in a plate reader (Tecan Infinite® 200 PRO) to determine the metabolic activity of the cells.

Animal tumor model
All animal experiments were approved by the institutional Animal Welfare Committee of the Radboud University Medical Center and were conducted in accordance with the guidelines of the Revised Dutch Act on Animal Experimentation. Animal experiments were performed in 8-10-week-old male BALB/c nude mice (Janvier). The mice were housed in individually ventilated cages (Blue line IVC, 3-5 mice per cage) under standard non-sterile conditions with cage enrichment present. There was free access to chlorophyll-free animal chow (Sniff Voer) and water. Mice were subcutaneously inoculated with 3.0 × 10^6 LS174T-PSMA cells in the right flank and 1.5 × 10^6 LS174T-WT cells in the left flank (diluted in 200 µl RPMI 1640 medium). When xenografts were approximately 0.5 cm^3 (10-14 days after injection), mice were block-randomized into groups based on tumor size. The researchers were not blinded to the experimental groups.

In vivo biodistribution, SPECT/CT imaging, and fluorescence imaging
Mice were intravenously injected with 0.3 nmol PSMA ligand labeled with 10 MBq 111In (molar activity 33.3 MBq/nmol) in PBS + 0.5% (w/v) BSA. For the ex vivo biodistribution, five groups (one group for each ligand) of four mice were included. Two hours p.i., all mice were euthanized by CO2/O2 asphyxiation. For two mice of each group (2 mice/ligand), background-subtracted fluorescence images were acquired with the IVIS imaging system (Xenogen VivoVision IVIS Lumina II, PerkinElmer), with a 640-nm excitation filter, a Cy5.5 emission filter, and an acquisition time of 10 s. Next, µSPECT/CT imaging was performed in the same two mice per group with a 1.0-mm diameter pinhole mouse collimator tube (U-SPECT II, MILabs) [29]. Mice were scanned for 30 min, followed by a CT scan for anatomical reference (spatial resolution 160 μm, 615 μA, 65 kV). MILabs reconstruction software was used to reconstruct the µSPECT/CT scans via an ordered-subset expectation maximization algorithm (energy windows 154-188 keV and 220-270 keV, 3 iterations, 16 subsets, voxel size 0.75 mm). SPECT/CT maximum intensity projections (MIPs) were created using the Inveon Research Workplace software (Siemens Preclinical Solutions, version 4.1). NIR fluorescence images were analyzed using Living Image software (PerkinElmer, version 4.2). After imaging, relevant tissues were dissected, weighed, and measured for radioactivity in a gamma counter (2480 WIZARD 2 Automatic Gamma Counter, PerkinElmer). In addition, a blocking experiment with PSMA-N064 and PSMA-617 was performed. Full experimental procedures and results can be found in the supplementary data (Page 5, Supplementary Fig. 3).

Statistical analysis
GraphPad Prism software (version 5.03) was used to perform statistical analyses. Results are presented as mean ± SD. Differences in in vitro PDT efficacy, affinity, and in vivo tumor and organ uptake were tested for significance using a one-way ANOVA with a Bonferroni multiple comparisons posttest. Differences were considered significant at P < 0.05, two-sided.
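The statistical recipe (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) can be sketched in a few lines of Python. This is a simplified stand-in for the GraphPad posttest, using a plain Bonferroni adjustment of independent t-tests rather than Prism's pooled-variance version, and the viability triplicates are invented for illustration, loosely echoing the means reported in the Results.

```python
from itertools import combinations
from scipy import stats

# Hypothetical viability triplicates (% of control) per ligand after PDT.
groups = {
    "PSMA-N01": [19, 23, 27],
    "PSMA-N02": [14, 19, 24],
    "PSMA-N03": [22, 25, 28],
    "PSMA-N064inc": [88, 100, 112],
}

# Global test across all groups.
f, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.1f}, p = {p:.2g}")

# Pairwise t-tests with a Bonferroni adjustment (multiply p by the
# number of comparisons, capped at 1).
m = len(list(combinations(groups, 2)))
for a, b in combinations(groups, 2):
    t, p_raw = stats.ttest_ind(groups[a], groups[b])
    p_bonf = min(1.0, p_raw * m)
    print(f"{a} vs {b}: adjusted p = {p_bonf:.3f}")
```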
Design and synthesis of the ligands
We designed three glutamate-urea-lysine-based PSMA ligands with various backbones conjugated to DOTA and IRDye700DX (Fig. 1). The design of our ligands is based on the high-affinity ligands PSMA-1007 and PSMA-617. This means that they consist of naphthylalanine, aminomethyl benzoic acid, aminomethyl cyclohexane, glutamic acid, benzoic acid, and (non-fluorinated) nicotinic acid groups. However, we introduced extra functional groups to the linker by substitution of the most C-terminal glutamic acid of PSMA-1007 with a lysine residue (red circle, Supplementary Fig. 1). Next, an additional lysine residue was connected to the lysine ε-amine of the peptide linker. With this modification, we aimed to preserve the perfect fit of the ligands in PSMA and their high affinity towards PSMA while enabling dual labeling of the ligands. The exact differences between the backbone structures of the three newly synthesized ligands are as follows: PSMA-N01 and PSMA-N02 are PSMA-1007-based and thus contain a 4-(aminomethyl)benzoic acid, whereas PSMA-N03 is PSMA-617-based and therefore contains a 4-(aminomethyl)cyclohexane-1-carboxylic acid. Moreover, PSMA-N02 was capped with a nicotinic acid instead of a benzoic acid on its N-terminus, which was hypothesized to form an extra hydrogen bond with PSMA (Fig. 1). As controls, we included two previously developed dual-labeled ligands called PSMA-N064 and PSMA-N064inc, which have a backbone partly based on PSMA-I&T (Fig. 1). Chemical analysis using MALDI-TOF and RP-HPLC confirmed the synthesis of all three ligands (PSMA-N01, -N02, -N03) as well as the two control ligands (PSMA-N064, PSMA-N064inc) (Supplementary Fig. 4). As an example, the chemical analysis of PSMA-N02 is depicted in Fig. 2.

In vitro characterization of ligands
IC50 determination showed that all ligands had a similar IC50 in the low nanomolar range (Fig. 3a). Next, the PSMA-binding potential of the ligands was examined in an in vitro binding and internalization assay using PSMA-expressing LS174T cells, in which all ligands showed PSMA-specific binding (Fig. 3b). A direct comparison of the three ligands revealed that PSMA-N02 has the highest membrane-bound and internalized fraction (P < 0.01). As expected, we observed no binding and internalization upon incubation with the control ligand PSMA-N064inc, signifying the specificity of the ligands. Next, we compared the in vitro targeted PDT effects between the three ligands and the control ligands (Fig. 3c). When cells were incubated with 30 nM ligand and irradiated with 100 J/cm2, cell viabilities of 23% ± 5%, 19% ± 6%, and 25% ± 4% were observed for PSMA-N01, -N02, and -N03, respectively. The targeted PDT efficacy did not significantly differ between the three ligands and also did not differ from PSMA-N064 (30% ± 1%, p = 0.053). After incubation with 30 nM PSMA-N064inc, cell viability was not affected (100% ± 16%). Cell viability of controls, consisting of irradiated PSMA-negative LS174T-WT cells and non-irradiated LS174T-PSMA and LS174T-WT cells, was also not affected (cell viability range 87-102%).

Backbone modifications influence tumor uptake of ligands
To elucidate the importance of the backbone composition for ligand accumulation in PSMA-expressing tumors, we compared uptake of the three new dual-labeled ligands with the uptake of PSMA-N064 and the control ligand PSMA-N064inc. All ligands showed uptake in LS174T PSMA-positive tumors, which was significantly higher compared with uptake in PSMA-negative tumors (P < 0.001) (Fig. 4a, Table S1).
PSMA-N01, -N02, and -N03 showed a comparable uptake of 21 ± 3, 23 ± 2, and 20 ± 2 %ID/g in the PSMA-positive tumor, respectively. The uptake of PSMA-N064 was significantly lower (14 ± 2 %ID/g; P < 0.01), and the control ligand PSMA-N064inc showed minimal uptake of 0.5 ± 0.2 %ID/g. For comparison, we also measured the uptake of PSMA-617 in our LS174T-PSMA tumor model, which was 19 ± 2 %ID/g (Supplementary Fig. 3A, Table S2). In addition, we determined the specificity of PSMA-N064 in a blocking experiment (Supplementary Fig. 3B, Table S2).

Ligand-mediated multimodal imaging of PSMA-expressing tumors
To determine the imaging potential of our new ligands, we scanned two mice per group with a NIR fluorescence scanner and a μSPECT/CT scanner. Representative images for all ligands are shown in Fig. 5a and b. Using both imaging modalities, the subcutaneous LS174T PSMA-positive tumors (right flank) could be clearly visualized with all ligands except PSMA-N064inc. PSMA-negative LS174T-WT tumors (left flank) demonstrated no visible ligand uptake. The images visualized high renal ligand accumulation in all mice, which was lowest for PSMA-N064inc, in accordance with the biodistribution results.

Discussion
Local and metastatic relapses often occur following intended curative resection of PCa [2,4,5]. Radio- and fluorescence-guided surgery followed by adjuvant photodynamic therapy is a promising strategy that may assist the surgeon in achieving complete removal of tumor tissue while sparing surrounding healthy tissue. In our previous work and the current study, we developed and characterized dual-labeled PSMA-targeting ligands suited for this strategy [23]. These ligands allowed for highly specific tumor localization, visualization, and PDT in PSMA-expressing tumor cells and xenograft models. Proper ligand design, including a backbone connecting the PSMA-binding motif to one or multiple functional elements of the ligand, is highly important, as it must enable high-affinity binding of the ligand to the active site of PSMA and result in favorable pharmacokinetic properties [30,31]. Previously, we developed and characterized the PSMA-N064 ligand and its control PSMA-N064inc, demonstrating the proof-of-concept for dual-labeled PSMA-targeted imaging and PDT [23]. Nonetheless, we continued ligand development, since achieving the highest possible tumor uptake of the ligands is essential, particularly for fluorescence imaging and PDT. For PDT, high absolute uptake in the tumor may lead to increased PDT effects and may mean that less NIR exposure is needed to produce sufficient amounts of oxygen radicals, possibly leading to fewer side effects of the treatment [14][15][16]. With the aim of developing dual-labeled PSMA ligands with the highest possible tumor uptake, we merged the chemical structure of PSMA-N064 with those of the well-known high-affinity ligands PSMA-1007 and PSMA-617 [19,20]. We incorporated a lysine residue in the peptide backbones, of which the side chain is postulated to point towards the exterior of PSMA. On this lysine, we attached an additional lysine residue to provide handles for conjugation of multiple functional moieties [25]. In vitro, the dual-labeled ligands with a DOTA chelator (PSMA-N01, -N02, and -N03) had a lower labeling efficiency compared with the DOTAGA-based PSMA-N064 and PSMA-N064inc (45 °C). Nonetheless, all ligands could be purified, leading to radiochemical purities > 95%.
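For reference, the %ID/g metric used throughout the biodistribution comparisons is the tissue-associated activity expressed as a percentage of the injected dose, normalized to tissue mass. The sketch below shows the underlying calculation; the counter values, weights, and injected-dose standard are hypothetical and merely chosen so that the tumor value lands near the ~21 %ID/g range reported here.

```python
def percent_id_per_gram(tissue_cpm: float, weight_g: float,
                        injected_dose_cpm: float) -> float:
    """Standard biodistribution metric: percentage of the injected dose
    recovered per gram of tissue."""
    return (tissue_cpm / injected_dose_cpm) * 100.0 / weight_g

# Hypothetical gamma-counter values for one mouse -- illustration only.
# In practice, a counted dilution of the injection standard would define
# injected_dose_cpm (with decay correction to the counting time).
injected = 2_000_000.0
for organ, cpm, grams in [("LS174T-PSMA tumor", 84_000.0, 0.20),
                          ("LS174T-WT tumor", 4_000.0, 0.20),
                          ("kidney", 300_000.0, 0.15)]:
    print(f"{organ}: {percent_id_per_gram(cpm, grams, injected):.1f} %ID/g")
```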
PSMA-N01, -N02, and -N03 all had a PSMA affinity in the nanomolar range (IC50 < 50 nM) and showed internalization percentages of 73-90% in the LS174T-PSMA cells (percentage of cell-associated ligand that was internalized). In a head-to-head comparison, [111In]In-DOTA-PSMA-617 demonstrated an IC50 of 52.7 nM and an internalization ratio of 46% [23]. The IC50 of 18F-PSMA-1007 reported in the literature is 4.2 ± 0.5 nM and the internalization ratio is 67% [20,32], indicating that the in vitro performance of our newly developed ligands is in a similar range to that of PSMA-617 and PSMA-1007. In vivo, we demonstrated that our novel ligands are able to visualize PSMA-positive tumors using both radionuclide and fluorescence imaging in a mouse model. The new backbone composition significantly improved tumor targeting in the PSMA-positive xenograft model compared to PSMA-N064 (P < 0.01). Although a direct comparison is difficult due to differences in measurement time points and tumor models used, tumor uptake values of radiolabeled tracers such as PSMA-617, PSMA-I&T, PSMA-1007, and PSMA-I&F reported in the literature range from 5 to 13 %ID/g (LNCaP, 1/2 h p.i.) [10,[33][34][35], whereas the uptake of PSMA-N01, -N02, and -N03 in the current study was ≥ 20 %ID/g (LS174T-PSMA, 2 h p.i.). In addition, a previous direct comparison of the LNCaP and LS174T-PSMA xenograft models did not show major differences in PSMA-I&T tracer uptake between these models [35], indicating that the performance of PSMA-N01, -N02, and -N03 is in a similar range to those of the clinically available ligands. As expected, we measured very low uptake in PSMA-positive tumors when using our control ligand PSMA-N064inc, signifying the PSMA specificity of the ligands and the need for an intact PSMA-binding motif. Overall, these findings support the increasing evidence that properties such as charge, the ability to fit the entrance funnel of PSMA, and the overall molecular structure of the ligands contribute to efficient in vivo tumor targeting. Although not dual-labeled, two IRDye700DX-based PSMA ligands suited for PDT have been reported in the literature with IC50 values in the low nanomolar range, similar to our ligands [12,36]. However, in the study of Wang et al. [36], in vitro incubation of PC3-pip cells with 1 µM IRDye700DX-labeled PSMA ligand and subsequent NIR light exposure did not lead to any PDT effects, whereas in our study, incubation with 30 nM dual-labeled ligand led to a significant decrease in cell viability. Interestingly, in vivo PDT subsequent to fluorescence-guided surgery using an IRDye700DX-conjugated PSMA ligand was shown to reduce tumor recurrence and significantly prolong animal survival compared with white-light surgery [37]. The preclinical feasibility of multimodal intraoperative image guidance with subsequent ablative PSMA-targeted PDT using a dual-labeled tracer was first shown using the murine antibody [111In]In-DTPA-D2B-IRDye700DX [9]. In recent literature, a first dual-labeled photosensitizer-based PSMA ligand was described, named LC-pyro: a PSMA ligand coupled to a porphyrin photosensitizer that can be labeled with copper-64 (64Cu) for PET imaging. However, the positron emitter 64Cu is difficult to detect with a gamma probe system during surgery and is therefore not suitable for radioguided surgery [38]. In conclusion, we modified the PSMA peptide linker by substitution of a glutamic acid with a lysine residue, providing a handle for conjugation of multiple functional moieties.
Using this new backbone, we synthesized and characterized three dual-labeled ligands for intraoperative radiodetection, fluorescence-guided surgery, and PDT of PCa. Ligand modification performed in our study showed that the new backbone structure (PSMA-N01, -N02, and -N03) leads to high-affinity dual-labeled PSMA ligands with excellent PSMA-specific tumor uptake. These results encourage further preclinical and clinical testing of the dual-labeled ligands to refine the surgical treatment of PCa.
Post-thyroidectomy voice and swallowing disorders and association with laryngopharyngeal reflux: A scoping review

Abstract
Objective: Postthyroidectomy voice and swallowing symptoms (PVSS) may occur even in the absence of laryngeal nerve injuries and remain poorly understood. The objective of this review was to investigate the occurrence of PVSS and the potential etiological role of laryngopharyngeal reflux (LPR). Design: Scoping review. Methods: Three investigators searched the PubMed, Cochrane Library, and Scopus databases for studies investigating the relationship between reflux and PVSS. The authors adhered to the PRISMA statements, and the following outcomes were investigated: age, gender, thyroid features, reflux diagnosis, association outcomes, and treatment outcomes. Based on the study findings and bias analysis, the authors propose recommendations for future studies. Results: Eleven studies met our inclusion criteria, accounting for 3829 patients (2964 females). Postthyroidectomy swallowing and voice disorders were found in 5.5%-64% and 16%-42% of patients, respectively. Prospectively, some results suggested an improvement of swallowing/voice disorders postthyroidectomy, whereas others did not observe significant changes. The prevalence of reflux ranged from 16.6% to 25% of subjects who underwent thyroidectomy. There was important heterogeneity between studies regarding the profile of included patients, the PVSS outcomes used, and the delay of PVSS assessment and reflux diagnosis, making study comparison difficult. Some recommendations are provided to guide future studies, especially regarding the reflux diagnosis approach and clinical outcomes. Conclusion: The potential etiological role of LPR in PVSS has not been demonstrated. Future studies are needed to demonstrate an increase of pharyngeal reflux events with objective findings from prethyroidectomy to postthyroidectomy. Level of Evidence: 3a.

| INTRODUCTION
Thyroid surgery is one of the most common surgical procedures performed worldwide. 1,2 A substantial number of patients report postthyroidectomy voice and swallowing symptoms (PVSS), for example, globus sensation, dysphagia, sticky mucus, or throat clearing, even in the absence of laryngeal nerve injuries. 3,4 The etiology and the pathophysiological mechanisms of PVSS remain unresolved. Many hypotheses have been proposed, including endotracheal intubation injury, laryngotracheal fixation, cricothyroid muscle damage, injury of the external branch of the superior laryngeal nerve, or lesion of the anastomotic branches between the recurrent and superior laryngeal nerves. [5][6][7] PVSS has also been suspected to be related to laryngopharyngeal reflux (LPR) because both conditions share a similar clinical picture. 7 Moreover, thyroid surgery may be associated with upper esophageal sphincter (UES) nerve microtraumas, which may theoretically lead to lower sphincter pressure and tonicity and related esophago-pharyngeal reflux events. 7 In this scoping review, we investigated the current literature on the occurrence of PVSS and the potential etiological role of LPR.

| MATERIALS AND METHODS
The criteria for consideration of inclusion of studies were based on the population, intervention, comparison, outcome, timing, and setting (PICOTS) framework. 8 For each study, three investigators (Jérôme R. Lechien, Stéphane Hans, and Alexandra Rodriguez) independently reviewed and extracted data regarding the PRISMA checklist. 9
| Patient population
Prospective or retrospective, controlled, uncontrolled, or randomized clinical studies published in English, Spanish, or French in peer-reviewed journals were considered. Studies had to include ≥10 adult patients with a history of partial (lobectomy) or total thyroidectomy for whom the occurrence of PVSS and reflux was investigated. The diagnosis of reflux was based on symptoms, findings, or objective examinations, for example, gastrointestinal endoscopy, pH study, or (hypopharyngeal-esophageal) multichannel intraluminal pH-impedance study ([HE]MII-pH). Patients with an LPR diagnosis based on symptoms and findings were considered as having suspected LPR, while those with a pH-impedance monitoring diagnosis were considered LPR patients. Authors investigating the association between PVSS and gastroesophageal reflux disease (GERD) had to report the GERD diagnosis criteria, for example, the DeMeester score or the Montreal or Lyon guidelines. 10 There were no exclusion criteria based on age, ethnicity, socioeconomic status, or comorbidities.

| Intervention
Studies assessing the impact of reflux treatment on PVSS were included in the analysis.

| Comparison
Studies investigating/comparing the prevalence of reflux in PVSS patient populations versus a healthy population were considered.

| Outcomes
Three investigators (Jérôme R. Lechien, Stéphane Hans, and Alexandra Rodriguez) reviewed the following outcomes: number of patients, age, gender ratio, reflux diagnoses, PVSS investigations and features, outcome association, and potential therapeutic outcomes. The occurrence and types of postthyroidectomy complications were additional outcomes investigated by the authors.

| Timing and setting
Populations with PVSS had to be investigated within a delay of 1-12 months postsurgery.

| Search strategy
The publication search was conducted on the PubMed, Scopus, and Cochrane Library databases by three investigators (Jérôme R. Lechien, Stéphane Hans, and Alexandra Rodriguez) during the same weekend. The databases were screened for abstracts and titles referring to the description of data of patients with PVSS. Among the investigators, two authors (Stéphane Hans and Jérôme R. Lechien) analyzed the full texts of the selected papers. Findings of the search strategy were reviewed for relevance, and the reference lists of publications were examined for additional pertinent studies. Any discrepancies in synthesized data were discussed and resolved by an additional investigator. The following keywords were included and combined: "larynx," "laryngeal," "reflux," "gastroesophageal," "laryngopharyngeal," "thyroid," "swallowing," "voice," "surgery," "thyroidectomy," and "lobectomy". The type of study was classified according to the levels of evidence (I-V). 11 Note that the review was not registered.

| Bias analysis
The bias/heterogeneity analysis was performed by two independent authors with the Tool to Assess Risk of Bias in Cohort Studies developed by the Clarity Group and Evidence Partners (McMaster University, Canada). 12 The bias analysis consisted of an evaluation of cofactors that may impact the association between PVSS and reflux, that is, epidemiological factors (comorbidities, tobacco use, contributing factors, etc.); complications (laryngeal nerve injuries, etc.); and the clinical, diagnostic, and therapeutic characteristics of patient groups.
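As an illustration of the search strategy above, the listed keywords lend themselves to a boolean query. The Python sketch below shows one plausible grouping; the AND/OR structure is an assumption, since the review lists the keywords without specifying the exact combinations used.

```python
# Keyword sets taken from the search strategy; the grouping into
# concept blocks (OR within a block, AND across blocks) is assumed.
reflux_terms = ["reflux", "gastroesophageal", "laryngopharyngeal"]
site_terms = ["larynx", "laryngeal", "thyroid"]
outcome_terms = ["swallowing", "voice"]
surgery_terms = ["surgery", "thyroidectomy", "lobectomy"]

def any_of(terms):
    """Join a concept block with OR, quoting each term."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join(any_of(block) for block in
                     [reflux_terms, site_terms, outcome_terms, surgery_terms])
print(query)
# ("reflux" OR "gastroesophageal" OR "laryngopharyngeal") AND ("larynx" OR ...
```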
| Postthyroidectomy swallowing disorders
A myriad of swallowing and voice outcomes were used in the studies (Table 2). Swallowing outcomes were investigated prethyroidectomy to postthyroidectomy in six studies. 15,17,18,20,21,23 The following postoperative assessment times were considered in the papers: 2 weeks, 23 1 month, 15,19,20,23 6 weeks, 23 3 months, 13,14,16,20 6 months, 21 12 months, 23 and 18-24 months. 17 The swallowing impairment score and the eating assessment tool-10 were both subjective outcomes used in studies. 15,17,20 In the controlled study of Sober et al., 22 patients who underwent total or partial thyroidectomy reported a higher swallowing impairment score than healthy individuals. Depending on the outcomes used, uncontrolled studies reported a prevalence of abnormal scores on subjective swallowing tools of between 5.5% and 64%. 15,19 Among studies comparing prethyroidectomy versus postthyroidectomy times, subjective swallowing symptom questionnaires significantly improved postsurgery in two studies. 17,20 Scerrino et al. 15,17 reported a significant prethyroidectomy-to-postthyroidectomy improvement of UES pressure on manometry, while there were no significant differences in lower esophageal sphincter (LES) pressure between the two time points.

| Postthyroidectomy voice quality disorder
Authors assessed voice quality with the following validated outcomes: voice impairment score, voice handicap index (VHI), grade, roughness, breathiness, asthenia, and strain (GRBAS), and acoustic measurements (Table 2). Kovatch et al. 13 observed in a large cross-sectional survey that 25.8% of patients who underwent total thyroidectomy for a thyroid carcinoma reported dysphonia at 3 months, while 12.7% had abnormal VHI-10 scores. Depending on the patient-reported outcome questionnaires or perceptual tools used, voice quality was considered subjectively impaired in 16%-42% of cases. 15,19 The occurrence of postthyroidectomy voice quality impairment was not supported by Sober et al., 22 who suggested that subjective (VHI, GRBAS) and objective (acoustic measurements) voice quality parameters appeared to be comparable between patients and controls. (Note to Table 2: group comparisons were presented as "better results" (Gr1 > Gr2: Gr1 results were better than Gr2) or "no significant difference".) Voice quality was evaluated from prethyroidectomy to postthyroidectomy in two studies. 16,20

| Reflux investigation
The prevalence of reflux in patients with PVSS was investigated in three studies and ranged from 16.6% to 25% of subjects. 14,19,20 In the controlled study of Sober et al., 22 the reflux symptom index (RSI) was significantly higher in postthyroidectomy patients compared with controls, whereas the reflux finding score (RFS) did not significantly differ between groups. In total, 31% of postthyroidectomy patients had an RSI > 13 compared with 19% in the control group. 22 Authors used various reflux definitions and diagnostic approaches (Table 3). No author used hypopharyngeal-esophageal multichannel intraluminal impedance-pH monitoring (HEMII-pH) to confirm the diagnosis (Table 3). There was a significant association between the occurrence of dysphonia (VHI-10 > 11) and the presence of GERD in the study of Kovatch et al. 13 Similarly, Tedla et al. 16 observed that LPR was a predictor of voice quality at 3 months postthyroidectomy, whereas Scerrino et al. 17 observed a significant association between acid proximal esophageal reflux events on esophageal pH-impedance monitoring and the occurrence of LPR symptoms.
In the study of Sahli et al.,19 patients with postthyroidectomy dysphagia had a significantly higher proportion of GERD than others. Depending on the reflux outcomes used, the prethyroidectomy-to-postthyroidectomy findings showed significant improvements of the RSI21 and of GERD questionnaires,20,21 or no significant change.23 Across studies, the RFS either did not change postthyroidectomy20 or worsened.23 Note that in the study of Yoon et al., postthyroidectomy RSI and RFS improved in patients treated with a proton pump inhibitor.23 From a basic science standpoint, Xiaoli et al.18 assessed the gastric secretion of acid and gastrin during total thyroidectomy and did not find significant changes throughout the surgery times.

| Bias analysis
The scoping review included studies with the following levels of evidence: III (N = 1) and IV (N = 10). According to the bias analysis, there was important heterogeneity between studies regarding inclusion/exclusion criteria, preoperative/postoperative assessments, reflux diagnosis, and outcomes (Appendix A). Note that we adapted the questions of the bias tool to focus on five data outcomes (exclusion criteria; preoperative-postoperative outcomes; reflux diagnosis and outcomes; and timing of evaluation), which were available in the included studies. Among inclusion biases, some authors did not exclude patients with postthyroidectomy laryngeal nerve injury,13 while this information was not provided in three studies.14,18,21 The remaining authors excluded patients with laryngeal nerve injuries.15-17,19,20,22,23 The selection of patients regarding thyroid features is another inclusion bias. Some teams excluded patients with a high thyroid volume,15,17 thyroid carcinoma,15-17 nonpapillary carcinoma,20 or thyroiditis,15,17 which led to heterogeneity in populations across studies. Confounding factors of reflux were poorly considered in most studies (Appendix A). The use of unreliable reflux diagnostic approaches was an important bias in the present review. The HEMII-pH was not used. Only one team used esophageal impedance-pH monitoring, in which the reflux diagnosis was not based on proximal reflux events but on the DeMeester score (a GERD diagnosis criterion).15,17 The timing of postoperative evaluation of reflux features was judged adequate in most studies (Appendix A).

| DISCUSSION
The occurrence of swallowing and voice disorders after partial or total thyroidectomy has long been recognized. Third, an important point that was considered in most studies is the postthyroidectomy delay for investigating the occurrence of LPR. LPR is considered a silent reflux, without significant correlation between pharyngeal reflux events and symptoms at the HEMII-pH.29 In other words, the inflammatory reaction of the laryngopharyngeal mucosa requires a few weeks of backflow of gastroduodenal content into the upper aerodigestive tract,30,31 which supports the view that LPR symptoms develop a few weeks after the surgery. In most studies, authors assessed LPR symptoms and findings at least 6 weeks postthyroidectomy, which seems adequate. Fourth, the main weakness of most studies is the lack of objective findings (e.g., HEMII-pH or high-resolution manometry) to investigate the prevalence of reflux and UES dysfunction before versus after thyroidectomy.
To date, the HEMII-pH is commonly accepted as the best diagnostic tool for LPR, identifying acid, weakly acid, or nonacid hypopharyngeal reflux events.29 The importance of HEMII-pH is strengthened by the nonspecificity of LPR symptoms and findings (e.g., globus sensation, throat clearing, dysphagia, dysphonia), which may be commonly found in patients with laryngeal nerve injury without reflux. Scerrino et al.15 used esophageal pH monitoring to investigate proximal esophageal events but, in practice, many esophageal reflux episodes do not reach the pharynx because of UES tonicity.29 From an epidemiological standpoint, the nonspecificity of LPR symptoms and findings supports the careful exclusion of comorbidities that may be associated with laryngopharyngeal symptoms.1 According to the bias analysis, most authors did not exclude some of these prevalent otolaryngological conditions associated with laryngopharyngeal symptoms, for example, chronic rhinosinusitis, allergy, asthma, or inhaled corticosteroid intake.32-34
Thermal Behavior of Tunnel Segment Joints Exposed to Fire and Strengthening of Fire-damaged Joints with Concrete-filled Steel Tubes

Owing to its discontinuous configuration, the segment joint is of particular concern when a shield tunnel lining is exposed to fire. The thermal behavior of such joints when exposed to fire was investigated experimentally at full scale. In addition, the effectiveness of using concrete-filled steel tubes (CFSTs) to restore joint strength after a fire was also investigated. Five full-scale reinforced concrete segment joints were fabricated. Four were exposed to the ISO 834 standard fire for 60 or 120 min, with the fifth serving as a control. Two fire-damaged specimens were then strengthened with CFSTs. All five specimens were then loaded to failure at room temperature. It was found that: (1) the effect of the joint gap on the temperature distribution changed markedly during heating; (2) the temperature of the bolt end was much higher than that of the bolt mid-point, so insulating the bolt ends is probably called for; (3) the bearing capacity and flexural stiffness of fire-damaged segment joints can be significantly improved by strengthening with CFSTs.

Introduction

In 2005, a small fire broke out at a construction site of a new line of the Shanghai Metro system. The upper part of approximately 430 m of tunnel lining was affected. Concrete spalling damage affected a length of 16.8 m, and the maximum damage depth in the lining concrete was 25 mm. In recent decades, many serious fires have occurred in subway tunnels worldwide (e.g., the Baku and Daegu subway fires) [1].

Segment joints are a weak point in shield tunnel linings. A few studies have investigated the mechanical behavior of shield tunnel linings exposed to high temperatures [2-4]. The results indicate that the segment joints can significantly affect the mechanical performance and failure pattern of the lining under fire. In addition to the mechanical behavior, the sealing performance of the segment joints after a fire is also of great concern in practical engineering. Such sealing performance is closely related to the residual properties of the rubber gaskets, which are greatly influenced by the thermal field within the joint gap. However, to the authors' knowledge, the temperature distribution within the joint gap during fire has not yet been sufficiently studied. Even similar studies on the effect of concrete cracks on the thermal field of structural components are very limited. Wu and his colleagues [5] studied the thermal fields of cracked concrete members in fire, experimentally and using numerical methods. They found that the measured temperatures within the cracks were generally lower than in the neighboring concrete. Narrow cracks seemed to hinder heat transfer into the specimen. This is because the motion of air within an extremely narrow crack is very slow: the fluid layers sticking to the left and right surfaces of the pre-made crack have zero velocity according to the no-slip condition, so the velocity profile of the air within the crack approaches zero in all directions, and heat convection within the crack is approximately negligible. Since the heat conductivity of air is very low, the air within the crack acts as a thermal insulator to some extent. Yan's group [4] worked with reduced-scale segment joints (1:3). They measured the surface temperature of their specimens with a thermal infrared imager immediately after turning off the combustor,
and they found that the surface temperature of the concrete was highest near the joints. This implies that heat had transferred into the joint gap, and the nearby concrete was heated on two sides. Both the concrete cracks in the literature [5] and the joint gaps in reference [4] are narrow crevices, yet their influences on heat transfer within the crevice appear to be different. That motivated the first part of this study, in which fire tests on full-scale segment joints were performed to investigate the influence of the joint gap on the temperature distribution in inter-segment joints. In the second part of this study, the fire-damaged full-scale segment joints were strengthened, and the strengthening effect was examined by loading these joints to failure.

Steel plates and fiber-reinforced plastic (FRP) are the two materials most commonly applied for strengthening subway tunnels. A group led by Liu [6] tested full-scale shield tunnel lining rings strengthened with 20 mm-thick steel plate rings. They found that such full-ring steel reinforcement significantly improves the bearing capacity and overall stiffness of tunnel structures. Further, such strengthened linings fail due to local bond failures between the steel plates and the segments. Chang [7] describes how tunnels of the Taipei Rapid Transit System are strengthened with an inner steel lining. Such strengthening methods satisfy the mechanical requirements for a tunnel, but they use a lot of steel and can therefore be uneconomic. Furthermore, the weight of the steel can cause additional settling during construction and in future operations. Kiriyama [8] describes a method for reinforcing existing tunnel structures using thin steel panels; it involves elaborate design and construction work. A group led by Khan [9] investigated arch supports made with steel I-beams, and they found that the strength and stiffness of steel supports depend largely on the confinement to which the support is subjected. Thin steel plates and I-beams can suffer from local buckling at the waist of a tunnel lining, which prevents the full exploitation of the steel's strength. Li and his colleagues [10] investigated numerically the reinforcement of tunnel linings with FRP. Other researchers [11-13] have pursued retrofitting tunnels using FRP grids and corrugates. Evidently, FRP strengthening parts only enhance the integrity of the tunnel ceiling; they have almost no reinforcing effect on the waist of a tunnel lining subjected to negative bending moments. Concrete-filled steel tubes (CFSTs) have also been proposed to support tunnel structures. Gao [14] and Chang [15] have described using CFSTs in mine tunnels. In a CFST, the most favorable natural characteristics of steel and concrete are used to advantage: the concrete is confined by the steel tube, and the concrete core supports the steel tube against local buckling. The result is improved strength, ductility, stiffness, and energy absorption capacity for the combination [16]. That suggests investigating the feasibility of using CFSTs to strengthen fire-damaged segment joints in tunnels. Accordingly, the second part of this study focused on the effectiveness of CFSTs for restoring segment joints damaged by a thermal shock. Compared with traditional steel plate rings [6], the steel consumption and weight of the CFSTs employed in this paper are only about 40% and 65%, respectively. Evidently, a large amount of steel can be saved and the self-weight of the strengthening parts
can be reduced greatly in practical engineering by replacing steel plate rings with CFSTs.

Experiments

Five specimens were carefully fabricated and tested. The details are listed in Table 1. The untreated specimen is designated S0; S60 and S120 designate the unstrengthened specimens after exposure to fire for 60 and 120 min, respectively; S60C and S120C are the strengthened specimens after 60 and 120 min of fire exposure.

Specimens

In general, the tunnel ceiling is more susceptible to fire damage than other parts of the tunnel lining [17]. In view of this, the specimens represented a typical tunnel ceiling. Figure 1 presents a schematic diagram and the reinforcement details of the specimens. Each specimen was composed of two neighboring segments, which were identical to the segments used in the Shanghai Metro tunnels. The two segments were tightly jointed by two straight steel joint bolts (grade 6.8; 30 mm in diameter and 485 mm in length). The pre-tensioning of the joint bolts was estimated to be 115 kN based on the bolt strain.

In order to meet the dimensional requirements of the load testing equipment, all specimens were cut after the fire treatment. The joint bolts were then retightened to recover the pre-loading, which had decreased during the fire treatment. Afterward, S60C and S120C were repaired using fine-aggregate concrete (FAC) and strengthened with curved CFSTs. The details are presented in Figure 2. The repair and strengthening process comprised three steps:

(1) Seriously fire-damaged concrete was first removed from the exposed surface, which was then roughened with a mattock.

(2) The rough surface was cleaned and treated with interface agents, and then FAC was poured onto the surface to cast a FAC layer that brought the thickness of the repaired segment up to that of the intact segment.

(3) After the FAC had cured, the curved CFSTs were installed on the FAC layer. The curved CFSTs were made of FAC, steel sheets, and U-shaped steel. The mix proportions of the concrete used for the segments and the FAC are listed in Table 2.
Six cylindrical cores (100 mm in diameter and height) were extracted from the segments. The measured average compressive strength of the samples was 50.3 MPa. The cube compressive strength of the FAC was measured as 68 MPa. The reinforcement (D10 denotes reinforcement 10 mm in diameter), steel sheet, and U-shaped steel used in the test were tested under tension; the results are listed in Table 3. The curvature of the CFSTs was designed to match the segments, so the CFSTs and the FAC layer were in close contact. The mechanical performance of post-installed anchors/rebar connections has been experimentally investigated by other researchers [18-20], and the test results indicate that their bond properties are good at room temperature. To further enhance the bond between the CFSTs and the fire-treated segments, both anchor bolts and an epoxy adhesive between the CFSTs and the FAC were employed simultaneously in this study. The epoxy adhesive was applied approximately 5 mm thick. According to the data provided by the manufacturer, the normal bonding strength between the epoxy adhesive and C45 concrete is 3.8 MPa. The anchor bolts (type: HVA2; grade: 5.8) were manufactured by Hilti, Inc., and had a diameter of 12 mm, a total length of 160 mm (anchorage length of 110 mm), a shear strength of 27.8 kN, and a tensile strength of 54 kN. To install the anchor bolts, holes 14 mm in diameter were drilled through the steel sheet.
Instrumentation

For the fire tests, thermocouples were installed on the specimens; their layout is presented in Figure 1. Three thermocouples were installed in Section 1-1 to monitor the temperature of the joint gap. In addition, four further thermocouples were used in Section 2-2 to measure the concrete temperature. To install the thermocouples in Section 2-2, four holes with a diameter of 10 mm were drilled 40 mm apart along the segment's width; afterward, the holes were filled with cement paste. Owing to the joint configuration, most of the length of the joint bolts was entirely within the bolt holes and hardly experienced any direct fire exposure. Only the protruding bolt ends were exposed to the fire. To investigate the temperature histories of the joint bolts, one thermocouple was fixed to the end of one of the bolts and another was carefully placed at the same bolt's mid-point (see Figure 1a,b).

In the loading tests, four linear variable differential transformers (LVDTs) were installed on the internal segment edge to measure any joint opening and the mid-span deflection during the flexure tests (see Figure 2a). The calibrated range and accuracy of the LVDTs were 100 mm and 0.01 mm, respectively. To obtain the strain profile in the CFST at different load levels, a series of strain gauges was installed on the tubes' surface. The positions and numbers of the strain gauges are presented in Figure 3.
Test Procedure

In order to simulate a possible fire in a shield tunnel, the ISO 834 standard temperature-time curve [21] with a heating duration of 60 or 120 min was employed in the tests. According to the ISO 834 standard, the temperature development within the furnace is described by Equation (1):

T(t) = 345 log10(8t + 1) + T0    (1)

where t is the time (in minutes), T(t) is the gas temperature within the furnace (in °C), and T0 is the initial ambient temperature (20 °C in the nominal standard curve). As presented in Figure 4, the furnace consisted of a steel frame supported by four steel columns and a fire chamber. The fire chamber had firebrick walls and insulation plates. Natural gas was injected through four nozzles to provide the thermal energy. The flow was automatically adjusted to maintain the desired chamber temperature. To observe spalling of the specimens during the fire tests, two digital cameras with a water-cooling system were installed on two adjacent walls of the chamber. At the chamber bottom, two chimneys removed the smoke. The specimens were supported by the steel frame (see Figure 4), and the undersurfaces of the specimens were exposed to the fire. The fronts and backs of the specimens were insulated with ceramic-fiber plates.
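As a quick sanity check, a minimal sketch of Equation (1) in Python, assuming the nominal ambient term T0 = 20 °C (the actual tests started at 8-10 °C), reproduces the furnace temperatures referred to below: roughly 645 °C at 8 min, 945 °C at 60 min, and 1049 °C at 120 min.

```python
import math

def iso834_temperature(t_min: float, t0: float = 20.0) -> float:
    """Gas temperature (deg C) of the ISO 834 standard fire at t_min minutes."""
    return t0 + 345.0 * math.log10(8.0 * t_min + 1.0)

if __name__ == "__main__":
    for t in (8, 15, 60, 120):
        print(f"t = {t:3d} min -> T = {iso834_temperature(t):4.0f} deg C")
```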
The upper surface of a specimen was exposed to the atmosphere. This simplification was considered justified based on the following considerations. Owing to the low thermal conductivity of the concrete and its 350 mm thickness, the temperature distribution within the specimen would differ only slightly regardless of whether the upper surface was covered with saturated soft soil or in contact with the atmosphere [4]. The ambient temperature was approximately 8-10 °C at the beginning of the fire experiments. No load was imposed on the specimens during the heating. After the required heating time, the furnace was turned off, and the specimens were allowed to cool to ambient temperature. A comparison of the measured temperature-time curves in the furnace chamber and the ISO 834 standard curve is presented in Figure 5.
Evidently, the curves are in good agreement.

Based on the load equivalence principle and the mechanical characteristics of the tunnel lining, all of the specimens were tested using the TJGPJ2000 facility at Tongji University after the fire tests. The TJGPJ2000 equipment consists of a reaction frame, supports for the specimen, and horizontal and vertical loading systems (see Figure 6). The horizontal loading system consists of two 1000 kN hydraulic actuators. The vertical loading system has a 1500 kN hydraulic actuator. The maximum travel ranges of the horizontal and vertical hydraulic actuators are 150 and 200 mm, respectively. The vertical load (Fv) was applied at two points 800 mm apart via a distributive girder (see Figure 6).

The vertical load and the horizontal load were applied simultaneously to generate an eccentricity of 0.3 m in the cross-section at mid-span. The loading was implemented in two phases: in the first ten steps, Fv was increased in steps of 55 kN; afterward, Fv was increased in steps of 27.5 kN. The loading and holding times for each step were 1 and 2 min, respectively.
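The two-phase loading protocol can be summarized by a small helper that generates the target load of each step. This is a sketch only: the 1-min loading and 2-min holding per step are imposed by the rig and not modeled here, and the 1455 kN cap is the maximum applied vertical load reported later for the strengthened specimens.

```python
def vertical_load_schedule(f_cap_kN=1455.0, coarse_kN=55.0,
                           n_coarse=10, fine_kN=27.5):
    """Target vertical loads Fv per step: ten 55 kN steps, then 27.5 kN steps."""
    steps, f = [], 0.0
    for _ in range(n_coarse):
        f += coarse_kN
        steps.append(f)
    while f < f_cap_kN:
        f = min(f + fine_kN, f_cap_kN)
        steps.append(f)
    return steps

print(vertical_load_schedule()[:12])  # [55.0, 110.0, ..., 550.0, 577.5, 605.0]
```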
Fire Test Results and Discussion

Figure 7 presents images of the specimens after the fire tests. Spalling occurred on the heated surfaces of the different specimens. The first significant spalling was observed through the digital cameras during the first 8-15 min of the fire tests. The spalling lasted approximately 12-22 min and mainly occurred at furnace temperatures of 645-856 °C. The spalling of Specimens S60, S120, and S120C was more severe than that of S60C because the moisture content of those three samples (see Table 1) was higher. A higher moisture content is known to promote spalling [22].

After the specimens had cooled to room temperature, extensive map cracks were found in the un-spalled regions of the heated surfaces, and the color of the concrete surface had turned light yellow (see Figure 8a). In addition, a few vertical cracks spaced about 200 mm apart were observed on both the front and back of the different specimens (see Figure 8b). They can be attributed to the following causes: (1) the excessive expansion of the concrete at elevated temperatures; and (2) the incompatible expansions and contractions of the concrete and steel reinforcement during heating and cooling [23]. Owing to the structural characteristics of a tunnel lining, vertical cracks within the circumferential joints between the segments would be difficult to discover and repair. Evidently, sufficient attention must be devoted to these vertical cracks, which would eventually aggravate the deterioration of the tunnel lining after a fire.

Figure 1b,c show that the positions of the thermocouples along the segment thickness at Section 1-1 and Section 2-2 were different. ABAQUS software was therefore used to compute numerically the temperature distribution at Section 2-2, so the temperature distributions at the two sections could be compared. The meshes of Section 2-2 are presented in Figure 9. The segment's concrete was meshed with eight-node solid heat transfer elements (DC3D8). The thermal properties of the concrete, including the density, specific heat capacity, and heat conductivity, were assumed based on the Eurocode 2 standard [24]. The density of the concrete at room temperature was set to 2350 kg/m3. The specific heat cc (J/kg·°C) for a moisture content of u = 0 in the concrete was calculated using Equation (2), which in Eurocode 2 takes the piecewise form

cc(θ) = 900 for 20 °C ≤ θ ≤ 100 °C; cc(θ) = 900 + (θ − 100) for 100 °C < θ ≤ 200 °C; cc(θ) = 1000 + (θ − 200)/2 for 200 °C < θ ≤ 400 °C; cc(θ) = 1100 for 400 °C < θ ≤ 1200 °C.    (2)

The effect of water evaporation on the temperature in the concrete was considered by adjusting the specific heat of the concrete: the maximum (cp,peak) of the specific heat occurs at 100-115 °C, followed by a linear decrease in the specific heat between 115 and 200 °C (see Figure 10).
cp,peak was taken as 1470 and 2020 J/kg·°C for concretes with moisture contents of u = 1.5% and 3%, respectively. The moisture content u was set to 2.6% in the calculation, so cp,peak was set to 1873 J/kg·°C (linear interpolation between the two tabulated values). The upper and lower limits of the thermal conductivity λc of the concrete were determined using Equation (3); in Eurocode 2 these are

λc,up = 2 − 0.2451(θ/100) + 0.0107(θ/100)^2 and λc,low = 1.36 − 0.136(θ/100) + 0.0057(θ/100)^2 (in W/m·K).    (3)

The effective conductivity was interpolated between the two limits, λc = λc,low + α(λc,up − λc,low), with α set to 0.5. The convective heat transfer coefficients were 25 W/m2·K for the heated surfaces and 9 W/m2·K for the others. The resultant heat emissivity on the surface of the concrete was set to 0.5.
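To make the thermal-property assumptions concrete, a short sketch follows, assuming the Eurocode 2 expressions above and the moisture-peak treatment described (plateau at cp,peak between 100 and 115 °C, linear decay to the dry value at 200 °C). The check 1470 + (2020 − 1470)(2.6 − 1.5)/(3.0 − 1.5) ≈ 1873 J/kg·°C is consistent with the value used in the analysis.

```python
def concrete_specific_heat(theta: float, c_peak: float = 1873.0) -> float:
    """Specific heat of concrete (J/kg.degC) at temperature theta (degC):
    Eurocode 2 dry curve, with a moisture peak c_peak held between 100 and
    115 degC, then a linear decrease to the dry value (1000) at 200 degC."""
    if theta <= 100.0:
        return 900.0
    if theta <= 115.0:
        return c_peak
    if theta <= 200.0:
        return c_peak + (1000.0 - c_peak) * (theta - 115.0) / 85.0
    if theta <= 400.0:
        return 1000.0 + (theta - 200.0) / 2.0
    return 1100.0

def concrete_conductivity(theta: float, alpha: float = 0.5) -> float:
    """Thermal conductivity of concrete (W/m.K): Eurocode 2 lower and upper
    limits blended with weighting factor alpha (alpha = 0.5 in this study)."""
    x = theta / 100.0
    lam_up = 2.0 - 0.2451 * x + 0.0107 * x * x
    lam_low = 1.36 - 0.136 * x + 0.0057 * x * x
    return lam_low + alpha * (lam_up - lam_low)

if __name__ == "__main__":
    for theta in (20.0, 110.0, 150.0, 300.0, 800.0):
        print(theta, concrete_specific_heat(theta),
              round(concrete_conductivity(theta), 3))
```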
Figure 11 compares the calculated and measured temperature-time curves at Section 2-2 of Specimen S120C. In general, the calculated results agree well with the measured results, so the numerical method provides a good prediction of the temperatures at any point on Section 2-2. Figure 12 compares the measured temperatures at Section 1-1 with the calculated temperatures at Section 2-2 for the different specimens. The following can be observed:

(1) The temperatures at both Section 1-1 and Section 2-2 rapidly declined with distance from the heated surface in all of the specimens.
(2) For a heating time of 60 min, in Specimens S60, S120, and S120C, the temperature at Section 1-1 declined more sharply than that at Section 2-2 over the first 25-60 mm from the heated surface. Between 60 and 270 mm, however, the temperature at Section 1-1 declined more gradually than that at Section 2-2. In addition, the temperature at most of Section 1-1 was below 100 °C and lower than that at Section 2-2. This is probably because water vapor released from the concrete accumulated in the narrow joint gap and barely dissipated, which would have hindered heat transfer into the joint gap to some extent. Note that the initial joint opening (see Table 1) of S60C was larger than those of the other specimens. Consequently, for Specimen S60C, the temperature at Section 1-1 was higher than that at Section 2-2 for a heating time of 60 min: the larger initial joint opening promoted heat transfer into the joint gap and led to higher temperatures in the gap than in the neighboring concrete.

(3) For a heating time of 120 min, the temperature at Section 1-1 was higher than that at Section 2-2. This is because all of the moisture within the joint gap eventually dissipated with the longer heating time; heat then penetrated the gap, increasing its temperature. Yan [4] observed a similar phenomenon in fire tests with reduced-scale segment joints: the surface temperature of the concrete near the joint was higher than that farther away, mainly because heat transferred into the joint gap heated the nearby concrete on two sides. Note, though, that this differs from what was observed by Wu and his co-workers [5], who found that the temperature in the crack was lower than that of the neighboring concrete. This is presumably because the joint opening tested here (9.5-11.8 mm) was much larger than the crack width in Wu's tests (0.5-3 mm).
Figure 13 presents temperature-time curves for the joint bolts. The temperatures of the bolt ends were much higher than those at the mid-point during the entire heating process. The temperatures of the bolt ends increased approximately linearly with heating time, reaching the critical temperature of 300 °C [25] after approximately 60 min and 596 °C after approximately 120 min. To guarantee that the temperature of the joint bolts does not exceed the allowable value in a real tunnel fire, the bolt ends may need to be insulated.

Load Test Results and Discussion

Figure 14 presents the final failure patterns of all of the specimens after the mechanical tests. The following can be observed:

(1) The failure modes of the unstrengthened specimens S60 and S120 were similar to that of the untreated specimen S0. All three specimens primarily experienced concrete crushing in the compression region of the joint. However, there were fewer flexural cracks in the tensile zones of the former two specimens than in the latter.

(2) The failure patterns of the strengthened specimens S60C and S120C were similar. Flexural cracks were observed on the back and front of the FAC; however, they did not propagate obliquely deep into the segment concrete. In addition, some cracks in the direction of the anchor bolts' roots were found on the tension side of the specimens; these cracks resulted from the shear force transmitted from the bolts to the FAC. After the occurrence of the shear cracks, the restraint effect of the surrounding FAC on the anchor bolts was reduced, resulting in a certain bending deformation of the bolts. However, fracture of the anchor bolts was not observed during the loading tests. The epoxy adhesive between the strengthening members and the FAC experienced some local failure, and local debonding between the FAC and the segment concrete was observed.

The measured ultimate loads Fv,max (maximum load in the vertical direction) of all of the specimens are listed in Table 4. The bearing capacities of Specimens S60C and S120C were large and beyond the capacity of the vertical actuator, so for those two specimens it is the maximum applied load (1455 kN) instead of the ultimate load that is listed in Table 4.
Also included in Table 4 is the initial stiffness (i.e., the secant stiffness at 30% Fv,max) of all the specimens. As can be seen, the ultimate loads of the unstrengthened specimens S60 and S120 were about 24% smaller than that of the untreated specimen S0, while the ultimate loads of the strengthened specimens S60C and S120C were at least 55% larger than those of the unstrengthened specimens and at least 17% larger than that of the untreated specimen S0. In addition, the initial stiffness of the strengthened specimens was at least 550% larger than that of the untreated specimen and at least 360% higher than that of the unstrengthened specimens. It seems, therefore, that a fire-damaged segment joint strengthened with the proposed CFSTs can maintain considerable loading capacity and initial stiffness, even surpassing those of an untreated segment joint. (Note: in Table 4, "*" denotes the increment of the ultimate load or initial stiffness with respect to the fire-damaged unstrengthened specimens.)

Table 4 also shows that exposure to fire for 60 or 120 min had no effect on the ultimate load of the unstrengthened specimens. The reason is probably that most parts of the joint bolts in Specimens S60 and S120 were not exposed to the fire directly. The temperatures of the joint bolts' mid-points were below 220 °C throughout the fire testing. Consequently, the residual mechanical properties of the joint bolts were very similar after cooling [26]. Moreover, regardless of whether the fire lasted 60 or 120 min, the compression regions of the segments experienced only room temperature because they were so far from the heated surface. Thus, the duration of exposure to fire had little effect on the specimens' bearing capacity.

The relationship between the applied load and the mid-span deflection for all of the specimens is presented in Figure 15. The following can be observed:

(1) The flexural stiffnesses of the unstrengthened specimens S60 and S120 were generally higher than that of the untreated specimen S0. This is mainly because the flexural stiffness of a specimen mostly depends on the rotational stiffness of the joint [27], which increased after the joint bolts of the specimens were retightened following the fire tests.

(2) The flexural stiffnesses of the strengthened specimens S60C and S120C were similar but much higher than those of the unstrengthened and untreated specimens. Consequently, the mid-span deflections of the strengthened specimens were significantly smaller under equal loads.

(3) The load-deflection curves of S60C and S120C basically coincide, and the curves of Specimens S60 and S120 are also similar. Thus, the duration of fire exposure had little effect on the specimens' deformation behavior (at least up to 120 min).
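For reference, the initial stiffness quoted in Table 4 (secant stiffness at 30% of Fv,max) can be read off a measured load-deflection curve as in the following sketch; the arrays here are hypothetical, for illustration only.

```python
import numpy as np

def initial_stiffness(loads_kN, defl_mm, fv_max_kN):
    """Secant stiffness (kN/mm) at 30% of the ultimate vertical load Fv,max."""
    f30 = 0.3 * fv_max_kN
    d30 = np.interp(f30, loads_kN, defl_mm)  # deflection at 30% Fv,max
    return f30 / d30

# hypothetical monotonic load-deflection data (NOT the measured test data)
loads = np.array([0.0, 200.0, 400.0, 600.0, 800.0])  # kN
defl = np.array([0.0, 1.1, 2.4, 4.0, 6.0])           # mm
print(round(initial_stiffness(loads, defl, fv_max_kN=800.0), 1))  # kN/mm
```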
Joint opening is an important indicator of deformation in a shield tunnel, and it is often investigated to assess a tunnel's safety [28]. Figure 16 presents the joint opening increment versus vertical load for all of the specimens. It shows that:

(1) The joint opening increments of the unstrengthened specimens S60 and S120 were smaller than that of the untreated specimen S0 under equal loads. As explained previously, the rotational stiffness of the fire-damaged joints became higher than that of the untreated specimen when the joint bolts were re-tightened after the fire test.
(2) The joint opening increments of the strengthened specimens S60C and S120C were generally similar. At the end of the loading process, an increment of only 0.5 mm was observed, which was significantly smaller than those of the unstrengthened and untreated specimens.

Figure 17 presents the strain profiles in the strengthening parts of Specimens S60C and S120C under different load levels. The details of the strain gauges are presented in Figure 3. It can be observed that:

(1) All of the positions where strain gauges were attached were in tension. The deformations were approximately symmetrical about the mid-span. Further, the maximum and minimum strains were at the mid-span and the ends of the strengthening parts, respectively.

(2) The strains recorded by gauges 11-15 were much higher than those of gauges 1-5 throughout the entire loading process. Thus, the strengthening played a significant role in determining the tension and bending resistance.

The variation in the local shear force between the strengthener and the segment reflects the performance of the local connection between the strengthening part and the segment. The five sections (I-V) defined in the longitudinal direction are shown in Figure 3. Assuming that the plane-section assumption still applies, the longitudinal strains at different positions in each section can be determined quantitatively based on the measured strains obtained from strain gauges 1-15. Thus, the axial force on each section can be obtained via an integration of the resulting stress distribution. The elastic modulus of the steel was assumed to be 200 GPa. Any hardening of the steel after yielding and any contribution of the core concrete in tension were neglected. The local shear force between the strengthener and the segment was determined via the difference in axial force between neighboring sections, as presented in Figure 18. The following observations were made:

(1) Regarding Specimen S60C, when the applied load was less than 17.5% of Fv,max, the local shear force in regions II-III and III-IV increased much faster than in regions I-II and IV-V. There was thus significant strengthening near the joint during the initial phase of loading. Beyond 32.5% of Fv,max, the local shear forces in regions II-III and III-IV reached their maxima and then decreased gradually with increasing applied load, while the local shear force in regions I-II and IV-V increased continuously. Thus, when the applied load exceeded about one third of Fv,max, the strengthening effect near the joint began to weaken, whereas far from the joint it became greater.
(2) Regarding Specimen S120C, when the applied load was less than 17.5% of Fv,max, the trends in the local shear forces of the various regions were similar to those of S60C. Between 17.5% and 90.6% of Fv,max, the shear force in regions I-II and IV-V increased faster than in regions II-III and III-IV. When the applied load exceeded 90.6% of Fv,max, the local shear forces in regions II-III and III-IV suddenly dropped. The connection between the strengthener and the segment was severely damaged in regions II-IV, and the strengthening effect then mostly depended on the part far from the segment joint.
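The force-recovery procedure described above can be sketched as follows. Only the stated assumptions (plane sections, E = 200 GPa, no strain hardening, core concrete ignored in tension) are taken from the analysis; the yield strength, gauge strains, and tributary areas are hypothetical placeholders.

```python
import numpy as np

E_STEEL = 200e3  # MPa, elastic modulus assumed in the analysis

def section_axial_force(strains, areas_mm2, fy_mpa=345.0):
    """Axial force (kN) on one instrumented section: stresses from measured
    strains (plane sections assumed), capped at yield with no strain
    hardening; the core concrete in tension is neglected. fy is hypothetical."""
    stresses = np.clip(E_STEEL * np.asarray(strains), -fy_mpa, fy_mpa)
    return float(np.sum(stresses * np.asarray(areas_mm2)) / 1e3)

def local_shear_forces(axial_forces_kN):
    """Shear transferred in regions I-II, II-III, III-IV, IV-V, taken as the
    difference in axial force between neighboring sections I..V."""
    f = np.asarray(axial_forces_kN)
    return f[1:] - f[:-1]

# hypothetical gauge strains for sections I..V (three gauges per section)
section_strains = [
    [1.0e-4, 1.2e-4, 1.1e-4],
    [3.0e-4, 3.2e-4, 3.1e-4],
    [6.0e-4, 6.5e-4, 6.2e-4],  # mid-span section, largest strains
    [3.0e-4, 3.1e-4, 3.0e-4],
    [1.0e-4, 1.1e-4, 1.0e-4],
]
forces = [section_axial_force(s, [400.0, 400.0, 400.0]) for s in section_strains]
print(local_shear_forces(forces))  # kN per region
```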
Conclusions

The thermal behavior of tunnel segment joints exposed to fire and the strengthening of the fire-damaged joints with CFSTs were investigated experimentally. The effect of the joint gap on the temperature distribution and the residual performance of the fire-damaged segment joint were studied, and a method for strengthening fire-damaged segment joints was demonstrated. The experimental observations support the following conclusions.
(1) When the initial joint opening is smaller than 10 mm, the temperature at most positions in the joint gap remains lower than in the neighboring concrete for over 60 min of exposure to fire. After 120 min, however, the temperature in the joint gap is higher than in the surrounding concrete.
(2) The temperature of the bolt ends is much higher than that of a bolt's mid-point throughout the heating process. To prevent the temperature of the joint bolts from exceeding the allowable value in a real tunnel fire, they should be insulated.
(3) The bearing capacity and flexural stiffness of the segment joints after a fire can be significantly improved by strengthening with CFSTs. They may even surpass those of untreated joints.
(4) After exposure to fire for 60 or 120 min, specimens strengthened with the proposed CFSTs show considerably smaller joint opening increments than untreated joints under equal loads.
(5) Strengthening near a joint plays a crucial role during initial loading, but its effectiveness gradually decreases, and strengthening far from the joint becomes more important as the loading increases.

Figure 4. Setup of fire test.
Figure 5. Measured temperature-time curves in the furnace chamber.
Figure 8. Cracks of specimen. (a) Cracks on the soffit; (b) cracks on the front and back surfaces.
Figure 10. Specific heat of concrete as a function of temperature with different moisture contents.
Figure 11. Comparison of calculated and recorded temperature-time curves related to Section 2-2 of Specimen S120C.
Figure 12 compares the measured temperatures at Section 1-1 with the calculated temperatures at Section 2-2 for the different specimens.
Table 1. Details of the test arrangement. a Moisture content was measured before the fire test. b FAC stands for fine aggregate concrete. c Initial joint opening is the gap at the joint before the fire test.
Table 4. Ultimate loads (Fv,max) and initial stiffness.
2019-05-01T23:11:50.933Z
2019-04-29T00:00:00.000
{ "year": 2019, "sha1": "c8359be6f0259aa87f7f161bcb0f225f78120956", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/9/9/1781/pdf?version=1556527522", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "c8359be6f0259aa87f7f161bcb0f225f78120956", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
219509356
pes2o/s2orc
v3-fos-license
Culturing Periprosthetic Tissues in BacT/Alert® Virtuo Blood Culture Bottles for a Short Duration of Post-operative Empirical Antibiotic Therapy

Introduction: A post-operative empirical antibiotic therapy (PEAT) is required in periprosthetic joint infections. It commonly uses broad-spectrum antibiotics to cover most Gram-positive cocci and Gram-negative bacilli. It is currently continued until the first microbiological results are available, no less than five days later. Methods: We performed a retrospective study in order to evaluate the duration of incubation required for surgical samples using the BacT/Alert® Virtuo blood culture bottle system. Results: Among 216 surgical interventions and 199 clinical strains (53.8% staphylococci, 22.1% streptococci and enterococci, 14.6% Gram-negative bacilli, 5.5% anaerobes), 90.5% of the strains were detected between day 0 and day 2; 15 infective strains were cultured from day 3 onward, including 8 Cutibacterium sp., 4 staphylococci, 2 streptococci and 1 Enterococcus. Conclusions: We suggest that the duration of PEAT in patients operated on for a periprosthetic joint infection may be shortened to three days, as Gram-negative rods are unlikely to grow after three days of culture in BacT/Alert® Virtuo blood culture bottles. This is likely to shorten the overall length of hospital stay and to diminish the occurrence of adverse side effects and the emergence of antimicrobial resistance. However, coverage of Gram-positive cocci should be maintained for 14 days until the definitive culture results are available.

Introduction
Periprosthetic joint infections (PJIs) require both surgical intervention and a prolonged course of intravenous and oral antibiotic therapy conducted in light of the most recent guidelines for the management of these potentially life-threatening infections [1,2]. The susceptibility profile of bacteria isolated from the intraoperative samples requires at least 3 to 5 days to become available. Until the results of the intraoperative sample cultures are available, an initial broad-spectrum post-operative empirical antibiotic therapy (PEAT) is needed in order to prevent the colonization of newly placed implants or of a prosthesis that has been cleaned but not removed during the so-called debridement, antibiotics and implant retention (DAIR) intervention. PEAT needs to cover most Gram-positive cocci (including methicillin-resistant staphylococci) and most Gram-negative bacilli, as it is usually difficult to reliably identify the pathogen(s) before the revision, even by means of a joint aspiration. In addition, PEAT may favor the emergence of antibacterial resistance, and it also increases the antibiotic-related adverse effects and the overall cost of the treatment. Moreover, delayed adequate antibiotic treatment of PJI is associated with worse outcomes [2,3]. For these reasons, rapid detection of infection is of paramount importance. A two-week duration of culture is generally proposed to allow the isolation of slow-growing organisms [4], but the optimal duration is actually unknown [2]. In this context, the use of semi-automated methods such as an automated blood culture system may be an attractive alternative to enrichment broths: the bottles contain antibiotic adsorbents and do not require daily inspection, a source of contamination. We set up in our laboratory the culture of these samples in aerobic and anaerobic bottles incubated in the BacT/Alert® Virtuo blood culture bottle system.
The main objective of the present study was to evaluate the duration of incubation required, using the BacT/Alert® Virtuo blood culture bottle system, to obtain microbiological results for the diagnosis of PJIs, regardless of their sensitivity and specificity. The second objective was to determine the minimum duration of PEAT until reliable microbiological results are available.

Definitions
PJIs were diagnosed in accordance with the Infectious Diseases Society of America definition [2].

Study design and population
This retrospective study was performed at the French National Reference Centre for Complex Osteoarticular Infections in the North West region of France (CRIOAC Lille-Tourcoing, France). The medical charts of all adult patients with documented PJI who received PEAT from January 2018 to December 2018 were reviewed. All patients included in this study had surgical management including DAIR and one- or two-stage replacement.

Surgical management and curative antibiotic therapy
All surgical procedures were performed without antibiotic prophylaxis in patients who had not received antibiotics within two weeks before the intervention [2,5]. In accordance with our epidemiological data [6], PEAT, consisting of ceftobiprole or a combination of cefepime and daptomycin as first-line choices, was started intravenously as soon as the perioperative samples were taken. According to our current protocol, PEAT was continued until the first results of the intraoperative sample cultures were available, i.e., the fifth day after surgery, and was then modified in accordance with the culture results (culture-based antibiotic therapy).

Microbiology
During the surgical procedures, at least 3 samples were taken from different areas suspected of being infected, using a separate sterile instrument for each sample, and were sent to the laboratory within 2 hours. The samples were processed in a class 2 biosafety cabinet. Firstly, solid samples were homogenized (Ultra Turrax®, IKA, Staufen, Germany) without sonication in sterile water for one minute. Each sample was then plated for five days at 35°C onto Columbia agar with 5% blood and chocolate agar with PolyViteX, and cultured for 14 days in aerobic and anaerobic bottles (1 mL in each bottle) in the BacT/Alert® Virtuo blood culture bottle system (bioMérieux, Marcy l'Etoile, France). Strains were identified using MALDI-TOF mass spectrometry (Bruker Daltonics, Wissembourg, France) with a minimal score requirement of 2. The antibiotic susceptibility profile of all pathogens identified from intraoperative samples was assessed either by Vitek 2 cards (bioMérieux, Marcy l'Etoile, France) or by the agar diffusion technique, using the procedure and interpretation criteria proposed by the Comité de l'Antibiogramme de la Société Française de Microbiologie (CA-SFM EUCAST 2018) (http://www.sfm-microbiologie.org).

Ethical considerations
All patients' collected data were anonymized and recorded on a standardized form preventing any personal identification, according to procedures defined by the French information protection commission (Commission Nationale de l'Informatique et des Libertés, CNIL).

Patients
During the study period, we identified 106 patients managed for PJI (57 hip prostheses, 39 knee prostheses, 9 shoulder prostheses, 4 elbow prostheses and 4 other prostheses). A total of 216 surgical interventions were recorded for these 106 patients. The demographic characteristics of the included patients are reported in Table 1.
Microbiology
Over the 216 reported surgical procedures, microorganisms grew in the blood culture bottles in 149 cases (69.0%). Among these positive surgical procedures, 58 remained sterile on Columbia agar with 5% blood and chocolate agar with PolyViteX, and 91 were also positive on the solid agar cultures as well as in the bottles (Table 2). Among the 149 positive surgical procedures, 141 were detected as positive within the first five days, accounting for 95.5% of the strains cultured (190/199, including redundant strains sampled from the same patients in different surgeries). Overall, a total of 199 clinical strains were identified from the intraoperative samples (Table 3). Staphylococcus spp. accounted for 53.8% of all strains, especially coagulase-negative staphylococci (CoNS) (31.7% of all strains). Streptococcus spp. and Enterococcus spp. accounted for 22.1% of all strains and Enterobacteriales for 12.6%. Among the non-fermenting Gram-negative bacilli, 3 strains of Pseudomonas aeruginosa and 1 strain of Acinetobacter baumannii were identified. Finally, for 2 patients, yeasts were identified. The 19 strains cultured from D3 onward are described more precisely in Table 4 (C. acnes and C. avidum excluded). Among them, 3 were contaminant strains. The 4 infecting staphylococci and the 1 E. faecalis had always been cultured in previous surgeries of these patients. The strains of S. gordonii and S. adiacens were wild-type strains.

Discussion
Some requirements are commonly accepted for microbiologic studies of PJIs: among them, the use of both solid and liquid culture media is recommended [2,3,5]. However, the optimal duration of incubation of specimens is unknown [2]. A duration of 14 days for liquid culture media is commonly used in order to allow the isolation of slow-growing organisms such as "micro colony variants" and C. acnes, and in cases of suspected PJIs with low-virulence organisms [1,4,7]. Until the microbiological results are available, an empirical broad-spectrum antibiotic therapy is usually prescribed in an attempt to cover most Gram-positive cocci and Gram-negative bacilli. Currently in our hospital, we wait until 5 days of culture before modifying this PEAT in accordance with the microbiological findings. However, this treatment exposes the patient to the selection of bacterial resistance and also increases both the antibiotic-related adverse effects and the overall cost of the treatment. Thus, accelerating the microbiological diagnosis could help reduce the duration of broad-spectrum antibiotic therapy and could allow oral therapy in most cases. Indeed, as soon as the results are available, the coverage of Gram-negative rods can be stopped and the empirical PEAT focused on Gram-positive cocci (including methicillin-resistant strains), which represents a significant de-escalation in terms of antibacterial spectrum. If samples do not grow by day 5, the aerobic and anaerobic bottles are incubated until they become positive (maximum day 14). From day 6 to day 14, the most frequently isolated strain is C. acnes, a bacterium sensitive to antibiotics that does not require a broad-spectrum antibiotic such as daptomycin. Indeed, in our study, only C. acnes and C. avidum (and one strain of S. gordonii) were cultured from D6 onward.
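As a minimal sketch of the cumulative-detection arithmetic behind these percentages (e.g., 190/199 = 95.5% by day 5), the snippet below uses a hypothetical per-day breakdown that is merely consistent with the reported totals; the actual daily counts are not given above.

```python
# Hypothetical split of the 199 strains by day of detection; only the
# totals (180 by D2 -> 90.5%, 190 by D5 -> 95.5%, 199 overall) match
# the figures reported in the study. The key 14 pools days 6-14.
detected_by_day = {0: 120, 1: 45, 2: 15, 3: 6, 4: 2, 5: 2, 14: 9}

total = sum(detected_by_day.values())  # 199 strains in all
cumulative = 0
for day in sorted(detected_by_day):
    cumulative += detected_by_day[day]
    print(f"day {day:2d}: {cumulative}/{total} = {100 * cumulative / total:.1f}%")
```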
In the meantime, semi-automated blood culture systems have been proposed and evaluated for the culture of periprosthetic tissue specimens. Several studies pointed out an improvement in the sensitivity of detection of organisms causing PJI compared to conventional culture methods [8,9,10,11]. Recently, Sanabria et al. studied the BacT/Alert® Virtuo blood culture bottle system and showed that it was slightly more sensitive than conventional methods [12]. Other advantages include a reduced number of specimens required (3 for the most accurate diagnosis in a study performed in 2016) [13], potential savings in personnel time and labor costs [14], and a lower risk of contamination linked to the daily inspection of plates. Moreover, it showed higher sensitivity than conventional culture methods in cases with previous antibiotic treatment, since the bottles contain antimicrobial removal systems [15,16]. Another advantage is a shorter time to microorganism detection compared to conventional culture methods [12,15]. In our study, all strains except C. acnes, one strain of C. avidum and one strain of S. gordonii grew within the first five days, accounting for 94.6% of positive surgical procedures. In the study by Peel et al., no organism was isolated in aerobic bottles after 7 days of incubation, and only Cutibacterium sp. grew in anaerobic bottles after 7 days of incubation [8]. Minassian et al. reported in 2014 that the optimal combination of sensitivity and specificity occurred at day 3 [10]. In our study, among the 199 strains cultured in 149 surgical procedures, 15 infective strains were cultured from D3 onward, including 7 C. acnes, 1 C. avidum, 4 staphylococci, 2 streptococci and 1 enterococcus. All Gram-negative bacilli were detected with this method at D0 or D1. These results are of major importance since they imply that PEAT could reasonably be changed 72 h after the surgical procedure to a narrower-spectrum antibiotic in accordance with the microbiological results obtained at D3. For the samples still sterile at D3, an antibiotic therapy effective against C. acnes is maintained until D14. Indeed, in our study, we point out that by modifying antibiotic therapy in accordance with the microbiological results obtained at D3, all of the Gram-negative bacilli are taken into account (all were detected by D2). Of the infective strains detected from D3 onward, all but one had previously been isolated from the same patients in other surgeries. The duration of 72 h may thus be extended for patients with a previous microbiological history. The last strain, a S. gordonii, had a wild-type antibiotic susceptibility profile and would have been covered by the proposed anti-Cutibacterium sp. antibiotic therapy. PEAT frequently uses a third-generation cephalosporin or cefepime, which could be stopped on the third day after surgery, allowing potential oral therapy. In addition to the benefits for the patient in terms of fewer adverse side effects and lower resistance selection, a reduced PEAT duration also represents an economic gain for the hospital. In our study, C. acnes and C. avidum were detected by the BacT/Alert® Virtuo blood culture bottle system between the fifth and the fourteenth day of incubation. Therefore, we suggest the need for a prolonged incubation of the bottles for 14 days for these anaerobes.

Conclusion
Our results suggest that the duration of PEAT in patients operated on for a PJI may be shortened to three days, as Gram-negative rods are unlikely to grow after three days of culture in BacT/Alert® Virtuo blood culture bottles. This is likely to shorten the overall length of hospital stay and to diminish the occurrence of adverse side effects and the emergence of antimicrobial resistance.
However, coverage of Gram-positive cocci should be maintained for 14 days until the definitive culture results are available.

Authors' Contributions
CL and ES conceived and designed the study. CL and FW obtained the original data and analyzed the data. CD wrote the manuscript. CL, ES, HM and FW contributed to revising the manuscript.
2020-05-28T09:17:08.934Z
2020-05-16T00:00:00.000
{ "year": 2020, "sha1": "d78a27854b854fc115ba04cf55d54f798580f038", "oa_license": "CCBY", "oa_url": "https://jbji.copernicus.org/articles/5/145/2020/jbji-5-145-2020.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "60a694df6f64077db6fcaeb9be5d585c8412fe3c", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
252166749
pes2o/s2orc
v3-fos-license
ParaTTS: Learning Linguistic and Prosodic Cross-sentence Information in Paragraph-based TTS
Liumeng Xue, Frank K. Soong, Shaofei Zhang, Lei Xie

Abstract - Recent advancements in neural end-to-end TTS models have shown high-quality, natural synthesized speech in a conventional sentence-based TTS. However, it is still challenging to reproduce similar high quality when a whole paragraph is considered in TTS, where a large amount of contextual information needs to be considered in building a paragraph-based TTS model. To alleviate the difficulty in training, we propose to model linguistic and prosodic information by considering cross-sentence, embedded structure in training. Three sub-modules, including linguistics-aware, prosody-aware and sentence-position networks, are trained together with a modified Tacotron2. Specifically, to learn the information embedded in a paragraph and the relations among the corresponding component sentences, we utilize linguistics-aware and prosody-aware networks. The information in a paragraph is captured by encoders, and the inter-sentence information in a paragraph is learned with multi-head attention mechanisms. The relative sentence position in a paragraph is explicitly exploited by a sentence-position network. Trained on a storytelling audio-book corpus (4.08 hours), recorded by a female Mandarin Chinese speaker, the proposed TTS model demonstrates that it can produce rather natural and good-quality speech paragraph-wise. The cross-sentence contextual information, such as break and prosodic variations between consecutive sentences, can be better predicted and rendered than with the sentence-based model.
Tested on paragraph texts whose lengths are similar to, longer than, or much longer than the typical paragraph length of the training data, the TTS speech produced by the new model is consistently preferred over the sentence-based model in subjective tests, and this is confirmed by objective measures.

I. INTRODUCTION
The development of sequence-to-sequence (seq2seq) based neural acoustic models [1,2,3,4,5,6] and neural vocoders [7,8,9,10] brought a significant improvement to text-to-speech (TTS) synthesis quality, allowing the automatic synthesis of human-like natural speech with high fidelity. TTS is widely applied in many scenarios, such as voice assistants, navigation, smart customer service, and audiobooks, just to name a few, due to its high efficiency and low cost compared to manual recordings. The audiobook is an important TTS application in rendering the text of stories into expressive voices. The text of an audiobook is composed of successive sentences, paragraphs, sections and chapters in a coherent and hierarchical form to describe a fictional story or the thoughts of an author [11,12]. Moreover, such a coherent and hierarchical relationship in the text has an impact on how it is uttered in human voice formation. A paragraph, consisting of one or more sentences, is a self-contained unit of a discourse in writing for conveying a particular point or idea. Prosodic patterns in paragraph audios have been observed [13,14,15]. For instance, pitch resets, i.e., higher pitch and an increased pitch range, are usually observed at the beginning of a new paragraph. A similar reset pattern has also been found in energy [16] or RMS amplitude [17,18,19]. Declination: the tendency of pitch and energy to decline over the paragraph, ending with low pitch and energy. Lengthening: the speech rate peaks in the middle of a paragraph, with lengthening in the initial and final positions.

To synthesize speech on a paragraph basis, a straightforward way is to synthesize each sentence in a paragraph and then combine them. A similar approach has been used in news and navigation broadcast applications [20]. However, there are three disadvantages: (i) an additional post-processing step is necessary to integrate the individual sentences; (ii) when combining sentences, the break duration between two consecutive sentences must be considered to ensure that the integrated paragraph speech sounds natural. Pauses play a significant role in storytelling, and pause prediction was studied for early statistical parametric speech synthesis (SPSS) [21,22,23]; (iii) the prosody in the combined paragraph speech may become inconsistent and perceptually unsmooth, varying greatly from one sentence to the next and resulting in unnatural transitions. This issue can be alleviated by minimizing the acoustic variation and linguistic distance between a sentence and the previous one [24]. All these issues make the process of paragraph speech synthesis complex and challenging. Furthermore, it is not a suitable method for paragraph-level speech synthesis because prosodic and acoustic differences exist between sentences spoken in isolation and in a paragraph [25,26,27,28]. An alternative solution is to synthesize speech at the paragraph level directly. This is feasible in the current mainstream seq2seq TTS frameworks for long-form speech synthesis [29,30], but it still tends to bring listening fatigue to the listeners because the model does not provide appropriate prosody information in a paragraph.
Evidence has shown that paragraph-level or discourse-level prosody improves the naturalness and expressiveness of SPSS [31,32], but annotations of discourse structure or prosodic properties are required. Preparing a large, well-annotated speech dataset is usually too time-consuming and expensive to be practical. In this paper, we aim at building a natural and expressive paragraph-based end-to-end TTS in a data-driven way. Constrained by memory and computing resources, we train the model sentence-wise, incorporating paragraph-based contextual information into the training process. Without discourse-level structure annotation, we try to learn paragraph linguistic knowledge from the text itself. Specifically, we adopt a paragraph text encoder to extract a high-level paragraph linguistic representation. Regarding prosody, we use typical prosody features including pitch, energy and duration extracted from the corresponding paragraph audios and capture paragraph prosodic information via a paragraph prosody encoder. Meanwhile, we apply multi-head attention mechanisms to learn the inner linguistic and prosodic relations between sentences and their paragraph, and a sentence-position network is used to further enhance the relevant context of sentences in the paragraph. The contributions of this paper are summarized as follows:
• This paper presents, to our knowledge, the first attempt to build a paragraph-based, end-to-end TTS with the linguistic and prosodic knowledge learned in a data-driven way. The sentence-position network adopted in our model provides simple yet effective features of individual sentences in multi-sentence paragraph generation and improves the naturalness of the synthesized paragraph.
• Experimental results show that our proposed model can achieve good performance in generating natural, good-quality paragraph-based speech. We demonstrate that the proposed model can also learn more accurate pause durations between consecutive sentences and has a good generalization capability of producing natural speech for a given long paragraph, which can be longer, or much longer, than the paragraphs used in the training data.
• We also find that it is subjectively rather difficult to evaluate a long paragraph due to the relatively short memory in a sequential audio listening test, as indicated earlier in [33].

II. RELATED WORKS
Paragraph-related works. A paragraph is a self-contained unit of a discourse in writing dealing with a particular point or idea, and a paragraph in spoken discourse carries a variety of information. Discourse relations (DR), expressing how different segments (i.e., elementary discourse units) of a text are logically connected, have been studied and used to improve the naturalness of statistical parametric speech synthesis (SPSS) at the sentence level [34]. In addition to the discourse structure itself, some works have studied the correlations between discourse and prosody [35], demonstrating that discourse structure contributes to overall discourse prosody. Furthermore, discourse structure and prosody were applied to Hidden Markov Model (HMM) based TTS to improve the prosody of passage-synthesized speech [31]. Moreover, in addition to the intra-paragraph prosody patterns, inter-paragraph prosody patterns have also been investigated and then implemented in SPSS to improve the naturalness of synthesized articles' speech [32].
Our work differs from these in the following two aspects: (i) they used SPSS while we use the current mainstream end-to-end TTS approach; (ii) they need discourse relation or prosody annotations for TTS modeling while we do not need any annotations for model training. Recently, a chapter-wise understanding system for TTS in Chinese novels [36] is related to our work, in which the chapter-wise understanding system realizes two text understanding tasks in Chinese novels, speaker determination and emotion classification, for speech synthesis with various voices and emotional expressions. The differences between this work and ours are that: (i) they mainly focus on speakers and emotions from the text while we focus on linguistic and prosodic knowledge from both the text and the corresponding audio; (ii) they capture information at the chapter level for audiobook speech synthesis while we learn information at the paragraph level for paragraph speech synthesis.

Context-related works. There has been a wide range of research focused on learning or extracting contextual information to improve the performance of TTS. Multiple studies used textual context information extracted from text to improve sentence prosody [37,38] or cross-sentence prosody [39] for sentence-based speech synthesis, or to capture conversation information for conversational speech synthesis [40]. Specifically, the textual context information can be semantics-related features extracted by pre-trained models [39,40], i.e., BERT [41], or syntax-related features represented by parse trees [38,42] or statistics [37,40]. Apart from the textual context, it has been reported that acoustic features from the previous sentence can also lead to improvement of sentence-based TTS [43]. Following this result, a comparative investigation of multiple context representation types of the previous sentence was conducted [44], including textual and acoustic features, utterance-level and word-level features, and representations extracted with a large pre-trained model or learned jointly with the TTS training. Our work differs from these in two respects: (i) the contextual information used in these models was derived from either isolated sentences or consecutive sentences of a predefined length, while in our work the contextual information is extracted from a variable-length paragraph, which is a self-contained unit of discourse composed of several connected sentences; (ii) cross-sentence linguistic context was used to improve sentence-level or conversation-level speech synthesis in their models, whereas both linguistic and prosodic cross-sentence context information is used in our work to improve paragraph-level speech synthesis.

III. THE PROPOSED MODEL
The architecture of the proposed paragraph TTS model, or ParaTTS in short, is presented in Fig. 1. It consists of a modified Tacotron2 as the TTS backbone to generate mel spectrograms from a given phoneme sequence, a linguistics-aware network and a prosody-aware network to learn the linguistic and prosodic knowledge of the whole paragraph and the relationship between sentences and the paragraph, and a sentence-position network to enhance the correlated context of sentences and their paragraph.

In this work, phoneme sequences are used as text inputs. Given a text, i.e., a Chinese character sequence, it is converted to the corresponding phoneme sequence by a front-end module which includes text normalization, part-of-speech tagging and grapheme-to-phoneme conversion.
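The front-end conversion can be sketched as follows. The paper does not name its front-end tools, so the pypinyin library is used here purely as an illustrative stand-in, with tone-numbered pinyin syllables standing in for the actual phoneme set; a production front-end would also perform text normalization and part-of-speech tagging.

```python
# Illustrative grapheme-to-phoneme step for Mandarin text. pypinyin is
# an assumption for this sketch, not the paper's actual front-end.
from pypinyin import Style, lazy_pinyin

def text_to_phonemes(text: str) -> list[str]:
    # Tone-numbered pinyin syllables as a stand-in for the phoneme set.
    return lazy_pinyin(text, style=Style.TONE3)

print(text_to_phonemes("公主讲故事"))  # e.g. ['gong1', 'zhu3', 'jiang3', 'gu4', 'shi4']
```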
If not stated otherwise, we directly use the phoneme sequences as the inputs of the model for simplicity.

A. The Modified Tacotron2
The architecture of the modified Tacotron2 is an attention-based encoder-decoder TTS model. Different from the vanilla Tacotron2 [2], we use a CBHG [45] encoder instead of an LSTM encoder because the former is a powerful module for extracting representations from sequences. The encoder is composed of a phoneme embedding layer, a pre-net of 2 fully connected layers and a CBHG. The CBHG is composed of a bank of 1-D convolutional filters, followed by a highway network [46] and a bidirectional gated recurrent unit (GRU) [47] recurrent neural network (RNN). Moreover, we adopt the GMMv2b attention mechanism [29]. The GMMv2b is robust for long sequences because it alleviates occasional catastrophic attention failures, such as repeating or skipping.

The encoder input is the phoneme sequence of a sentence, x_s, in training, or the phoneme sequence of a paragraph, x_p, in inference. Although the inputs differ, they are processed in the same way. Given a phoneme sequence x = (x_1, x_2, ..., x_n), where n is the length of the phoneme sequence, the CBHG encoder encodes it into a hidden state h = (h_1, ..., h_n):

h = Encoder(x),

where h represents the extracted high-level phoneme representation. Then the decoder generates the current output s_t conditioned on the previous prediction at each step:

s_t = Decoder(s_{t-1}, c_t),

where c_t is the context vector calculated by the attention mechanism, which encourages the decoder to attend to the important encoder hidden states when generating the output:

c_t = Attention(s_{t-1}, h).

Thus the speech sequence y = (y_1, ..., y_T) is generated from the input phoneme sequence x based on the conditional probability p(y_1, ..., y_T | x_1, ..., x_n), which can be factorized as:

p(y_1, ..., y_T | x) = prod_{t=1}^{T} p(y_t | y_1, ..., y_{t-1}, x).

A linear projection function f is used to predict the acoustic features directly from the decoder outputs s.
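As a minimal sketch of the decode step in the equations above, the snippet below computes attention weights over the encoder states and one decoder update in PyTorch. The dimensions, the GRU cell, and the plain dot-product attention are illustrative assumptions; the model itself uses GMMv2b attention.

```python
import torch
import torch.nn.functional as F

# One autoregressive decoder step: attention weights over encoder
# states h give a context c_t, and the decoder cell produces s_t from
# (s_{t-1}, c_t). All sizes here are hypothetical.
d = 256
h = torch.randn(1, 50, d)            # encoder hidden states, (batch, n, d)
s_prev = torch.zeros(1, d)           # previous decoder state s_{t-1}
cell = torch.nn.GRUCell(2 * d, d)    # decoder recurrence (hypothetical choice)

scores = torch.bmm(h, s_prev.unsqueeze(2)).squeeze(2)   # (1, n) alignment energies
alpha = F.softmax(scores, dim=1)                         # attention weights
c_t = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)        # context vector c_t, (1, d)
s_t = cell(torch.cat([s_prev, c_t], dim=1), s_prev)      # new decoder state s_t
```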
B. The Linguistics-aware Network
The linguistics-aware network consists of a paragraph text encoder with a multi-head attention mechanism, designed to learn the linguistic information of the entire paragraph and the relationships between the component sentences in a paragraph. The paragraph text encoder is also a CBHG, as described in III-A, and is used to extract a high-level paragraph linguistic representation, h_p = (h_p1, h_p2, ..., h_pm), from the given paragraph phoneme sequence, x_p = (x_p1, x_p2, ..., x_pm), where m is the length of the paragraph phoneme sequence.

In the attention mechanism, attention scores reflect the importance of the key vector with respect to the query vector, allowing the query vector to concentrate on parts of the key vector [48]. Inspired by this, an attention mechanism is employed in this work to capture the important parts of the paragraph phoneme sequence for each sentence phoneme sequence, providing an intrinsic characterisation of a sentence within its paragraph and, at the same time, compensating for the lack of paragraph-related information when training the model on each sentence unit. Furthermore, the multi-head attention mechanism [49] is utilized to explore the dependencies in different representation subspaces of the vectors.

During training, the sentence hidden representation, h_s = (h_s1, h_s2, ..., h_sn), which is encoded from the sentence phoneme sequence x_s = (x_s1, x_s2, ..., x_sn) (n is the length of the sentence phoneme sequence) via the text encoder of the modified Tacotron2, is used as the query vector Q, and the paragraph hidden representation h_p from the encoder of the linguistics-aware network is used as the key vector K and value vector V. Thus, a linguistic context vector c_l = (c_l1, c_l2, ..., c_ln), representing the linguistic correlation between the sentence and the paragraph, is calculated as follows:

c_l = MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O,    (6)
head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V),    (7)
Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V,    (8)

where h is the head number of the multi-head attention mechanism and i indexes the i-th head. W^O, W^Q, W^K and W^V are different projection matrices for the output, query, key and value, and d_k is the dimension of the vector K.

The final state of the last bidirectional GRU layer in the paragraph text encoder accumulates the information forward and backward through the whole paragraph phoneme sequence; that is h_pm, the last element of the paragraph linguistic representation h_p = (h_p1, h_p2, ..., h_pm). We view it as a condensed paragraph linguistic representation and add it to the context vector c_l to form the output of the linguistics-aware network. This output contains the linguistic information of the paragraph and the relationship between the sentence and its paragraph. Finally, we add it to the encoder output of the TTS backbone model to make the model aware of the paragraph-related linguistic knowledge.
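A minimal sketch of the linguistic context computation in Eqs. (6)-(8), using PyTorch's built-in multi-head attention; the embedding size, head count, and sequence lengths are hypothetical.

```python
import torch

# Sentence hidden states query the paragraph representation through
# multi-head attention, as in Eqs. (6)-(8). Dimensions are illustrative.
d, heads = 256, 4
attn = torch.nn.MultiheadAttention(embed_dim=d, num_heads=heads, batch_first=True)

h_s = torch.randn(1, 40, d)    # sentence phoneme representation (query)
h_p = torch.randn(1, 200, d)   # paragraph phoneme representation (key/value)

c_l, _ = attn(query=h_s, key=h_p, value=h_p)   # linguistic context, (1, 40, d)
out = c_l + h_p[:, -1:, :]     # add the condensed paragraph vector h_pm (broadcast)
```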
C. The Prosody-aware Network
The prosody-aware network consists of four submodules: a paragraph prosody extractor, a paragraph prosody predictor, a paragraph prosody encoder, and a multi-head attention mechanism. The input of the paragraph prosody encoder is a 3-dimensional, phoneme-level, paragraph prosody feature vector. In training, these features are extracted from the corresponding paragraph speech by the paragraph prosody extractor. In inference, the prosody features are predicted by the paragraph prosody predictor, which is a 64-unit GRU layer followed by a 3-unit dense layer. The 3-dimensional prosody features comprise mean-variance normalized logarithmic fundamental frequency (LF0), intensity, and duration.

The paragraph prosody encoder is similar to but different from the reference encoder depicted in [50]. It consists of 6 convolution layers with batch normalization, followed by a 128-unit GRU layer. Here, we use the output of each time step in the GRU as a variable-length paragraph prosody representation, denoted as h_q = (h_q1, h_q2, ..., h_qm), which has the same length (m) as the paragraph phoneme representation because the prosody feature input is at the phoneme level.

Similar to the linguistics-aware network, a multi-head attention mechanism is also used to associate the component sentences with the corresponding paragraph. Specifically, the query vector Q is the sum of the encoder output of the TTS backbone and the output of the linguistics-aware network. The key vector K and the value vector V are both the paragraph prosody representation h_q. Accordingly, the prosodic context vector c_p = (c_p1, c_p2, ..., c_pn), representing the prosodic relationship between a sentence and its parent paragraph, can be calculated by Eqs. (6)-(8). Additionally, the final state of the GRU in the paragraph prosody encoder, i.e., the last element h_qm of the paragraph prosody representation h_q = (h_q1, h_q2, ..., h_qm), can be viewed as a compressed paragraph prosody representation. We add it to the prosodic context vector c_p to form the output of the prosody-aware network. This output represents the prosodic knowledge related to the paragraph and the relations among the component sentences in the paragraph. To make the TTS model aware of the prosodic information, we add it to the encoder output of the TTS backbone model.

In inference, the trained paragraph prosody predictor is used to predict the 3-dimensional, phone-level prosody features of the paragraph. The predictor input is the sum of the output of the linguistics-aware network and the encoder output of the TTS model. We conjecture that this input can predict the paragraph-level prosody features, since it contains rich paragraph-relevant information. We train the paragraph prosody predictor using an L1 loss between the predicted and extracted prosody features and stop the gradient flow to ensure that the prosody prediction error does not affect the linguistics-aware network and the encoder of the TTS backbone.
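A minimal sketch of the paragraph prosody predictor as described above (a 64-unit GRU followed by a 3-unit dense layer). The 256-d input size follows the model dimension used elsewhere in the text; the rest is an illustrative assumption.

```python
import torch

class ProsodyPredictor(torch.nn.Module):
    """Sketch of the paragraph prosody predictor: a 64-unit GRU followed
    by a 3-unit dense layer, mapping the 256-d conditioning sequence
    (encoder output plus linguistic context) to phoneme-level
    [LF0, intensity, duration]. Trained with an L1 loss; gradients from
    this loss would be stopped before the conditioning input."""
    def __init__(self, d_in: int = 256):
        super().__init__()
        self.gru = torch.nn.GRU(d_in, 64, batch_first=True)
        self.proj = torch.nn.Linear(64, 3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.gru(x)   # (batch, m, 64)
        return self.proj(out)  # (batch, m, 3) prosody features

pred = ProsodyPredictor()
features = pred(torch.randn(1, 200, 256))  # one 200-phoneme paragraph (hypothetical)
```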
D. The Sentence-position Network
Distinctive differences in prosody can be found between the beginning, middle, and end sentences of a paragraph [35]. We analyze the corresponding prosody patterns in paragraphs in Section IV-A. To incorporate this prosodic information into the TTS model, we utilize a sentence-position network module composed of an up-sampling layer followed by a linear layer. The input to the sentence-position network is a 3-dimensional one-hot code marking the first, middle or last sentence in the paragraph. The up-sampling layer up-samples the position code from sentence level to phoneme level by replicating it. Finally, the linear layer projects the 3-dimensional, phone-level position code to the preset dimension of 256, facilitating the addition operation with the encoder output of the TTS backbone model.

An example of the sentence position code is illustrated in Fig. 2. Given a sentence, its position in the paragraph can be determined, and the corresponding phoneme sequence can be obtained through the front-end module. The sentence position codes of the first, the last and the middle sentences are encoded as 0, 2 and 1, respectively. The sentence position code is then up-sampled from sentence level to phoneme level according to the length of the corresponding phoneme sequence.

Fig. 2. Sentence position codes for a four-sentence example paragraph (0 = first sentence, 1 = middle sentences, 2 = last sentence) and their up-sampling to the phoneme level.

The TTS backbone model, conditioned on (1) the sentence position code and (2) the linguistic and prosodic information of the paragraph, generates the mel spectrograms of sentences from the phoneme sequences of sentences in training, or the mel spectrograms of paragraphs from the phoneme sequences of paragraphs in inference. The total loss L of the proposed model to be optimized is

L = lambda_1 L_recon + lambda_2 L_stop + lambda_3 L_prosody,

where L_recon is the mel spectrogram reconstruction loss (mean square error, MSE), L_stop is the stop token loss (cross entropy), and L_prosody is the prosody prediction loss (MSE). lambda_1, lambda_2 and lambda_3 are the weights of the corresponding losses.

IV. EXPERIMENTS
We first introduce the basic information of the corpus used in our experiments and then analyze the statistics of the corpus and the intra-paragraph and inter-paragraph patterns. For the experimental tests, we calculate objective metrics and also conduct subjective evaluations to measure the performance of the proposed model in generating paragraph speech.

A. Corpus Information and Analysis
Basic information. In this work, we use a fairy-tale audiobook corpus to train and evaluate the proposed model. The information of the corpus is listed in Table I. The corpus contains 44 stories recorded by a Chinese female mimicking children's voices, about 4.27 hours in total. We randomly select 40 stories as the training set and split them into 801 paragraphs and 2,525 utterances, about 4.08 hours. The remaining 4 stories are used as the test set, which is split into 32 paragraphs.

Statistical analysis. Text length (in terms of sentences and Chinese characters) and speech duration (in seconds) statistics are presented in Fig. 3, in which (a), (b) and (c) are the distributions of the number of sentences in a paragraph and the number of Chinese characters in a sentence and in a paragraph, respectively. On average, each paragraph has 3 sentences and 55 Chinese characters, and each sentence has 17 Chinese characters. Additionally, (d) and (e) present the distributions of the durations of sentences and paragraphs. The average durations of a sentence and a paragraph are 6.5 and 11.4 seconds, respectively.

Intra-paragraph prosody patterns analysis. To understand the variation of prosody features within a paragraph, we perform a statistical analysis of the individual prosodic features of sentences in the first, middle and last positions of a paragraph. Specifically, we calculate the mean values of the sentence-level prosody features at the different positions and plot the variation curves, as exemplified in Fig. 4, where two prosody patterns are observed. Declination: pitch and intensity decline along the paragraph. Lengthening: the speech rate is faster in the middle than in the initial and final positions of a paragraph, with lengthening in the initial and final positions. We observe that the range of the prosodic features across the three sentence positions is not large, which may be attributed to two reasons: the scale of the corpus is relatively small, and the speaking style is not very distinctive.
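The position-wise statistics described above reduce to a simple group-by mean over sentence-level features. The sketch below assumes a hypothetical per-sentence feature table; the numeric values are made up and only illustrate the shape of the computation.

```python
import numpy as np

# Mean sentence-level prosody (LF0, intensity, speech rate) grouped by
# sentence position. `sentences` is a hypothetical list of dicts, one
# per corpus sentence, with a position label and precomputed features.
sentences = [
    {"position": "first",  "lf0": 5.35, "intensity": 72.0, "rate": 4.1},
    {"position": "middle", "lf0": 5.30, "intensity": 71.2, "rate": 4.4},
    {"position": "last",   "lf0": 5.24, "intensity": 70.5, "rate": 4.0},
    # ... one entry per sentence in the corpus
]

for pos in ("first", "middle", "last"):
    group = [s for s in sentences if s["position"] == pos]
    means = {k: float(np.mean([s[k] for s in group]))
             for k in ("lf0", "intensity", "rate")}
    print(pos, means)
```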
Inter-paragraph prosody patterns analysis. We also analyze the prosody feature variation across paragraph boundaries (break) and within a paragraph (no break), as shown in Table II. The values for break are the mean discrepancies of the prosody features between the last sentence in the current paragraph and the first sentence in the next paragraph, and the values for no break are the mean discrepancies of the prosody features between the current sentence and the succeeding sentence within a paragraph. The positive values for no break suggest that there is a declination of prosody attributes within a paragraph, while the negative values for break indicate that prosody reset appears at paragraph boundaries, meaning that the LF0, intensity and speech rate increase at the beginning of a new paragraph. In this work, we focus on individual paragraph speech synthesis, so the inter-paragraph prosody patterns are not considered.

B. Experimental Setup
The relatively small size of the training data used in this study is a challenge for training a highly stable, end-to-end TTS model. Thus, we first pre-train the backbone of the modified Tacotron2 using a standard TTS corpus containing 17.83 hours of reading-style Chinese female speech data, and then fine-tune the model for ParaTTS using the audiobook corpus.

In the training stage, sentence phoneme sequences are fed into the model as input. Mel spectrograms are extracted from the recordings, which are down-sampled from 44.1 kHz to 16 kHz, and used as the target output. To obtain the phone-level prosody features in the paragraph prosody extractor, we first extract frame-level LF0 and intensity values using the Python library Parselmouth. Meanwhile, we conduct forced alignment using the Hidden Markov Model Toolkit (HTK) [51] to get the phone durations, and then calculate the phone-level LF0 and energy by averaging the frame-level values over the frames of each phone. We train the models on a single GPU with a batch size of 16, up to 400k steps for the pre-trained model and 200k steps for the fine-tuned models, using the Adam optimizer [52] with beta_1 = 0.9 and beta_2 = 0.999. The hyper-parameters of the model used in our experiments are described in Table III. At the inference stage, the paragraph phoneme sequence is fed into the model as input. The output mel spectrogram is transformed into a waveform using the multi-band WaveRNN vocoder [6], which is pre-trained to 500k steps using the standard corpus and adapted for 200k steps using the audiobook corpus.

In our evaluations, we compare the following five models for paragraph-level speech synthesis.
• Baseline: the modified Tacotron2 as described in Section III-A.
• LingTTS: the Baseline with the linguistics-aware network.
• ProsTTS: the Baseline with the prosody-aware network.
• ComTTS: the Baseline with both the linguistics-aware and prosody-aware networks.
• ParaTTS: ComTTS with the additional sentence-position network, i.e., the full proposed model.

C. Objective Evaluation
Naturalness. We calculate mel-cepstrum distortion (MCD) to measure naturalness objectively. Before computing the MCD, we use dynamic time warping to align the predicted and target mel spectrogram sequences, because the lengths of the two sequences can differ. The MCD results are shown in the second column of Table IV. The models with the linguistics-aware network (LingTTS), the prosody-aware network (ProsTTS) or both (ComTTS) obtain similar MCDs and outperform the Baseline. With the sentence-position network, the model (ParaTTS) decreases the MCD further and achieves the lowest MCD.
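The MCD computation with DTW alignment can be sketched as follows. The plain O(T1*T2) DTW and the handling of the cepstral order are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def dtw_path(a: np.ndarray, b: np.ndarray):
    """Plain DTW over Euclidean frame distances, returning the aligned
    index pairs. A dedicated library could be used instead."""
    T1, T2 = len(a), len(b)
    cost = np.full((T1 + 1, T2 + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    path, i, j = [], T1, T2
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)], key=lambda p: cost[p])
    return path[::-1]

def mcd(ref_mcep: np.ndarray, syn_mcep: np.ndarray) -> float:
    """Mel-cepstrum distortion in dB over DTW-aligned frames; inputs are
    (frames, order) mel-cepstra with the 0th (energy) coefficient dropped."""
    pairs = dtw_path(ref_mcep, syn_mcep)
    diffs = np.array([ref_mcep[i] - syn_mcep[j] for i, j in pairs])
    return float((10.0 / np.log(10)) * np.sqrt(2.0) *
                 np.mean(np.sqrt(np.sum(diffs ** 2, axis=1))))
```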
Prosody. To measure the synthesized prosody in a paragraph, we calculate the Pearson correlation coefficient [53] on the prosody features at the syllable level, including LF0, intensity and duration. The LF0 and duration correlation results and the corresponding p-values are presented in Table IV. We do not list the intensity correlation because all models have a comparably high correlation value of around 0.90. Regarding the LF0 and duration correlations, all models achieve good correlations, and ParaTTS is the best. From the results, it is observed that even though ProsTTS is more directly related to prosody, it does not achieve better results than LingTTS. This may be because the prosody prediction depends heavily on the linguistic information. Consequently, ComTTS, which combines the linguistics-aware and prosody-aware networks, does not achieve additional benefits compared with LingTTS. After adding the sentence-position network, ParaTTS obtains improved performance, indicating the benefit of the sentence position information, which explicitly provides simple yet effective features of individual sentences in multi-sentence paragraph generation. We also generate results by feeding the ground-truth prosody features to ParaTTS, referred to as ParaTTS (GT prosody), to show the upper bound of the proposed model; these results are listed in Table IV as well. Obviously, ParaTTS (GT prosody) achieves the best results. Additionally, we observe that ParaTTS (GT prosody) has a similar LF0 correlation to ParaTTS but obtains a better MCD. The results indicate that the ground-truth prosody features benefit naturalness and that the prosody prediction in the proposed model is appropriate. To visualize the prosody prediction performance, we plot the pitch contours of a paragraph from the recording and from the synthesized results of the Baseline, LingTTS, ProsTTS, ComTTS and ParaTTS, as shown in Fig. 5. We observe that the pitch contour of ParaTTS is the closest to that of the recording compared with the other models.

Break. It has been shown that pause duration is highly correlated with discourse structure [54]. Pause duration, particularly between successive sentences, is used to introduce suspense and climax in storytelling, which can enhance the audience's attraction to the story and build anticipation [55]. We calculate the root mean square error (RMSE) of the pause duration between consecutive sentences in a paragraph to explore whether the models can learn the breaks across sentences. To be specific, the pause duration between two sentences is the difference between the end time of the last word in the current sentence and the start time of the first word in the next sentence. The pause duration RMSE results are shown in the last column of Table IV. The Baseline achieves the worst RMSE result. The other four models all achieve better results than the Baseline, with ComTTS slightly better than the others. We observe that ParaTTS can learn a more accurate pause duration than the Baseline. Fig. 6 shows the mel spectrograms of the recording and of the paragraph speech synthesized by the Baseline, LingTTS, ProsTTS, ComTTS and ParaTTS, in which the white boxes mark the pause duration between two consecutive sentences. We find that the pause duration in the Baseline is too long to be appropriate, which may cause perceived unnaturalness. The corresponding samples (numbered 1.3) can be found on our sound sample page. We conjecture that the multi-head attention mechanisms between the sentence and the multi-sentence paragraph in the linguistics-aware and prosody-aware networks can learn the cross-sentence context in training, including the pause duration.
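Both objective metrics reduce to a few lines. The sketch below assumes syllable-aligned feature vectors and forced-alignment pause durations as inputs; the example values are made up.

```python
import numpy as np
from scipy.stats import pearsonr

def prosody_correlation(ref: np.ndarray, syn: np.ndarray):
    """Pearson correlation (and p-value) between syllable-level prosody
    features, e.g. LF0 or duration, of reference and synthesized speech.
    Both inputs must already be aligned syllable-by-syllable."""
    return pearsonr(ref, syn)

def pause_rmse(ref_pauses, syn_pauses) -> float:
    """RMSE of inter-sentence pause durations within a paragraph. A pause
    is the gap between the end of the last word of one sentence and the
    start of the first word of the next, e.g. from forced alignment."""
    diff = np.asarray(ref_pauses) - np.asarray(syn_pauses)
    return float(np.sqrt(np.mean(diff ** 2)))

r, p = prosody_correlation(np.array([1.0, 2.0, 3.0]), np.array([1.1, 1.9, 3.2]))
print(r, p, pause_rmse([0.30, 0.45], [0.35, 0.60]))
```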
D. Subjective Evaluation
A group of 20 listening subjects, all native Chinese speakers with normal hearing, participated in the subjective tests and evaluated each paragraph as a whole rather than as isolated sentences; similar testing was conducted in [33].

Preference test. The preference test between two models asks which model is preferable based on the overall perceived impression. We first perform preference tests among the Baseline, ComTTS and ParaTTS to compare the effectiveness of the linguistics-aware, prosody-aware and sentence-position networks. The preference test results are presented in Fig. 7. ComTTS gets 30% more preference than the Baseline, indicating that the linguistic and prosodic information can indeed improve paragraph-based speech synthesis. With the additional sentence position information, ParaTTS gets an extra preference (4%) over ComTTS. This small preference gain is consistent with the slightly better MCD and prosody correlations. As described in the intra-paragraph prosody pattern analysis, the prosody variation range across the three sentence positions is not large, hence the relatively smaller improvement. In the following subjective tests, we compare the Baseline, ParaTTS and the recordings with mean opinion score (MOS) scores in in-domain and out-of-domain tests.

Detailed MOS test. To further evaluate the perceived quality of the synthesized paragraph speech, we conduct 5-point mean opinion score (MOS) tests in four different dimensions: naturalness, pleasantness, pause and listening comfort [56,57]. The detailed MOS test scores are shown in Fig. 8. We observe that the MOS score differences are not large, as was similarly observed in [31]. We conjecture that this may be because long-form paragraph samples are too long for subjects to remember all the differences for a clearly distinguishable score. The results can still shed some light on the power of ParaTTS, which obtains scores consistently higher than the Baseline on all four testing fronts, where the better naturalness and pleasantness are also reflected in the lower MCDs and higher LF0 correlations. The pause (break) MOS score of ParaTTS is almost as high as that of the recording, which is also confirmed by the lower RMSE in pause duration. Listening to the multi-sentence paragraph audios synthesized by ParaTTS does not increase the listening fatigue of the listeners and earns an on-par listening comfort score with the recordings, possibly helped by its naturalness and pleasantness being close to the recordings.

E. Out-of-domain Test
To examine the proposed model's generalization ability, we extend the test to out-of-domain long paragraphs and extra-long paragraphs. The information on the in-domain and out-of-domain test sets is listed in Table V. In addition to the 38 in-domain short paragraphs, there are 12 long paragraphs and 6 extra-long paragraphs. On average, each paragraph contains 5, 23 and 51 utterances, respectively. The overall impression MOS results are shown in Fig. 9. It is observed that the MOS scores of the recordings decrease as the length of the paragraph increases.
In other words, even for the original recorded speech, long paragraphs tend to get lower MOS scores or cause more listening fatigue than shorter paragraphs. In the extra-long paragraphs testing set, there are still occasional skipping issues even though we adopt the robust GMMv2b attention mechanism, indicating that long-sequence modeling remains challenging for the attention mechanism. ParaTTS yields higher scores than the Baseline, not only for the in-domain short paragraphs but also for the out-of-domain, long and extra-long paragraphs. In inference, the multi-head attention in the linguistics-aware network plays the role of a self-attention mechanism. The encoders in both the TTS backbone and the linguistics-aware network take the paragraph phoneme sequence as input to exploit the contextual embedding vector for rendering more natural speech. In this way, the advantage of long-range dependency utilized by the self-attention mechanism generalizes the model for synthesizing longer paragraphs.

V. CONCLUSION

In this research, we propose a new, paragraph-based, end-to-end TTS model to model linguistic and prosodic information embedded in paragraph text with the corresponding acoustic data. We design both linguistics-aware and prosody-aware networks to learn the information via a paragraph encoder and its multi-head attention mechanism. Additionally, a sentence-position network is used to exploit the inter-sentence information in the paragraph. Trained on a storytelling audiobook corpus (4.08 hours) recorded by a female Mandarin speaker, experimental results show that the proposed paragraph-based model can produce better TTS speech than the conventional sentence-based TTS baseline system, both objectively and subjectively. The new model can learn the cross-sentence information well, e.g., the break durations between adjacent sentences, and generalize the learned information to longer or much longer paragraphs than those used in the training corpus.
MtPT5 phosphate transporter is involved in leaf growth and phosphate accumulation of Medicago truncatula

Phosphorus (P) is an indispensable mineral nutrient for plant growth and agricultural production. Plants acquire and redistribute inorganic phosphate (Pi) via Pi transporters (PHT1s/PTs). However, apart from MtPT4, the functions of the M. truncatula (Medicago truncatula) PHT1s remain unclear. In this study, we evaluated the function of the PHT1 family transporter MtPT5 in M. truncatula. MtPT5 was closely related to AtPHT1;1 in Arabidopsis (Arabidopsis thaliana) and GmPT7 in soybean (Glycine max). MtPT5 was highly expressed in leaves in addition to roots and nodules. Ectopic expression of MtPT5 complemented the Pi-uptake deficiency of the Arabidopsis pht1;1Δ4Δ double mutant, demonstrating the Pi-transport activity of MtPT5 in plants. When overexpressing MtPT5 in M. truncatula, the transgenic plants showed larger leaves, accompanied by higher biomass and Pi enrichment compared with wild type. All these data demonstrate that MtPT5 is important for leaf growth and Pi accumulation of M. truncatula and provide a target for molecular breeding to improve forage productivity.

Introduction

Phosphorus (P) is an essential mineral nutrient for plant growth. It plays various biological functions and is a major determinant of crop production (Raghothama, 1999). Inorganic phosphate (Pi) is the main form of P that can be absorbed by plant roots (Chiou and Lin, 2011; Vincent et al., 2012; López-Arredondo et al., 2014). The total P level in soil is high, but the soluble Pi is always limited due to its low mobility, as well as precipitation and fixation (Marschner and Rimmington, 1988; Yan et al., 2021). It has been reported that about 70% of cultivated land in the world is deficient in plant-available Pi, making P one of the limiting factors for cultivated plants (Smith and Schindler, 2009; Péret et al., 2011; López-Arredondo et al., 2013). To maintain crop yield, the usage of P fertilizer is increased annually (Dobre et al., 2014; Heuer et al., 2017). However, excessive fertilizer is not only a waste, but also leads to environmental issues (Zhang et al., 2013; Zak et al., 2018; Che et al., 2020). Plants absorb and translocate Pi via Pi transporters (PHT1s/PTs; Versaw and Garcia, 2017; Dai et al., 2022). Hence, PHT1 genes are potential targets for improving plant Pi efficiency and benefiting yields (Veneklaas et al., 2012). Most of the PHT1 genes are root-specific, while some are highly expressed in the aerial part or nodules and involved in Pi redistribution (Chen et al., 2019; Wang et al., 2020). The first identified PHT1 gene was Pho84, cloned from Saccharomyces cerevisiae (Bun-ya et al., 1991). Since then, numerous PHT1s have been identified in plants including Arabidopsis (Arabidopsis thaliana; Muchhal et al., 1996; Shin et al., 2004), M. truncatula (Medicago truncatula; Liu et al., 2008), rice (Oryza sativa L.; Liu et al., 2011), maize (Zea mays L.; Wang et al., 2020) and soybean (Glycine max; Chen et al., 2019). Nine PHT1 members were identified in Arabidopsis (Mudge et al., 2002). Among them, AtPHT1;1 and AtPHT1;4 play predominant roles in Pi uptake (Shin et al., 2004). GmPT7 was reported to be responsible for Pi uptake from soil into nodules and distribution to the fixation zones. Overexpression of GmPT7 promotes plant growth and soybean yield (Chen et al., 2019).
When overexpressing OsPT1 in rice, transgenic plants accumulated more Pi in shoots and displayed increased tiller numbers compared with wild-type plants (Seo et al., 2008). Thus, investigation of the functions of PHT1s provides an efficient route for improving plant nutrient efficiency. Currently, 11 PHT1s have been identified in M. truncatula (Liu et al., 2008). Yeast kinetics assays showed that MtPT1, MtPT2, MtPT3, and MtPT4 are low-affinity Pi transporters. MtPT1, MtPT2, MtPT3, and MtPT5 share 84% sequence identity, but only MtPT5 displayed high affinity for Pi (Liu et al., 2008). MtPT4 is highly expressed in mycorrhizal roots and is responsible for Pi acquisition from arbuscules (Harrison et al., 2002). It is also expressed in the plant root tip in the absence of the arbuscular mycorrhizal (AM) fungus and modulates root branching, whereas it does not significantly affect Pi accumulation in plants without AM symbiosis (Cao et al., 2020). Recently, MtPT6 was reported to be involved in Pi uptake by heterologous expression of MtPT6 in the Arabidopsis pht1;1 or pht1;4 mutant. However, the role of MtPT6 in M. truncatula is unknown (Volpe et al., 2016). Information on the functions of PHT1s in Medicago is still limited. In this study, we identified the role of MtPT5 in leaf growth and Pi accumulation of M. truncatula. MtPT5 is highly expressed in roots, leaves, and nodules and is low-Pi inducible. MtPT5 can rescue the Pi-uptake deficiency of the Arabidopsis pht1;1Δ4Δ double mutant, indicating the Pi transport activity of MtPT5 in plants. When overexpressing MtPT5 in M. truncatula, the transgenic plants displayed larger leaf size and higher Pi content. These data demonstrate that MtPT5 plays important roles in M. truncatula vegetative growth and Pi nutrition.

Plant materials and growth conditions

Medicago truncatula ecotype R108, Arabidopsis thaliana ecotype Wassilewskija (Ws) and the pht1;1Δ4Δ mutant were used in this study. For germination of M. truncatula, seeds were placed on wet filter paper at 4°C for 2 days. Then, the imbibed seeds were transferred to a chamber with illumination of 120 μmol m−2 s−1, temperature 24°C, and a 16 h light/8 h dark photoperiod for 4 days. The seedlings were grown in 1/2 Hoagland solution or in soil for experiments. For nodulation, four-day-old seedlings were incubated with Sm1021 resuspended in 1/2 Hoagland, then transferred to soil and injected with Sm1021 every 2 days for 1 month. Different organs were harvested separately for RNA extraction. For Arabidopsis germination, the seeds were kept at 4°C for 2 days for imbibition, then transferred to medium with 200 μM arsenate or 1/2 MS under normal conditions (120 μmol m−2 s−1, 22°C, 16 h light/8 h dark).

Measurement of Pi content

For Arabidopsis, 20-day-old seedlings grown on 1/2 MS medium were harvested for Pi content measurement. For M. truncatula, the top leaflets of 3-month-old plants grown in soil and 20-day-old seedlings grown in 1/2 Hoagland were collected. The measurement was performed as described in the previous report (Ames, 1996). Briefly, different samples were collected and frozen in liquid nitrogen immediately. Pi was extracted in a buffer containing acetic acid at 42°C for 30 min. Pi concentration was measured at 820 nm wavelength using a universal microplate spectrophotometer (BioTek Power Wave XS2). The Pi content was calculated based on the concentration and fresh weight of the different samples.
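The Pi calculation from absorbance can be illustrated with a small sketch. The standard-curve values below are invented for illustration, and the linear calibration against a phosphate standard series is our assumption rather than a detail taken from the paper.

```python
import numpy as np

# Assumed phosphate standard series (umol/ml) and illustrative A820
# readings; real values would come from each assay run.
std_conc = np.array([0.0, 0.1, 0.2, 0.4, 0.8])
std_a820 = np.array([0.01, 0.09, 0.17, 0.35, 0.70])
slope, intercept = np.polyfit(std_conc, std_a820, 1)  # fit standard curve

def pi_content(a820_sample, extract_volume_ml, fresh_weight_g):
    """Pi content normalised to sample fresh weight (umol per g FW)."""
    conc = (a820_sample - intercept) / slope  # concentration from the curve
    return conc * extract_volume_ml / fresh_weight_g
```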
Plasmid construction and plant transformation

The full-length coding sequence (CDS) of MtPT5 was cloned into the pTOPO-TA Simple vector (Science Tool) for sequencing. The sequence-verified MtPT5 CDS was then inserted into the BamH I-linearized pCAMBIA1302 vector to generate the 35S:MtPT5 plasmid via homologous recombination. The recombinant vector was used for plant transformation. For Arabidopsis (pht1;1Δ4Δ mutant), the floral dip method was used as described (Clough and Bent, 1998) using Agrobacterium tumefaciens strain GV3101. The transformants were obtained on MS medium containing 50 mg/l hygromycin. For M. truncatula, the construct was introduced into R108 leaves via Agrobacterium EHA105-mediated transformation as described previously (Cosson et al., 2006). The transgenic M. truncatula plants were identified by PCR using vector-specific primers. T2 and T3 transgenic lines were used for Arabidopsis and M. truncatula, respectively, in this study.

qRT-PCR and RT-PCR analysis

For quantification of gene expression, total RNA was isolated using the Eastep Super Total RNA Extraction Kit (Promega) and quantified by NanoDrop. 1 μg RNA was used for reverse transcription using the PrimeScript II 1st Strand cDNA Synthesis Kit (Takara). qRT-PCR was performed using 2 × EasyTaq® PCR SuperMix (TransGen Biotech) on a CFX96 system (Bio-Rad). MtActin11 was used to calculate the relative quantitative results for M. truncatula. The transcripts of MtPT5 in R108, the pht1;1Δ4Δ mutant and pht1;1Δ4Δ/MtPT5 were tested by RT-PCR using cDNAs as templates. EF1a was amplified as a quantitative control.

Sequence alignment and construction of phylogenetic tree

PHT1 amino acid sequences were obtained from NCBI and EnsemblPlants. Amino acid sequences were first aligned using ClustalX. The neighbor-joining tree was constructed in MEGA5 using the bootstrap method (900 replicates) with the Poisson model.

Statistical analysis

Significant differences were determined by one-way ANOVA with Tukey test or Student's t-test using SigmaPlot 12.5 software.

Phylogenetic analysis of PHT1s from different species

It has been reported that there are 11 PHT1 transporters in M. truncatula (Cao et al., 2020). We identified another two members (Mt4g083960 and Mt5g068140) by searching EnsemblPlants. All members shared the common secondary structure with 12 predicted transmembrane domains (TM) separated by a large hydrophilic loop between TM6 and TM7 (Supplementary Figure S1). The signature GGDYPLSATIxSE (Karandashov and Bucher, 2005; Loth-Pereda et al., 2011) was identified and conserved among all MtPHT1s, except two of them. The signature of Mt1g069930 was modified with a Thr (T) replaced by a Val (V), and Mt1g074940 was modified with an Ala (A) replaced by a Ser (S; Supplementary Figure S1). The amino acid sequences of PHT1 proteins from M. truncatula, Arabidopsis, soybean, maize and rice were used for constructing the neighbor-joining tree (Figure 1). The analysis showed that Mt1g074930 (MtPT5) clustered phylogenetically with AtPHT1;1 and GmPT7, showing 80% and 86% amino acid sequence identities, respectively.

Expression pattern of MtPT5 in Medicago truncatula

MtPT1, MtPT2, and MtPT3 are paralogues of MtPT5 in M. truncatula (Liu et al., 2008). The coding sequences of MtPT1, MtPT2, and MtPT3 share 97% identity. A single pair of primers was used to test the expression of these three genes.
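The alignment and tree here were produced with ClustalX and MEGA5, which are GUI tools. As a scriptable stand-in, a neighbor-joining tree can be built from a pre-aligned protein FASTA with Biopython, as sketched below; the file name is hypothetical, the BLOSUM62 distance is a substitute for MEGA5's Poisson model, and the 900-replicate bootstrap step is omitted for brevity.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Pre-aligned PHT1 amino acid sequences (e.g. exported from ClustalX).
alignment = AlignIO.read("pht1_aligned.fasta", "fasta")

calculator = DistanceCalculator("blosum62")           # pairwise protein distances
distance_matrix = calculator.get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)  # neighbor-joining tree

Phylo.draw_ascii(tree)  # quick text rendering of the tree
```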
FIGURE 1 Phylogenetic analysis of PHT1s from different species. Phylogenetic tree of PHT1s from Medicago truncatula, Arabidopsis, soybean, maize, and rice. The tree was generated as described in materials and methods. Mt1g074930 (MtPT5) is labeled with a red spot. AtPHT1;1 and GmPT7 are labeled with blue spots. The bar shows 0.05 amino acid substitutions per site.

Quantitative RT-PCR (qRT-PCR) analysis showed that MtPT1/2/3 was predominantly expressed in roots and nodules and nearly undetectable in shoots (Figure 2A). The transcription abundance of MtPT5 was around four- to six-fold higher than that of MtPT1/2/3 in the underground tissues. In addition, MtPT5 was also highly expressed in shoots (Figure 2A). The expression pattern of MtPT5 in the aerial part was further tested. qRT-PCR results showed that MtPT5 was mainly expressed in leaves (Supplementary Figure S2). Pi starvation analysis showed that MtPT5 was induced under Pi-deficient conditions (Figure 2B), in accordance with the previous report (Liu et al., 2008). AtPHT1;1 and AtPHT1;4 play predominant roles in Pi uptake in Arabidopsis (Shin et al., 2004). To examine the Pi uptake activity of MtPT5 in plants, the coding sequence of MtPT5 driven by the 35S promoter (35S:MtPT5) was introduced into pht1;1Δ4Δ. Two independent transgenic lines, 35S:MtPT5/pht1;1Δ4Δ-1 and 35S:MtPT5/pht1;1Δ4Δ-2, were used in this study. RT-PCR analysis showed that the MtPT5 transcripts were present in the two transgenic lines and not detectable in wild type (Ws) and the pht1;1Δ4Δ mutant (Figure 3A). The fresh weight (FW) measurement showed that loss of PHT1;1 and PHT1;4 led to about a 27% reduction in pht1;1Δ4Δ mutant biomass compared with wild type, similar to the previous report (Shin et al., 2004). Meanwhile, the biomasses of the 35S:MtPT5/pht1;1Δ4Δ transgenic lines were rescued to the level of wild type (Supplementary Figure S3). This indicates that MtPT5 can rescue the morphological defects of the pht1;1Δ4Δ mutant. Next, we tested the Pi contents in the different Arabidopsis genotypes grown under Pi-sufficient conditions (1/2 MS). The Pi content in the pht1;1Δ4Δ mutant was significantly reduced compared with wild type, while the two overexpression lines exhibited Pi contents similar to wild type (Figure 3B). These data suggest that MtPT5 can complement the Pi-uptake deficiency of the pht1;1Δ4Δ mutant. Arsenate is a toxic metalloid structurally analogous to Pi and is transported into root cells mainly via PHT1 transporters (Catarecha et al., 2007; Castrillo et al., 2013; Wang et al., 2014). Phenotypes of wild type, the pht1;1Δ4Δ mutant and the 35S:MtPT5/pht1;1Δ4Δ transgenic plants were compared on medium with or without arsenate. When grown on medium with 200 μM arsenate, the pht1;1Δ4Δ mutant showed an arsenate-tolerant phenotype as previously reported (Shin et al., 2004), while the wild-type and 35S:MtPT5/pht1;1Δ4Δ seedlings were hypersensitive to arsenate, with dramatically shorter roots and smaller shoots (Figure 3C). Taken together, these data indicate that MtPT5 has Pi transport capacity and positively modulates Pi uptake in plants.

FIGURE 2 Expression profiles of MtPTs in M. truncatula. (A) qRT-PCR analysis of MtPT1/2/3 and MtPT5 in different tissues of M. truncatula. Four-day-old wild-type seedlings (R108) were incubated with Sm1021 resuspended in 1/2 Hoagland and then transferred to soil and injected with Sm1021 every 2 days for 1 month. Shoots, roots, and nodules were harvested, respectively, for RNA extraction. Data represent mean ± SE (n = 3). (B) qRT-PCR analysis of MtPT5 in wild-type seedlings (R108) during phosphate starvation. Four-day-old M. truncatula seedlings were transferred to hydroponic solution with Pi (+P) or without Pi (−P) for 5 days. The whole seedlings were used for RNA extraction. Data represent mean ± SE (n = 3). ** indicates significant difference at p < 0.01 (Student's t-test).
MtPT5 promotes leaf growth of Medicago truncatula

Given that MtPT5 was induced by low-Pi stress, two independent MtPT5-overexpressing lines, 35S:MtPT5-1 and 35S:MtPT5-2, were generated to examine the physiological role of MtPT5 in M. truncatula. qRT-PCR analysis showed that both MtPT5-overexpressing lines had significantly increased MtPT5 transcripts compared with wild-type M. truncatula (Figure 4A). We performed phenotypic tests on wild-type and MtPT5-overexpressing plants. In both hydroponic culture and soil pots, the MtPT5-overexpressing lines displayed larger leaves compared with wild type (Figures 4B-D). Quantification of leaf area confirmed this phenotype (Figure 4E). Meanwhile, leaf biomasses of the MtPT5-overexpressing plants were significantly higher than that of wild type (Figure 4F). These morphological traits indicate that overexpression of MtPT5 promotes leaf growth in M. truncatula.

Overexpression of MtPT5 enhances Pi accumulation of Medicago truncatula

To explore the function of MtPT5 in M. truncatula Pi nutrition, we measured the Pi content in leaves of wild type and the MtPT5-overexpressing lines. The top leaflets of plants grown in soil for 3 months were collected for Pi extraction. The measurement showed that, relative to wild type, the Pi content in MtPT5-overexpressing plants increased dramatically, especially in the 35S:MtPT5-2 line (Figure 5A). The Pi contents of whole plants were similarly increased (Figure 5B).

Discussion

Phosphorus (P) is a major determinant of agricultural production. Plants absorb Pi via PHT1 transporters (Harrison et al., 2002), while some of them participate in Pi translocation and remobilization among different organs and tissues (Chang et al., 2019; Wang et al., 2020). This provides opportunities for improving crop performance by studying the functions of PHT1s (Chen and Liao, 2017; Han et al., 2022). Currently, 11 PHT1s have been found in M. truncatula. MtPT4 is responsible for Pi acquisition from mycorrhiza and plant root branching (Harrison et al., 2002; Volpe et al., 2016). MtPT6 was reported to promote Pi acquisition in Arabidopsis (Cao et al., 2020). Except for MtPT4, the functions of the other PHT1s in M. truncatula are still unclear. In this study, we uncovered that the Pi transporter MtPT5 plays an important role in leaf growth and Pi accumulation in M. truncatula.

Analysis of different PHT1s

We found two more PHT1s (Mt4g083960 and Mt5g068140) in M. truncatula by searching EnsemblPlants (http://plants.ensembl.org/Medicago_truncatula/Info/Index). Alignment analysis showed that the 13 PHT1s all contained 12 predicted transmembrane domains, in accordance with the previous report (Pedersen et al., 2013). To choose one member of the PHT1 family for further study in M. truncatula, a phylogenetic tree was first constructed using PHT1s from M. truncatula, Arabidopsis, soybean, maize, and rice. The analysis showed that MtPT5 was closely related to AtPHT1;1 and GmPT7. AtPHT1;1 is an essential Pi transporter in Arabidopsis. Under Pi-sufficient conditions, the mutation of PHT1;1 leads to about a 50% reduction of Pi uptake compared with wild-type plants.
The Pi uptake of the pht1;1Δ4Δ mutant is reduced by about 75% compared with wild type (Shin et al., 2004). GmPT7 is a nodule-located Pi transporter responsible for direct Pi acquisition from soil and Pi translocation from nodules to the plant. Overexpression of GmPT7 improves shoot P content, nitrogen (N) content and soybean yield (Chen et al., 2019). The phylogenetic analysis indicates that MtPT5 probably has essential roles in M. truncatula Pi nutrition. The amino acid sequence of MtPT5 shared 84% identity with MtPT1, MtPT2 and MtPT3, whereas MtPT5, unlike these paralogues, displayed high affinity for Pi (Liu et al., 2008). This indicates the multiple functions of different PHT1s in Pi utilization even though PHT1s share high amino acid identities.

Function of MtPT5

MtPT5 was reported to be a membrane-located high-affinity Pi transporter (Liu et al., 2008). To examine the Pi uptake activity of MtPT5 in plants, the coding sequence of MtPT5 driven by the 35S promoter (35S:MtPT5) was constructed and introduced into the Arabidopsis double mutant pht1;1Δ4Δ. Phenotypic analysis showed that MtPT5 rescued the growth defects of the pht1;1Δ4Δ mutant (Shin et al., 2004). The Pi contents in the 35S:MtPT5/pht1;1Δ4Δ transgenic lines were rescued to the level of wild type. Taken together, these data demonstrate that MtPT5 has Pi-transporter activity in plants. To identify the function of MtPT5 in M. truncatula, two independent MtPT5-overexpressing lines (35S:MtPT5-1 and 35S:MtPT5-2) were generated with significantly higher MtPT5 transcript levels. The MtPT5-overexpressing lines displayed larger leaves compared with wild type, and the leaf biomasses of the transgenic plants were increased dramatically. The Pi contents of top leaflets and whole plants in the MtPT5-overexpressing lines were much higher than those in wild-type plants. These data demonstrate that overexpression of MtPT5 enhances M. truncatula leaf growth and Pi accumulation.

Conclusion

Expression analysis showed that MtPT5 was highly accumulated in shoots, roots and nodules. Previous reports demonstrated that ZmPT7, which is expressed in both roots and leaves, participates in Pi acquisition and redistribution in maize (Wang et al., 2020). GmPT7, located in nodules, is responsible for direct Pi uptake from soil and translocation to the fixation zones (Chen et al., 2019). The expression profile of MtPT5 suggests that it probably has multiple functions in Pi nutrition. In this study, we demonstrate that MtPT5 plays a vital role in Pi accumulation, and overexpression of MtPT5 promotes the leaf growth of M. truncatula dramatically. Leaf size is a vital trait for improving the yield and quality of forage, such as the legume alfalfa (Medicago sativa L.) (Warman et al., 2011; Zhang et al., 2019). It was reported that about 70% of alfalfa protein is stored in leaves, while the cellulose content in leaves is only 1/3 of that in stems (Yang et al., 2016). Hence, our study provides a lead for elevating alfalfa Pi efficiency and for genetic breeding.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.
Statistical analysis plan for a cluster randomised trial in Madhya Pradesh, India: support to rural India's public education system and impact on numeracy and literacy scores (STRIPES2)

Background

India has made steady progress in improving rates of primary school enrolment but levels of learning achievement remain low. The Support To Rural India's Public Education System (STRIPES) trial provided evidence that an after-school para-teacher intervention improved numeracy and literacy levels in Telangana, India. The STRIPES2 trial investigates whether such an intervention will have a similar effect on the literacy and numeracy of primary school age children in the Satna District of Madhya Pradesh, India.

Methods/design

The STRIPES2 trial forms one part of a cluster-randomised controlled trial with villages (clusters) randomised to receive either a health (CHAMPION2) or education (STRIPES2) intervention. Building on the design of the earlier CHAMPION/STRIPES trial, villages receiving the health intervention are controls for the education intervention and vice versa. The primary outcome is a combined literacy and numeracy score. Secondary outcomes include separate scores for literacy and numeracy; caregivers' engagement with the child's learning; expenditure on education; enrolment in school; caregiver's report of school attendance; and the cost effectiveness of the intervention. Over 7000 primary school age children have been recruited and randomised in STRIPES2.

Discussion

This update to the published trial protocol gives a detailed plan for the statistical analysis of the STRIPES2 trial.

Trial registration

Registry of India: CTRI/2019/05/019296. Registered on 23 May 2019. http://www.ctri.nic.in/Clinicaltrials/pdf_generate.php?trialid=31198&EncHid=&modid=&compid=%27,%2731198det%27

Background and rationale

India has made steady progress in improving rates of primary school enrolment. In rural areas, about 97% of children between 6 and 14 years of age are now in school [1]. The levels of learning achievement, however, remain low. The 2018 Annual Status of Education Report (ASER) survey showed that proficiency in reading and numeracy is worryingly low and Indian children may spend several years in school without learning even the basic skills in literacy and numeracy [1]. The STRIPES trial and the subsequent SCORE trial intervention demonstrated important results in improving numeracy and language scores in Telangana, India [2] and rural Gambia [3]. The STRIPES2 trial [4] investigates whether such an intervention will have a similar effect on the literacy and numeracy of primary school age children in Satna district of Madhya Pradesh, India.

Objectives

The primary objective is to assess whether the success of the STRIPES and SCORE trials in providing an after-school para-teacher intervention to raise learning levels among primary school students in rural India and rural Gambia can be replicated in Satna district of Madhya Pradesh, India. The primary outcome is a combined literacy and numeracy score. Secondary outcomes include separate scores for literacy and numeracy; caregivers' engagement with the child's learning; expenditure on education; enrolment in school; caregiver's report of school attendance; and the cost effectiveness of the intervention.

Trial design

This is a cluster-randomised controlled trial where the recruited clusters are villages in the Satna district of Madhya Pradesh, India. The villages included satisfied the following criteria:
1. Were considered rural, with fewer than 2500 population and with more than 120 children under the age of 6 years;
2. Were accessible by road;
3. Were not within a 5 km radius of the Community Health Centres (as such villages are already well served by the local health services);
4. Had a minimum of 3 km between village centres, such buffer zones being included to minimize contamination.

From a baseline survey conducted between July 2017 and January 2018, we enrolled children born between 16 June 2010 and 15 June 2013 whose caregivers were planning to enrol them in the first grade, for the first time, in the 2018-2019 school year in eligible villages. Before randomization of villages, from April to June 2019, we conducted a catch-up enumeration in all the selected villages to enrol eligible children who were missed during the baseline enumeration (this included some children who were by this time attending school). Villages were allocated in a 1:1 ratio either to the intervention (a programme provided by Pratham intended to provide remedial out-of-school lessons, focusing on literacy and numeracy, 6 days a week, 2 h a day for 17 months) or to control. Planned daily classes were temporarily stopped in compliance with government measures to reduce COVID-19 transmission from April to December 2020 and May to June 2021. The intervention was restarted with modifications according to the local COVID-19 guidelines, such as daily small-group classes and weekly classes (for children who could not attend daily classes). The intervention period was also extended by 12 months, ending in June 2022. Between 24th July and 19th September 2022, participating children in both trial arms were tested with the Early Grade Reading Assessment (EGRA) [5] and the Early Grade Mathematics Assessment (EGMA) [6] tests adapted to the local language and context. After the testing, all the children were given a small set of school materials as recompense for their time.

Randomisation

Randomisation of clusters was performed by the trial statistician based in London in June 2019 using a random number generator, with stratification by village size and distance to the nearest Community Health Centre or Civil Hospital.

Sample size

The relevant parts of the original sample size calculation as published in the protocol were as follows. Originally it had been the intention to randomise 300 villages, because this gave over 90% statistical power to detect a difference of 0.25 standard deviations in mean standardised test scores in STRIPES2. However, incorporating the buffer zones described in the village selection procedure above meant that only 204 villages could be selected. These 204 villages have a mean population of 1487 (minimum 558, maximum 2490) and a standard deviation of 505 (equating to a coefficient of variation of 0.34). Estimating the number of children in each school year from the number under the age of six years old (divided by 6), the mean number of children in each school year is 38.3 (minimum 20, maximum 71) with a standard deviation of 13.3 (a coefficient of variation of 0.35). Assuming that 25% of the children will not satisfy the eligibility criteria, this gives an estimated mean number of eligible children per village of 28.7, with a minimum of 15. We estimated that the 204 villages would include an average of 28.7 eligible students. In the STRIPES trial the estimated effect was a 0.75 SD increase in mean score; however, effects of smaller magnitude than this would still be important to detect. Conservatively assuming that 60% of the eligible children will take the test at the end of the trial, and an intra-cluster correlation coefficient of 0.23 (as seen in the STRIPES trial [2]), a trial with 194 villages (i.e. assuming that 5% of the 204 villages will not take part) will give 88% power to detect a difference of 0.25 SD in mean standardised scores between intervention and control villages using a conventional 2-sided statistical significance level of 5% (assuming a coefficient of variation in numbers taking the test by village of 0.35). If the treatment effect is of the order of that seen in the STRIPES trial, then there will be reasonable statistical power to explore interactions by ethnicity, gender, wealth and geographic location. As described above, in the sample size calculation we anticipated that 194 of the 204 villages would be randomised. In fact, 196 were randomised: 6 villages were removed because they were found to be too close to urban areas to be considered rural, and 2 were removed because insufficient eligible children were found. Over 7000 children were enumerated in the randomised villages, with over 6000 children taking the test at the end of follow-up.
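As a check, the quoted 88% power can be reproduced to a good approximation with the standard design-effect formula for cluster trials with unequal cluster sizes, 1 + ((cv^2 + 1) m - 1) ICC. The short sketch below is a back-of-envelope recalculation, not the trial's actual sample-size software, and hard-codes the figures given above.

```python
from math import sqrt
from statistics import NormalDist

d = 0.25           # target difference, in SD units
icc = 0.23         # intra-cluster correlation coefficient
cv = 0.35          # coefficient of variation of numbers tested per village
k = 194 // 2       # villages per arm
m = 28.7 * 0.60    # mean eligible children per village x 60% tested

deff = 1 + ((cv**2 + 1) * m - 1) * icc  # design effect for unequal clusters
n_eff = k * m / deff                    # effective sample size per arm
z = d / sqrt(2 / n_eff) - 1.96          # two-sided 5% significance level
print(f"power = {NormalDist().cdf(z):.2f}")  # ~0.89, close to the quoted 88%
```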
Framework

The trial will use a superiority hypothesis-testing framework.

Statistical interim analyses and stopping guidance

As no potential harms are anticipated from this intervention, there is no Data Monitoring Committee and there are no interim analyses or stopping rules.

Timing of final analysis

May 2023 to August 2023.

Timing of outcome assessments

The primary outcome (the endline composite mathematics and language score) was assessed through endline tests (EGRA and EGMA) carried out between 24th July and 19th September 2022. Additional data collection was carried out as follows:
• Between January and February 2022, a midline test was carried out with the children to assess basic reading and mathematics levels using an ASER-like exam.
• Between February and April 2022, a midline survey was carried out with the caregivers to record enrolment, reported attendance and educational support during the period that schools were closed.
• In November and December 2022, a final survey was carried out to record changes in school enrolment and reported attendance, and caregivers' support to the child's education.
• Throughout the trial, data on attendance in classes in the intervention arm were collected by Pratham.

Statistical principles

Level of statistical significance

5% (two-sided).

Adjustments for multiplicity

None (not applicable).

Confidence intervals to be reported

Yes, 95% confidence intervals.

Definition of adherence to the intervention and how this is assessed, including extent of exposure

Villages did not all run the intervention classes in the same way. There was variability in the number of planned classes per week, the length of these and the size of classes. Also, some children who lived far from classes in their village could not be reached. This was further complicated by COVID-19, when schools were closed and no after-school classes were running. This makes calculation of measures of adherence challenging. For simplicity we will use counts of the numbers of classes (i) offered to and (ii) attended by each child. We also assume that, had the intervention run as planned, each child would have been offered 360 classes (6 classes a week for 60 weeks, corresponding approximately to a 17-month period with allowance for holidays etc.). We refer to this as the ideal number of classes. For the jth child in the ith village we will calculate, over the full follow-up period, (i) the total number of classes that were offered to that child (O_ij) and (ii) the total number of classes that that child attended (A_ij). At child level we will define adherence in three ways:
a) Attended as a proportion of ideal: A_ij / 360.
b) Offered as a proportion of ideal: O_ij / 360.
c) Attended as a proportion of offered: A_ij / O_ij.
At village level, using N_i to denote the number of children in the ith village, we will define adherence in the same three ways:
a) Attended as a proportion of ideal: Σ_j A_ij / (360 N_i).
b) Offered as a proportion of ideal: Σ_j O_ij / (360 N_i).
c) Attended as a proportion of offered: Σ_j A_ij / Σ_j O_ij.
Each measure will be summarised using means and standard deviations, and in a contingency table with adherence bands of 0, > 0 to 25%, > 25% to 50%, > 50% to 75%, and > 75% to 100% (Table 1).
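The adherence measures just defined are straightforward to compute; the sketch below is a minimal pandas illustration assuming one row per child with counts of classes offered and attended (column names are ours, not the trial's).

```python
import pandas as pd

IDEAL = 360  # ideal number of classes per child

def adherence_measures(df):
    """df: one row per child with columns village, offered, attended."""
    child = pd.DataFrame({
        "attended_vs_ideal": df["attended"] / IDEAL,
        "offered_vs_ideal": df["offered"] / IDEAL,
        "attended_vs_offered": df["attended"] / df["offered"],
    })
    totals = df.groupby("village")[["attended", "offered"]].sum()
    n_children = df.groupby("village").size()
    village = pd.DataFrame({
        "attended_vs_ideal": totals["attended"] / (IDEAL * n_children),
        "offered_vs_ideal": totals["offered"] / (IDEAL * n_children),
        "attended_vs_offered": totals["attended"] / totals["offered"],
    })
    # Bands for the Table 1 contingency table; exact zeros fall outside
    # these half-open intervals and can be tabulated as their own band.
    bands = [0, 0.25, 0.5, 0.75, 1.0]
    child["band"] = pd.cut(child["attended_vs_ideal"], bands)
    return child, village
```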
Definition of protocol deviations for the trial

Deviation from the protocol is defined as either (1) an intervention village not receiving any of the intervention during the trial intervention period, or (2) a control village receiving the intervention during the trial intervention period. Such protocol deviations will be listed.

Analysis populations

The primary analysis will follow the intention-to-treat principle. For the primary outcome, two secondary per-protocol analyses will be performed, one corresponding to each of the "attended as a proportion of ideal" measures of adherence defined above. In each case the per-protocol analysis will be restricted to those with adherence above 75%.

Screening data

The CONSORT flow diagram summarises the identification, randomisation and reasons for withdrawal of villages and children within the trial. The diagram (shown in Fig. 1) will show the numbers of villages approached but not randomised, with reasons listed.

Eligibility criteria

A village was potentially eligible if the following conditions were met:
1. Village in Satna district, except villages in the tehsils of Birsinghpur, Majhgawan and Raghurajnagar;
2. Village population less than 2500;
3. Village has more than 120 children under the age of 6 and at least 15 children eligible for the intervention;
4. Village is accessible by road;
5. Village centre is at least 5 km from a Community Health Centre (CHC);
6. Village centre is at least 3 km from the centre of any other included village.

A child was eligible if he or she was resident in a village within an eligible cluster at the time of enumeration and fit the following criteria:
1. He or she was born between 16 June 2010 and 15 June 2013;
…
5. The caregiver consented to allow the child to participate in the trial.

A child was also eligible during the catch-up enumeration (carried out before randomisation) if:
1. He or she was born between 16 June 2010 and 15 June 2013;
2. He or she was enrolled in first grade in the 2018-2019 academic year or was planning to enter first grade in the 2019-2020 academic year;
3. He or she was expected to be resident in the village during 2019-2020;
4. The caregiver consented to allow the child to participate in the trial.
Baseline patient characteristics The following baseline characteristics will be tabulated by treatment arm. No baseline hypothesis tests will be carried out. For categorical variables the overall proportions (with numerators and denominators) will be shown as will the mean and standard deviation of the cluster level proportions. For continuous variables the overall mean and standard deviation will be shown along with the mean and standard deviation of the cluster level means. Cluster-level variables ( Table 2): a) Village size b) Distance to community health center/civic hospital Individual-level variables (Table 3): Primary female caregiver (i.e., mother or other) f ) Literacy of female primary caregiver g) Education level of female primary caregiver h) Primary male caregiver (i.e., father or other) i) Literacy of male primary caregiver j) Education level of male primary caregiver k) Parents still alive at baseline l) Wealth index 1. Determined by the material the house is made of: 1. Floor, roof and wall materials all natural, 2. Some, but not all, of floor, roof and wall materials are synthetic, 3. Floor, roof and wall materials all synthetic (as in Eble et al., 2020) [3]. m) Wealth index 2. Number of Items (television, radio, motorbike, 4-wheeled vehicle) owned by the household members. Outcomes The primary outcome of the trial is the composite literacy and numeracy test score using the EGRA and EGMA, respectively (Table 4 with subgroup analysis in Table 5). A sensitivity analysis will be carried out omitting the score from EGRA subtask 5b question 1, which was judged to be potentially misleading. Secondary outcomes include the separate scores for literacy and numeracy; caregivers' engagement on child learning; enrolment in school at the end of follow-up; caregiver's report of school attendance and the cost effectiveness of the intervention. Secondary outcomes to be formally tested and a 95% confidence interval constructed are as follows. • Mathematics test score, to be calculated as a simple arithmetic mean of the percentage of correct answers on each of the six (some composite) subtasks, evenly weighting each task and not accounting for time remaining. The six subtasks are 1, 2, 3, 4 [mean of 4a and 4b], 5 [mean of 5a and 5b] and 6 ( Table 4). • Language test score, to be calculated as a simple arithmetic mean of the percentage of correct answers on each of the seven subtasks, evenly weighting each task and not accounting for time remaining. The seven subtasks are 1, 2, 3, 4, 5a, 5b and 6. A sensitivity analysis will be carried out omitting the score from EGRA subtask 5b question 1, which was judged to be potentially misleading (Table 4). • Midline test scores (mathematics and language, Table 6). • Whether child is enrolled in school at the endline survey (Table 7). • Number of hours caregiver spends engaging child in reading or writing activities post lockdown (Table 8). • Caregiver's report of school attendance; number of days of school missed in the past two weeks, conditional on enrollment. As recorded in the endline survey (Table 9). • Cost per 0.1 standard deviation improvement in the primary outcome. The standard deviation to be estimated by fitting a linear mixed model with clusterspecific random effects to the primary outcome in the control arm of the trial, with the standard deviation estimated via a summation of the between-and within-cluster variances. 
The included costs will be all costs for running the intervention and any capital costs will be amortized according to the item. It will include all costs that would occur if the trial intervention were continued without the research costs related to a trial. It does not reflect the costs that a government organization would observe if they took over the intervention. It does not include any costs to families. Secondary outcomes to be tabulated but not formally tested • Mathematics test score on the combined timed subtasks, to be calculated as a simple arithmetic mean of the fluency measures on each of timed subtasks (Table 4). Village size (total population) Distance (km) to nearest Community Hospital/Community Health Centre Family Religion: Family Caste: Child's main female caregiver Biological mother n (%) x (x) n (%) x (x) Step mother n (%) Other female family member n (%) Child's main male caregiver Step father n (%) Other male family member n (%) Main female caregiver's education: Higher secondary n (%) Main male caregiver's education: Higher secondary n (%) Missing n (%) x (x) n (%) x (x) • Language test score on the combined timed subtasks, to be calculated as a simple arithmetic mean of the fluency measures on each of the timed subtasks (Table 4). • Mathematics test score on the combined untimed subtasks, to be calculated as a simple arithmetic mean of the percentage of correct answers on each of the subtasks, evenly weighting each task (Table 4). • Language test score on the combined untimed subtasks, to be calculated as a simple arithmetic mean of the percentage of correct answers on each of the subtasks, evenly weighting each task (Table 4). • Whether child is enrolled in school pre-and post the covid lockdown (midline survey, Table 7). • Child's residence status (Table 10). • Data sources: • Grade (number 0-5) child is enrolled in during each phase of the trial (Table 11). • Specific challenges faced: Child's age mean (SD) x (x) mean (SD) x (x) Mother alive at baseline n (%) Father alive at baseline n (%) Main female caregiver's literacy: Analysis methods In the primary analysis of the primary outcome, childspecific composite test scores at endline will be compared between intervention and control arms using a linear regression model with randomisation arm and the stratification factors (and no other variables) as predictor variables. To take account of the cluster-randomisation, robust standard errors, allowing for the clustering, will be used here and elsewhere. Linear mixed models (with cluster as a random effect) which are also termed hierarchical or multilevel models are commonly used for the analysis of cluster randomised trials. The advantage of an approach using robust standard errors over linear mixed models is that homoscedasticity assumptions are not made. The adjusted difference in means will be divided by the SD of the test score in the control arm to give a standardised difference, with a nonparametric bootstrap confidence interval (bias corrected and accelerated, 2000 replications at cluster level) computed for this. Secondary outcomes that are continuous will be analysed using the same approach as above. Secondary analyses will extend the linear regression model (with robust standard errors that allow for clustering) for the primary outcome described above to (separately) investigate interactions by caste, gender, male and female primary caregiver literacy, village population and wealth. 
Adjustment for covariates

These are described in the Analysis methods section above.

Methods used to check the assumptions of the statistical methods

The linear regression models used for the primary analysis assume that residuals are normally distributed. Robust standard errors allow for potential heteroscedasticity according to levels of predictor variables, but do make an assumption of normality conditional on levels of predictor variables. This assumption will be checked by examination of appropriate quantile-quantile plots of standardised residuals. The central limit theorem ensures that results are robust provided that violations of the normality assumptions are not substantial. Minor violations, even if statistically significant, are of little practical consequence. For this reason, formal hypothesis tests of normality assumptions will not be carried out.

Alternative methods to be used if distributional assumptions do not hold

Nonparametric bootstrap confidence intervals (bias corrected and accelerated, 2000 replications at cluster level) will be reported if the normality assumptions are seriously violated.

Sensitivity analyses for each outcome where applicable

In the primary analysis, missing data will not be imputed. In secondary analyses of the primary outcome and key secondary outcomes, multiple imputation by chained equations (MICE) will be used. For analysis of clustered data it is important that the model for imputation includes cluster-specific random effects [7]. Such analyses will be carried out using the jomo package within the statistical package R [8]. Imputation will be carried out separately in each trial arm. Auxiliary variables to potentially be used will include the randomisation stratification factors, caste, gender, male and female primary caregiver literacy, the wealth indices, the adherence-to-intervention variables defined above, the midline test scores, enrolment at endline, the number of hours the caregiver spends engaging the child in reading or writing activities post lockdown, the caregiver's report of school attendance, whether or not the child is enrolled in school pre- and post the COVID lockdown, school grade at endline, the child's residence status, and the variables quantifying the learning support (and spending) provided by family, school teachers, NGOs and/or private tutors during the time when schools were closed. If the effect of the intervention is statistically significant, and remains so in the MICE analysis detailed above, then the multiple imputation analysis will also be extended to determine the amount of bias, over and above that allowed for by the multiple imputation model, that would be needed to render the primary analysis non-statistically significant.
Subgroup analyses

We will conduct subgroup analyses (Table 5) of the primary outcome by:
• Gender
• Wealth index 1 (in three categories determined by the material the house is made of)
• Wealth index 2 (in five categories determined by the number of relevant items owned by the household, with the interaction tested using a trend test)
• Caste
• Primary female caregiver literacy in 3 groups. This is to be replaced by female education if more than 10% of the participants have a missing value for literacy and education status is not missing.
• Primary male caregiver literacy in 3 groups. This is to be replaced by male education if more than 10% of the participants have a missing value for literacy and education status is not missing.
• Village population (above/below median)

For each of the above factors, statistical tests for interaction will be carried out, with claims of different effects in subgroups only made if there is strong evidence (p < 0.01) of an interaction.

Reporting and assumptions/statistical methods to handle missing data (e.g., multiple imputation)

These are described in the Sensitivity analyses section above.

Additional analyses

Additional analyses to be conducted include an economic evaluation calculating the total average cost, and the total average cost per 0.1 standard deviation improvement in the primary outcome. The standard deviation is to be estimated by fitting a linear mixed model with cluster-specific random effects to the primary outcome in the control arm of the trial, with the standard deviation estimated via a summation of the between- and within-cluster variances. The included costs will be all costs for running the intervention, and any capital costs will be amortized according to the item. This will include all costs that would occur if the trial intervention were continued without the research costs related to a trial. It does not reflect the costs that a government organization would observe if they took over the intervention. It does not include any costs to families. Also, as a result of the COVID-19 lockdowns, additional support was provided to enrolled children and their mothers. Summary data relating to this will be tabulated. Data collected included the number of direct messages sent to children and the response rate to these messages, the number of home visits received, attendance of mothers at fortnightly meetings to encourage engagement, access to and use of books at local libraries, and access to and use of a tablet providing digital learning.

[Table shell: specific challenges reported by caregivers — no smartphone; limited access to smartphone; internet connectivity issues; internet costs too expensive; lack of schoolteacher support; lack of time to help child; low knowledge of technology; child not interested; no money for a private tutor; child's progress/well-being; each reported as n (%) at individual and cluster level.]

Administrative information

This is a cluster randomised trial, with all villages (clusters) randomised in 2019. Eligible children for the STRIPES2 trial were all enrolled prior to randomisation. Endline tests and surveys for STRIPES2 were conducted in 2022. Data cleaning for STRIPES2 is ongoing, with possible return to the field for outstanding queries, prior to the anticipated data-lock in May 2023.
Data management plan

The final EGRA and EGMA (literacy and numeracy) tests will be double-entered in the main office of the research team in Satna. The database has been developed by Sealed Envelope (https://www.sealedenvelope.com), an independent company contracted to construct and maintain a bespoke database for the trial, which will also keep a periodical backup of the data.

Trial master file, statistical master file and standard operating procedures

The trial master file is part of the standard operating procedures manual. The standard operating procedures manual is available upon request. The statistical master file is held securely and may be available upon request after final analyses.

Authors' contributions

SKe and CF led the development of the first draft, with significant contributions from all authors. All authors contributed extensively to the design of the study and have contributed to, commented on and approved the final manuscript. The STRIPES2 intervention was designed by RB, DS, SSh, and colleagues from the Pratham Education Foundation team. SKa and HR provided field and data support for designing the research component. PB designed the economic analysis.

Availability of data and materials

Data sharing is not applicable to this article (a statistical analysis plan) as no datasets will be generated or analysed during this stage of the study. After publication of the initial results, the anonymised datasets used and/or analysed during the trial, with relevant statistical code, will be available from the corresponding author on reasonable request.

Declarations

Ethics approval and consent to participate

The Ethics Committees of the L V Prasad Eye Institute, Hyderabad, India (LEC 02-16-008) and the London School of Hygiene and Tropical Medicine (LSHTM Ethics Ref: 10482) have approved the trial protocol. We have obtained the necessary approvals from the Indian Council of Medical Research, New Delhi and the Government of Madhya Pradesh to conduct this trial in Satna district. The trial complies with the Declaration of Helsinki, local laws, and the International Conference on Harmonisation Good Clinical Practice (ICH-GCP). Any protocol modifications will be communicated to both Ethics Committees, and consent will be re-obtained at the village and individual (woman or caregiver) level at that point if deemed necessary. For this trial, we received approval from the Indian Council of Medical Research (ICMR), New Delhi, India. At the state level, approval of the protocol was obtained from the Department of Health & Family Welfare of the Government of Madhya Pradesh. This trial employs multiple tiers of consent: village, individual, and individual on behalf of the child. Agreement to approach eligible villages was first obtained from the Sarpanch. In the trial villages, consent was obtained from the village after the trial had been presented in a meeting with village elders representing all the castes and village residents. Verbal consent was given during a village meeting, with written documentation (or thumbprint) of the approval given by the Sarpanch. This process of obtaining consent through meetings, with approval of the "guardians" of the clusters, is common in trials in which the intervention is delivered at the level of a cluster and it is not possible to obtain informed consent for randomisation from individuals within the cluster before a baseline survey. Once the trial was accepted at the village meeting, the villages were considered eligible for baseline enumeration.
During the baseline interview, each head of household, each potentially eligible woman and one parent or caregiver of each potentially eligible child was informed in the local language (Hindi) about the trial and their participation, and asked for a signature or thumbprint to indicate their consent to join the trial. Only people who agreed to participate were enumerated. Women and caregivers of enumerated children have the right to withdraw consent at any time during the trial. This process of consent is compatible with current standards for cluster randomised trials [9].

Consent for publication

Participants (household heads, women, and caregivers on behalf of children) were informed that we would revisit the households to interview them about pregnancies, babies, and children's school enrolment so we could understand the impact of the CHAMPION2 and STRIPES2 programmes. All participants agreed that all individual information collected during interviews will be used only for research purposes and in ways that will not reveal their identity.

Competing interests

PB is the Executive Chair of EI; IF is a paid employee of EI but has no competing interests. DE and CF received research grant funding from EI but have no competing interests. SKe, NM and SiS are employed on these research grants but have no competing interests. SKa and HR receive research funding from EI but have no competing interests. RB, DS, and SSh declare a potential competing interest.

[Table shell: expenditure on education — school materials; school fees; out-of-school tuition.]
Spared, shared and lost—routes for maintaining the Scandinavian Mountain foothill intact forest landscapes

Intact forest landscapes harbor significant biodiversity values and pools of ecosystem services essential for conservation, land use and rural development. Threatened by fragmentation and by loss through transitions to industrial clear-cut forestry, these landscapes are of pivotal interest for protection that secures their intact character. With wall-to-wall land-cover data, we explored opportunities for maintaining intact forest landscapes through comprehensive spatial planning across a 2.5-million-hectare boreal to sub-alpine forest region along the eastern slopes of the Scandinavian Mountain range. We analyzed forest and woodland types that are protected, need protection or potentially can be subject to continued forest management. We established that the fraction of already clear-cut forest is very small, that the forest landscape of the Scandinavian Mountain foothills contains a high proportion of protected high conservation value forests, covering almost 2 million ha, and that over 500,000 ha (27%) remains unprotected and may be subject to future protection or continued adapted forest management. We found evident north-to-south differences with respect to forest landscape configuration, distribution of unprotected forests and land ownership. With a focus on non-industrial private landowners, we conclude that sustainable land use requires integrative, multi-functional approaches that rely on further protection, forest and forest landscape restoration, and a much larger share of continuous cover forestry than at present. Our results provide input into ongoing policy implementation and green infrastructure planning in the context of securing intact forest values and integrative opportunities for rural livelihood and regional development based on multiple value chains.

Introduction

The rare remnants of contiguous forest-dominated landscapes and mosaics of forests and associated open and semi-open land-cover types with high degrees of naturalness are crucially important, since they harbor capacity for climate change mitigation and adaptation, biodiversity conservation and multiple ecosystem services (Thom et al. 2019; Sabatini et al. 2020; Ward et al. 2020). With the pronounced human footprint on nature globally (Venter et al. 2016; Bar-On et al. 2018), there is increasing concern that the comprehensive and diverse values of the last remaining intact forest landscapes (Potapov et al. 2017) will deteriorate further (Jones et al. 2018; Watson et al. 2018; Zanotti and Knowles 2020). Expanding frontiers of clear-cut forestry (e.g. Seedre et al. 2018) into such areas is a main cause of forest and biodiversity loss worldwide (Venier et al. 2018; Mikoláš et al. 2019; Betts et al. 2021). More ambitious conservation strategies and targets are therefore promoted (e.g. Ward et al. 2020; EU 2020). For example, active restoration of intactness and naturalness (e.g. Watson et al. 2018) and multiple value chains that better maintain the multi-facetted values of forest landscapes (e.g. Angelstam et al. 2020) have become adopted at international and national levels (e.g. CBD zero draft 2020; EU 2020, 2021; SOU 2020). Intact forest landscapes are invaluable for their intrinsic values for in-situ conservation, for observing forest ecosystem responses to climate change and expansive land use, and as reference areas for forest landscape restoration (Angelstam et al. 2011; Kuuluvainen et al. 2017; UN 2019).
They further represent "mainland" areas with viable species populations that can disperse into adjacent fragmented and transformed forests, landscapes and regions. Hence, their maintained "ecological memory" (Bengtsson et al. 2003) can support and strengthen functional green infrastructure (European Commission 2013) in forest landscapes (Pickett and Cadenasso 2018; Slätmo et al. 2019; Svensson et al. 2020a). Thereby, remaining intact forest landscapes and other geographically larger components of natural and semi-natural forests represent the nodes onto which regional planning for functional ecological networks of protected forests should be built (e.g. Ward et al. 2020; Mikusiński et al. 2021). As forests and forest landscapes in Europe were largely transformed during the twentieth-century era of industrial forest management, old-growth and naturally dynamic forests are rare or missing and, consequently, recognized as priority conservation entities (Sabatini et al. 2018). This is recognized in the EU 2030 Biodiversity Strategy (EU 2020, 2021), which has set a target of 30% protection, whereof a third with strict protection, with the protected areas forming ecologically functioning networks. Furthermore, intact forest landscapes are integrated into the Forest Stewardship Council (FSC) certification standards, with direct consequences for forest management policies worldwide (Blumroeder et al. 2019; Kleinschroth et al. 2019). Accordingly, and for example with reference to the European primary forest database (Sabatini et al. 2021), remaining intact forest landscapes need to be identified, mapped and assessed regarding opportunities and threats to maintain their full range of values in view of national and pan-national environmental targets, e.g. the Aichi targets (CBD 2010), the EU biodiversity strategy and national policies. The foothills forest landscape of the Scandinavian Mountain range, i.e. the "Scandinavian Mountains Green Belt" (SMGB; Svensson et al. 2020a), includes a significant portion of the remaining intact forest landscapes in Europe (Potapov et al. 2008a, b; Heino et al. 2015; Curtis et al. 2018; Sabatini et al. 2021). Due to stricter legal regulation of clear-cutting forestry in the foothills forests above the mountain forest border (Jonsson et al. 2019), these hinterland forest landscapes harbor intact forest landscape qualities (Svensson et al. 2020a; Mikusiński et al. 2021). For the term intact forests, we here follow the definition of Potapov et al. (2008a, b, 2017) as larger (> 500 km²) mosaics of forests and natural open ecosystems that include primary forests and show no or low influence of human activities and habitat fragmentation, but where some historic human influence of, e.g. preindustrial selective tree felling, may have occurred. Primary forests are defined as naturally regenerated forests with native tree species, no clearly visible signs of human interference and where the ecological processes are not significantly disturbed (FAO 2020). Thus, primary forests constitute cores within intact forest landscapes, whose conservation status is amplified by their intact surroundings of other forests and other land-cover types.
In the Swedish mountain region, encompassing the foothills forests and the subalpine and alpine areas above the mountain forest border, close to 1.5 million ha of forests and forest-dominated landscapes are formally protected, amounting to 62% of all formally protected forests in Sweden (Statistics Sweden 2021). Recently, a forest policy inquiry (SOU 2020) proposed additional protection of the remaining high conservation value forest areas, which would result in 80% protection of the total mountain forestland area. However, a high conservation ambition for the SMGB may not necessarily exclude continued forest landscape use that supports diverse value chains built on material and immaterial values (Jonsson et al. 2019). To achieve sustainability in this region and to meet multiple land-use demands, however, future conservation and land-use strategies require a knowledge base that ensures the integrity and resilience of the intact forest values in the context of integrative planning approaches to multiple value chains (e.g. Aggestam et al. 2020; Bollmann et al. 2020).

This study focused on the amount, spatial distribution and characteristics of forests and woodlands located in the Swedish mountain region. We categorized forests and woodlands based on their conservation status into categories that represent key landscape planning components that need to be taken into account in implementing high conservation ambitions, while at the same time reflecting multiple-use opportunities and multiple value chains. The aim was to establish a planning basis that ensures maintained intactness of the SMGB based on what is spared or should be spared, i.e. current and future protection, and what could be shared with multi-objective integrative or segregative land-use strategies adjusted to conservation needs. To achieve this, we analyzed up-to-date and wall-to-wall spatial datasets of forest types, conservation values and land ownership, and discuss how their distributions can guide planning at different spatial scales, particularly concerning forests that may be subject to future clear-cutting. As such, we on the one hand provide an illustrative case of the challenges to ensure the integrity and values of intact forest landscapes of European significance, and on the other hand support multiple-value-based rural development in a rural, hinterland region. We foresee that this study will contribute to clarifying the premises for fulfilling international and national agreements on protecting outstanding ecosystems, biodiversity and ecosystem services in northern boreal and subalpine forest landscapes.

Study region

This study focuses on an 8.9 Mha territory of the Swedish mountain region (Swedish Forest Agency 1991), which was divided into four sub-regions, here termed "far south", "south", "central" and "north" (Fig. 1), following Roberge (2018). "Far south" (c. 0.9 Mha terrestrial surface; Dalarna County and the two southern municipalities in Jämtland County) is characterized by a dominance of Scots pine (Pinus sylvestris) forests and a forest floor dominated by ground lichens. "South" (c. 1.1 Mha; the three northern municipalities in Jämtland County) is characterized by a dominance of Norway spruce (Picea abies) forests, herb-rich forest floors, calcium-rich parent material and an Atlantic macroclimate. "Central" (c. 1.6 Mha; Västerbotten County) is dominated by spruce forests but with less favorable macroclimatic conditions and less fertile soils. "North"
(c. 5.3 Mha; Norrbotten County) is characterized by a dominance of pine forests, forest floors dominated by ground lichens, low annual temperatures and short vegetation periods, large wetland areas and postglacial sediments. The general bioclimatic constraints at higher altitudes and latitudes cause a large share of forests and woodlands with low site fertility, often resulting in a semi-open woodland character. With increasing altitude the forests gradually transform from conifer-dominated to deciduous woodland, with mountain birch (Betula pubescens ssp. czerepanovii) forming the alpine tree line (Hedenås et al. 2016), but with increasing occurrence of pine in the south and north. Thickets of dwarf birch (Betula nana), willows (Salix spp.) and ericaceous shrubs cover large proportions of concave and locally low-lying terrain and are gradually replaced by heaths and barren land at higher altitude. The Swedish mountain region has hinterland, rural characteristics with a low human population and less developed urban facilities, social services and road networks (Statistics Sweden 2019). Traditional small-scale forestry and mountain farming have declined, while industrial clear-cut forestry is maintained, but at a decreasing level since the 1990s (Jonsson et al. 2019). Tourism and outdoor recreation contribute to local livelihood, with both more developed facilities and nature-based wilderness adventures. Wind- and hydro-power production facilities and mines occur and have significant impact in certain places (Jansson et al. 2015; Svensson et al. 2020b). A unique feature is the indigenous Sami people culture and reindeer (Rangifer tarandus) husbandry, which contribute substantially to the comprehensive and diverse landscape values (Blicharska et al. 2017). The state is the dominating landowner, but private forest incorporates, forest commons and non-industrial private ownership also occur.

Data and analysis

With a focus on forests and forest landscapes, this study is based on the most recent remote sensing-generated land-cover data, inventories of high conservation value forests and mapping of not clear-cut forests. We applied the most recently updated (Swedish EPA 2019) high-resolution (10 × 10 m raster) national land-cover data (NMD) for the areal coverage and spatial distribution of different forest types and woodland areas. We followed the NMD classification of forest types based on dominating (≥ 70%) tree species, with a distinction between forests located on high and low productivity sites (a division based on site capacity to support tree growth of ≥ 1 m³ ha⁻¹ per year over a rotation cycle, applying the terminology used by Hämäläinen et al. 2019). Economic-oriented forestry in Sweden is legally restricted to high productivity sites following the above definition, and this division is thus relevant as a proxy for assessing actual and potential clear-cut forestry. The forest types included in the analysis were pine forest, spruce forest, mixed coniferous and deciduous (mainly mountain birch and hairy birch; Betula pubescens) forest and pure deciduous forest. We also included recently clear-cut forest areas and woodlands (< 5 m height mountain tree and shrub vegetation on wetland and other semi-open land cover), also using the NMD data. We applied data on proxy continuity forests (pCF; Ahlcrona et al. 2017; Svensson et al. 2019), i.e. mature and old forests not systematically clear-cut during at least the last c. 70 years.
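To make the classification rules concrete, the following is a minimal sketch, in Python, of how a single NMD-style pixel could be assigned a forest type and productivity class under the thresholds described above; the function name and the tie-breaking behavior are illustrative assumptions, not part of the NMD specification.

```python
# A minimal sketch, assuming per-pixel species fractions and a site
# productivity estimate; thresholds follow the text above.
def classify_forest_type(pine: float, spruce: float, deciduous: float,
                         productivity_m3_ha_yr: float) -> str:
    """Return a coarse NMD-style forest-type/productivity label for one pixel."""
    fractions = {"pine": pine, "spruce": spruce, "deciduous": deciduous}
    dominant, share = max(fractions.items(), key=lambda kv: kv[1])
    # A species (group) must reach >= 70% cover to define the forest type.
    forest_type = dominant if share >= 0.70 else "mixed"
    # Sites supporting >= 1 m3 ha-1 per year count as high productivity.
    site = "high" if productivity_m3_ha_yr >= 1.0 else "low"
    return f"{forest_type} forest, {site} productivity"

print(classify_forest_type(0.8, 0.1, 0.1, 1.4))  # -> "pine forest, high productivity"
```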
Given the late establishment of systematic clear-cutting forestry in this region (Kuuluvainen et al. 2017), large proportions of the pCF have thus not experienced systematic clear-cut forestry with a transition to even-aged systems. For delineating forests with confirmed significance for biodiversity conservation, we applied the high conservation value forest (HCVF) dataset (Anon 2017) and added the recent (2020) inventory data, which was produced specifically as input to the mountain forest section of the forest policy inquiry (SOU 2020; Henriksson and Olsson 2020). The HCVF areas were further separated into protected and unprotected forests using spatial datasets from Statistics Sweden (2021). We overlaid the HCVF and pCF datasets to spatially analyze and visualize forest areas of different status. This resulted in four main categories: (1) unprotected HCVF, i.e. not protected forests with known and documented conservation values; (2) unprotected pCF, i.e. not protected forests with undocumented, or without, high conservation values; (3) protected HCVF, i.e. forests already set aside for conservation; and (4) clear-cut forest, i.e. areas classified as recently clear-cut or generally without forest cover (see also supplementary material Figure S1). Here, protected forests are the formally protected areas according to the Swedish Environmental Code, Land Code and State agreements (Statistics Sweden 2021). Thus, voluntary set-asides and general consideration areas were not considered. Thereby, the currently unprotected and not previously clear-cut forest areas (categories 1 and 2, i.e. included in the HCVF and/or pCF datasets) refer to forest segments that can become subject either to additional protection or to some form of continued forest management. To assess the distribution and characteristics of forests that are potentially available for forest management, but also have a potential to further strengthen the intact landscape values, we focused the analysis on the high productivity forests outside formal protection. More specifically, we focused on forests that have already confirmed (unprotected HCVF) or possibly harbor (unprotected pCF) high conservation values, which were separated into forest types, patch-size classes and landowner categories for each of the sub-regions.

[Figure caption: The four forest-type categories assessed: non-pCF, i.e. forestlands that are not mapped as proxy continuity forests; unprotected pCF; unprotected high conservation value forests (HCVF); protected HCVF. Other land-cover classes than forest, including mountain woodlands, are shown in black.]

For landownership, we used data provided by the Swedish Environmental Protection Agency (Henriksson and Olsson 2020). We separated landowners into three categories: (i) public, including the state Property Board, the Swedish Environmental Protection Agency (formally protected areas), the Fortification Agency, municipalities and administrative region units; (ii) private incorporates, including private forest industry companies, the state Sveaskog forest company, the church and commons (due to their forest-industry behavior in Sweden; cf. Holmgren et al. 2007); and (iii) non-industrial private forest owners (NIPF), encompassing private person ownership polygons < 1000 ha. Here, a polygon is to be understood as one spatial administrative entity, but since an ownership can include several polygons, this cannot directly be translated to separate owners.
This landowner categorization is applicable at the scale of the study region as a generic approach to how forest management is practically exercised in Sweden, but locally, as well as over time, there is variation within the categories. We assessed the spatial and patch-size distribution of unprotected pCF and unprotected HCVF forests on high productivity forestlands for each of the four sub-regions and for the entire study region. Using the Python package SciPy ver. 1.1.0, we identified all forest patches ≥ 1 ha in an eight-pixel neighborhood structure (i.e. all surrounding pixels around each pixel). The area of pCF fragments < 1 ha was estimated using the original 10 × 10 m data resolution. We re-sampled the pCF raster to a coarser grid (1 ha; 100 × 100 m) through mode-based aggregation (≥ 50% pCF). A rasterized land-ownership vector layer was used to analyze the distribution of unprotected high productivity forests for different patch-size classes and forest types for each ownership category and for all sub-regions.

Results

Forestlands above the mountain forest border (Fig. 1) cover in total 2.54 million ha, of which 56.5% is formally protected (Table 1). Woodlands cover an additional 950,000 ha. High conservation value forests cover almost 2 million ha, whereof 73% are protected. Deciduous forest is the most abundant forest type across all forestland (35%) and all protected HCVF (34%), followed by spruce forests (27% and 30%, respectively). Low productivity sites cover a larger proportion of forestlands (56%) than high productivity sites, especially for protected HCVF. For high productivity forests, both in total and for protected HCVF, spruce dominates, followed by mixed forests and pine forests.

[Table 1 caption: Total forestland, proxy continuity forests (pCF), forest loss and woodland area (in 1,000 ha) on high and low productivity sites and combined, for all forestland, all high conservation value forests (HCVF) and protected HCVF (P. HCVF). The data are presented for five main generalized forest types (pine, spruce, mixed and deciduous, and recently logged forests) and in total. Footnote: All estimates are derived from the original dataset with 10 × 10 m spatial resolution. Area calculations are nested, with protected HCVF being a share of all HCVF and all HCVF being a share of all forestland and pCF, respectively. A minor fraction of forestlands that are not pCF, i.e. recently logged forests, has been classified as HCVF. This is likely an effect of using independent spatial data, as recently logged forest is provided by NMD (Metria 2019), pCF by Ahlcrona et al. (2017), and HCVF by Anon (2017) and Henriksson and Olsson (2020).]
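The overlay categorization and the SciPy-based patch identification described under Data and analysis can be sketched as follows; this is a minimal sketch assuming co-registered boolean rasters at the original 10 × 10 m resolution, with illustrative array and function names rather than the authors' actual code.

```python
import numpy as np
from scipy import ndimage

PIXEL_HA = 0.01  # one 10 x 10 m NMD pixel = 0.01 ha

def categorize(forest, hcvf, pcf, protected):
    """Combine co-registered boolean rasters into the four status categories."""
    cat = np.zeros(forest.shape, dtype=np.uint8)
    cat[forest & hcvf & ~protected] = 1          # (1) unprotected HCVF
    cat[forest & pcf & ~hcvf & ~protected] = 2   # (2) unprotected pCF, values undocumented
    cat[forest & hcvf & protected] = 3           # (3) protected HCVF
    cat[forest & ~pcf & ~hcvf] = 4               # (4) recently clear-cut / non-pCF
    return cat

def patch_areas_ha(mask):
    """Label contiguous patches in an eight-pixel neighborhood and
    return the areas (ha) of all patches of at least 1 ha."""
    eight = np.ones((3, 3), dtype=int)           # 8-connectivity structure
    labels, n = ndimage.label(mask, structure=eight)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    areas = sizes * PIXEL_HA
    return areas[areas >= 1.0]

def aggregate_to_1ha(mask):
    """Mode-based aggregation of a 10 m boolean raster to a 1 ha
    (100 x 100 m) grid: a coarse cell is kept when >= 50% of its
    10 x 10 constituent pixels are True."""
    h, w = mask.shape
    blocks = mask[: h - h % 10, : w - w % 10].reshape(h // 10, 10, w // 10, 10)
    return blocks.mean(axis=(1, 3)) >= 0.5
```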
Pine forests show a disjunct distribution with high abundance in the north and far south sub-regions. Spruce and mixed forests are more contiguous, and deciduous forests are more widespread and most abundant in the central and north sub-regions. Deciduous forest occurs on higher altitude along valleys across the mountain range, as within woodlands that are more abundant and contiguous in the north sub-region. Unprotected HCVF and pCF covers 39% of the forestland area and with almost equal shares on high productivity and low productivity sites (Fig. 3). The share of unprotected HCVF is greater on high (36%) than on low productivity sites (20%). In total across the study region, unprotected HCVF covers 532,000 ha (see Table 1). On high productivity sites, unprotected spruce forests dominate with an area of 216,000 ha of which 168,000 ha being HCVF. On low productivity sites, unprotected deciduous forests dominate with an area of 358,000 ha of which 98,000 ha being HCVF. Across the entire study region, unprotected HCVF and pCF mainly occur in patches > 100 ha, whereof 29% in large (> 1000 ha) patches (Table 2, see also supplementary material table S2). A lower share occurs in smaller patches; 16% of the area is in patches < 10 ha and 7% in patches < 1 ha. The largest contiguous areas are in the central and south sub-regions. The north sub-region has a low share (7%) of large (> 1000 ha) patches of unprotected forests, and only the central sub-region has the largest share (45%) in large patches (> 1000 ha). The north and the far south sub-regions have the most even area distribution across patch size classes. The sub-regions show clear differences in distribution of unprotected HCVF and pCF (Fig. 4), with the north sub-region standing out with a small and scattered area and the central sub-region with concentrated and contiguous areas. There are marked differences in forest ownership patterns among the sub-regions (Fig. 5, see also supplementary material table S2 and S3). Across the entire study region, NIPF and private incorporates own the largest share of unprotected high productivity forests, 42% and 38% respectively, across all patch size classes. The dominating patch size class is 100 to 1,000 ha for all sub-regions and ownership categories, with the exception of NIPF ownership in the northern subregion. However, private incorporates clearly dominate for the largest patch-size class, except in the far south subregion. For patches up to 100 ha, NIPF owners dominate in all sub-regions, and in the central and south sub-regions also in patches 100 to 1000 ha. The public ownership of unprotected forests is generally small. With many different ownership polygons and a low average area of unprotected HCVF and pCF, the ownership structure is very complex in the far south sub-region in comparison with other sub-regions (Fig. 6). Of a total of 41,692 landowner polygons in the entire study region, 17,486 are in the far south with an average area of high productivity forestland per ownership polygon of 1.8 ha. In comparison, the corresponding average area in the south sub-region is 6.6 ha. The north sub-region is characterized by small and scattered forests whereas the central sub-region by larger and contiguous areas. The largest areas of unprotected HCVF and pCF are in the central and south sub-regions on private incorporate and NIPF ownership, with the former owner category mainly for patches larger than 1,000 ha. 
For smaller patches, NIPF owners dominate in all sub-regions, and in particular in the central and south sub-regions. The public ownership of unprotected HCVF and pCF is small in comparison. NIPF and private incorporates own the majority of the unprotected HCVF and pCF, with the latter category being the dominating owner of large patches.

High conservation values in the Scandinavian Mountains Green Belt

Despite historical forestry and other land uses and cultures, high forest connectivity and continuity occur widely across the SMGB, but still with scattered gaps (Mikusiński et al. 2021). Large and contiguous intact forest patches are concentrated in the area above the mountain forest border (Svensson et al. 2020a) and are thus geographically connected to an over 3 million ha alpine environment, forming a magnificent landscape with very high ecological and cultural values (Blicharska et al. 2017). Given the extensive transformation of forests and forest landscapes elsewhere across the Fennoscandian boreal forests (Heino et al. 2015; Kuuluvainen et al. 2012; Jonsson et al. 2019) and the severe loss of primary forests and intact forest landscapes in Europe (e.g. Sabatini et al. 2021), the SMGB stands out as a northern European mainland for intact forest landscapes. As for other clusters of old-growth and primary forest areas in Europe and globally, a high protection ambition is strongly motivated for the SMGB.

[Fig. 2 caption: Spatial distribution of all proxy continuity forests (pCF) and the distribution of pine, spruce, mixed and deciduous forest types, separated into high productivity (green) and low productivity and woodland (yellow) sites. On the pine, spruce, mixed, deciduous and woodland maps, the grey background shows the distribution of all pCF not falling into the focal category. The distribution is generalized through re-sampling for visual purposes. The study region is delineated by a grey line and the sub-regions by dashed lines.]

This is also the key conclusion in the recently launched forest policy inquiry (SOU 2020), which, to fulfill the national commitment to conserve biodiversity (Aichi target #11; CBD 2010), suggests setting aside the vast majority of the remaining not yet protected HCVF. Still, the SMGB partially contains fragmented forests with disrupted intactness, in particular in the southern parts, where the remaining unprotected forests also occur in smaller patches separated into numerous polygons with a predominance of NIPF ownership. The high conservation values of the SMGB have been well known for a long time; indeed, the first national parks were established here already in 1909 (Statistics Sweden 2021). Overall, however, the current intactness is dependent both on the already protected and the not yet protected forests (Mikusiński et al. 2021). The recent (2020) inventory (Henriksson and Olsson 2020) identified more than 550,000 ha of unprotected HCVF, including areas (c. 200,000 ha; ibid.) adjacent to but below the mountain forest border. These forests extend eastwards into the inland region and thus have the potential to provide a functional ecological network into the more transformed inland region. Thereby, identification of areas in the SMGB and in its vicinity that need additional protection, as well as areas that potentially allow continued forest management and other land uses, is a needed planning basis for supporting multiple forest value chains.
Although the data and categorizations used are broad, our analyses are novel at the scale of the entire SMGB and provide urgently needed input into future national strategies for the SMGB and the mountain landscapes in general.

Spared, shared and lost

A long-term and diverse land-use history (e.g. Josefsson et al. 2010) has generated substantial heterogeneity and varying potentials for maintaining the intact forest landscape values across the whole SMGB. A significant proportion is already spared; the formally protected area constitutes close to 57% of all forestland with high and low productivity forests, woodlands and adjacent semi-open and open habitats, and the habitat network functionality is high (Mikusiński et al. 2021). The unprotected forests are currently debated (SOU 2020), of which our results show that 532,000 ha are documented as HCVF and 928,000 ha as pCF (i.e. not documented but potential HCVF). Of these totals, close to 400,000 ha are unprotected high productivity sites that may be available for some form of continued forestry. We argue that all HCVF need formal protection or voluntary conservation-targeted management strategies.

[Fig. 3 caption: Non-pCF (forest loss), proxy continuity forest (pCF), unprotected high conservation value forest (HCVF) and protected HCVF. (a) Proportion (%) of high and low productivity sites of all forestland area, (b) area (in 1,000 ha) of pine, spruce, mixed and deciduous dominated forests on high productivity sites, and (c) on low productivity sites.]

[Table 2 caption: Area (in 1,000 ha) of unprotected proxy continuity forest, including unprotected HCVF, on high productivity sites, separated into patch-size classes (≤ 1 ha, 1–<10 ha, 10–<100 ha, 100–<1000 ha, ≥ 1000 ha, and total) and summarized for all forestland and for the north, central, south and far south sub-regions. Footnote: The minimum mapping unit was a 1 ha (100 × 100 m) pixel dominated (≥ 50%) by unprotected proxy continuity forest (pCF) on high productivity sites. Thus, the column ≤ 1 ha includes fragmented forest patches that together cover < 50% of the 100 × 100 m pixel, estimated using the original (10 × 10 m) data resolution.]

Thus, substantially larger areas than what is currently formally protected should be spared, but other areas may be shared if the forest management methods do not compromise the intactness of the SMGB and the opportunities for other sustainable land-use interests and values. We found that 190,000 ha of forest have been clear-cut, corresponding to 7% of the total forestland area, 8% of the pCF area and 19% of the high productivity forest area. Large clear-cuts in climatically constrained areas before the 1990s (Jonsson et al. 2019) have often created degraded lands (so-called fossilized clear-cuts). Still, the proportion of clear-cut forests is low, which suggests that the regional importance of clear-cut forestry is limited (Jonsson et al. 2019). These areas can be considered as lost and do not contribute to intact forest landscape values. Here, we see two optional strategies. First, to either actively or passively promote forest restoration that over time adds to the intact characteristics, including, for example, allowing natural regeneration and subsequent succession after clear-cutting, selectively favoring deciduous tree species, and favoring trees with cavities and other biodiversity attributes in mature forests.
Passive promotion embeds natural stand development, which would be particularly valuable for young deciduous-dominated forests that are critically missing in the Swedish boreal forest due to long-term active wildfire suppression and a forest management system that systematically favors coniferous stand development (Mikusiński et al. 2003). In Fig. 7, we illustrate how green infrastructure, forest landscape restoration, continuous cover forestry, clear-cut forestry and integrative multi-functional landscape planning can be approached to maintain the intact values of the SMGB. Conservation of biodiversity and provisioning of ecosystem services are focal in the green infrastructure concept (e.g. Slätmo et al. 2019), with already protected and unprotected HCVF, as well as unprotected pCF, representing key components. Since some connectivity gaps in the SMGB have been documented (Svensson et al. 2020a), and because the area of suitable habitats needs to be increased (Mikusiński et al. 2021), there are reasons for considering forest and forest landscape restoration (cf. Mansourian 2018) in active or passive ways, as discussed above. We also argue that continuous cover forestry can be promoted, in particular if viewed as an approach to managing forest ecosystems based on their inherent diverse values and premises (Mason et al. 1999). With the conservation of the intact forest landscape values of the SMGB as a central goal, the range of land-use and management options calls for a comprehensive landscape approach (e.g. Arts et al. 2017) reflecting multiple values and balanced integrating and segregating approaches (Côté et al. 2010; Messier et al. 2019; Aggestam et al. 2020; Bollmann et al. 2020).

Opportunities for multiple value chains supporting rural development

The rich and diverse pool of natural resources and landscape values in the Swedish mountain region has generated a situation where multiple, diverging land-use claims overlap and where the combined land-use claims for economic, ecological and socio-cultural purposes substantially exceed the available land area (Svensson et al. 2020b). This implies a risk that some value chains dominate at the expense of others, which may lead to land-use conflicts and accelerating difficulties in the land-use priority decisions needed to resolve them (Bjärstig et al. 2018). Building capacity for spatiotemporal and multi-objective resolution in sustainable landscape planning allows for diversified land use for multiple value chains (cf. Felton et al. 2020; Angelstam et al. 2020).

[Fig. 4 caption: The distribution of unprotected proxy continuity forest, including unprotected HCVF, on high productivity sites. The distribution is generalized through re-sampling for visual purposes. The study region is delineated by a grey line and the sub-regions by dashed lines.]

As in many other hinterland regions, sustainable local and regional development calls for value-chain avenues that are based on the broad spectrum of natural resources and landscape values with strong local use and control (Chiasson et al. 2019; Sténs et al. 2016). Besides forestry, Sami culture, including reindeer husbandry, and recreation and tourism represent pronounced value chains (e.g. Fredman and Emmelin 2001; Jansson et al. 2015). It can be assumed that clear-cut forestry will also occur in the future, albeit on limited areas, respecting the generally low site fertility and lack of historical legitimacy, but foremost respecting the conservation integrity of the SMGB.
In the forest policy inquiry (SOU 2020), 240,000 ha of forestland above the mountain forest border were identified as potentially available for continued forestry. Here, a minimum of overlap with core areas for nature conservation and other land-use interests will have to be secured. In the context of other value chains, evidence is rapidly accumulating on the favorable outcomes of continuous cover forestry in terms of economic viability (e.g. Nieminen et al. 2018), multifunctional capacity (e.g. Eyvindson et al. 2021) and biological functions of soils (e.g. Kim et al. 2021), and with less negative impact on forest biodiversity (e.g. Peura et al. 2018).

[Figure caption, fragment (likely Fig. 5): ... (cf. Table 2) on high productivity sites, separated into patch-size classes per ownership category for the north, central, south and far south sub-regions. Patch sizes ≥ 1,000 ha for NIPF are patches that include more than one NIPF ownership polygon.]

Since achieving multiple services and goods from forest environments is difficult at the local level, a diversification of management regimes at the broader landscape level supports a broader palette of biodiversity outcomes and ecosystem goods and services (e.g. Triviňo et al. 2017; Felton et al. 2020). Hence, a landscape perspective is needed for forest management, which, however, has so far been arduous to promote and realize outside specifically designated areas such as the Sveaskog State forest company Ecoparks (Bergman and Gustafsson 2020). Sapmi, the native land of the Sami people, covers large areas in northern Europe, including the SMGB, and contributes high-profile indigenous values (Pape and Löffler 2012). The presence of a vital Sami culture, with continued reindeer husbandry and grazing that maintains the openness and scenery in the mountain landscape, is essential for the provisioning of a very large range of specific ecosystem services (Jansson et al. 2015; Blicharska et al. 2017; Hedblom et al. 2020).

[Fig. 7 caption: Illustration of how approaches to green infrastructure, forest landscape restoration, continuous cover forestry, clear-cut forestry and integrative multi-functional landscape planning can be allocated among the forest-type categories and with respect to spared, shared and lost intactness. The vertical width of the light grey horizontal fields approximately equals their area proportions (Table 1). The horizontal width of the dark grey boxes represents full, moderate and minor extent and importance in each category. Dashed extensions of forest landscape restoration indicate management to favor biodiversity values if needed to secure intactness, and of continuous cover forestry to favor forest biodiversity that benefits from canopy thinning if needed (i.e. as an approach to the management of forest ecosystems and not a wood biomass production system). The shared proportion in protected (minor extent) and unprotected (moderate extent) HCVF concerns other nature-based land use that does not negatively affect nature conservation values, and reindeer husbandry or wildlife tourism and recreation. The figure backgrounds how strategic, tactical and operational spatial planning can be developed.]

Also, small-scale mountain farming has contributed to the overall biodiversity, multifaceted values, open and semi-open landscape character and to the amenity, recreation and tourism values of the Scandinavian Mountains, which are clearly mirrored in the "A Magnificent Mountain Landscape" national environmental objective (Swedish EPA 2007).
It can be assumed that maintained intact forest landscapes are needed also for maintaining and supporting the cultural heritage and societal values of the SMGB. Furthermore, with reference to the terms intact forest landscapes and primary forest (Potapov et al. 2008a, b, 2017; FAO 2020), both terms embed the presence of historical land use and indigenous cultures. By area cover, and also locally in many places, outdoor recreation and tourism are dominant land uses, with both nature-based and place-based facilities such as ski resorts (Fredman and Emmelin 2001; Svensson et al. 2020b). There are examples where incomes from tourism in regions with particularly high biodiversity values exceed those from wood biomass production (e.g. Czeszczewik et al. 2019). The touristic attractiveness of the Swedish mountain region is unquestionable, and the process of change from intensive use of natural resources into "soft" sectors has been established for a long time (Fredman and Emmelin 2001; Lundmark 2005). As tourism and recreation aspects are predicted to become an even more important value chain for rural development in the future (Jonsson et al. 2019), the need for spatial planning that accommodates the uptake of local as well as visitors' perspectives on values and opportunities becomes emphasized. In addressing opportunities and challenges for sustainable forest landscape management as a key component for rural development, the complex land-ownership situation needs to be considered. In this study, we have categorized landowners with a focus on NIPF owners and identified complexities and differences in the ownership structure of unprotected high productivity forests. The NIPF ownership requires particular attention in relation to sustainable rural development, but also to green infrastructure, forest landscape restoration, continued forestry and planning. Here, a certain challenge lies in promoting ways forward to handle the existing connectivity gaps in the southernmost part of the SMGB (Svensson et al. 2020a; Mikusiński et al. 2021), where most NIPF owners occur, where the ownership structure is the most complex and where there are limited opportunities for using public land as land-exchange compensation for further protection. To support preservation of the intact values of the SMGB in relation to sustainable development in the hinterland mountain region, policies, policy instruments and policy implementation must continue to develop. As one way forward, we suggest that voluntary agreements, such as conservation agreements under the Swedish Land Code (1970), could be more extensively applied in parallel to strict protection instruments. Conservation agreements are more nuanced formal regulations and may allow continued forest management if in accordance with biodiversity conservation targets, or for favoring nature-based recreation or other socio-cultural values, for which landowners are compensated economically; they can be further developed for multiple values, including landscape perspectives and Sami people reindeer husbandry. The abovementioned State forest company Sveaskog Ecoparks (Bergman and Gustafsson 2020) are regulated through such agreements. There is an embedded capacity in policy instruments that rely on voluntary, dialogue and mutual agreement principles with the landowner (Widman and Bjärstig 2017), a capacity that is needed for successful protection of the intact values of the SMGB in the view of integrated approaches to sustainable local and regional development.
We foresee that this study will help to direct further implementation based on what specific local values are at stake and who has control and vested interests given the natural and cultural capital.

Conclusions

The Swedish mountain region harbors high intact forest landscape and conservation values, but also further values associated with multiple economic and socio-cultural value chains, including those based on the indigenous Sami people culture. Despite a need to expand nature conservation, these multiple values challenge an overall strict protection approach. Using wall-to-wall land cover data, we provide a point of departure for maintaining intact forest landscape characteristics through strategic spatial planning of forestlands and future forest management. We show that in the SMGB, the fraction of actually clear-cut forest is very small and that the SMGB harbors a very high proportion of protected but also unprotected HCVF, located in a predominantly natural landscape context with woodlands and open alpine land-cover types. Forest management aimed at wood biomass production will continue, but sustainable approaches require increased use of continuous cover forestry and a sensitive implementation with respect to ownership and policy regulations. This study contributes to evidence-based regional-level green infrastructure planning, to the opportunities in applying a comprehensive and integrated sustainable landscape approach, and to exploring multiple opportunities for rural livelihood and regional development. This study also contributes to the current forest policy discussion in Sweden, which suggests that the SMGB should be maintained as an intact forest landscape and thus as a cornerstone in the Swedish and EU nature conservation agenda. As such, we provide an illustrative case of the challenges to both ensure the integrity and values of an intact forest landscape of national and international significance, and support regional development. Clearly, a deeper stratification into, e.g., the distribution of habitat types and specific values is needed for future assessments, for example via a second-step field-based inventory at representative and specific segments of the study region.

Acknowledgements We acknowledge data assistance by Birgitta Olsson, Swedish Environmental Protection Agency, and Wiebke Neumann, Swedish University of Agricultural Sciences, and comments on a revised version of the manuscript by Peter Bergman, Sveaskog, and Tommy Ek, Sveaskog and committee secretary for the 2020 forest policy inquiry.

Funding Open access funding provided by Swedish University of Agricultural Sciences. This study was funded by the Swedish Environmental Protection Agency, grant NV-03728-17, to Bengt Gunnar Jonsson and supported by FORMAS, grant 2017:1342, to Per Angelstam.
Biocidal effects of organometallic materials supported on ZSM-5 Zeolite: Influence of the physicochemical and surface properties

Antifouling coatings containing biocidal agents can be used to prevent the accumulation of biotic deposits on submerged surfaces; however, several commercial biocides can negatively affect the ecosystem. In this study, various formulations of a potential biocide product comprising copper nanoparticles and capsaicin supported on zeolite ZSM-5 were analyzed to determine the influence of the concentration of each component. The incorporation of copper was evidenced by scanning electron microscopy and energy dispersive spectroscopy. Similarly, Fourier-transform infrared spectroscopy confirmed that capsaicin was supported on the zeolite surface. The presence of capsaicin on the external zeolite surface significantly reduced the surface area of the zeolite. Finally, bacterial growth inhibition analysis showed that copper nanoparticles inhibited the growth of the strains Idiomarina loihiensis UCO25, Pseudoalteromonas sp. UCO92, and Halomonas boliviensis UCO24, while the organic component acted as a reinforcing biocide.

Introduction

Marine biofouling is an unwanted adherence phenomenon that results in the accumulation of biotic deposits on artificial surfaces submerged in or in contact with seawater [1,2]. Biofouling is a complex phenomenon that involves several species, ranging from microorganisms such as bacteria to invertebrates [3], and typically consists of two main stages: micro- and macro-fouling [4]. During the first stage, bacteria begin to adhere to the surface, forming a microfouling biofilm. During the subsequent macrofouling stage, larger organisms such as algae and invertebrates adhere to the surface [5].
Biofouling affects several economic spheres; for instance, biofouling gradually increases the fuel consumption of fishing vessels owing to increased drag resistance and accelerates the corrosion of maritime infrastructure [3,6]. In aquaculture, biofouling restricts the opening of nets, significantly increasing their weight and hindering the elimination of waste products [7,8]. Similarly, the adhesion of marine species to the surfaces of boats can result in the transport of exotic species, including invasive organisms, to environments other than their natural habitats, thereby negatively impacting native biodiversity [9]. Antifouling coatings can be applied to a surface to inhibit the growth rate of organisms [2,10,11]; however, these coatings can damage marine microbiota, flora and fauna, particularly when they contain biocides with high toxicity against target and non-target organisms, and thus threaten ecological systems [12,13]. Therefore, the design of materials that employ environmentally friendly biocides in place of conventional biocides is of great importance to the scientific community [6,14].

Antifouling coatings commonly contain inorganic, organometallic, or organic biocidal agents [15]. Most commercially available antifouling products contain copper oxides (CuO and Cu₂O), silicon oxide (SiO₂), zinc (Zn), and titanium oxide (TiO₂) as the main compounds [16]. They may also contain organic compounds that mitigate the toxic effects of the metal oxides on organisms that do not contribute to biofouling [17]. Copper nanomaterials have become important biocidal agents owing to their high effectiveness in inhibiting inlay formation [15,18], which results from the ability of copper ions to destroy the outer cell membranes of microorganisms [19,20].

Moreover, the addition of an organic agent can reinforce the biocidal effect, thereby affording greater control over the adhesion of encrusting organisms [15,17]. Capsaicin, an amide alkaloid with pH-sensitive antibacterial and anti-inlay properties, is commonly used to formulate such functional coatings [14,21,22]. However, directly introducing biocidal materials into the formulation of the coating results in a small contact area between the biocide and the microorganisms, along with uncontrolled release of the biocide into the marine environment, thereby limiting the duration of the required effect. Depositing biocidal materials on a support matrix is therefore necessary to increase the contact surface area and the retention of the biocidal agents, thus regulating their release into the marine environment [23].
In this way, the active compounds (organic, inorganic, and organometallic) can be incorporated on the surface of supports such as zeolites or other microporous materials. The large surface area increases the availability of copper ions and capsaicin within the pores and promotes the interaction between the biocide and the biofilm, thereby maintaining the biocidal effect over time [24]. Zeolites are micro- and mesoporous alkaline earth metal aluminosilicates with tetrahedral [SiO₄]⁴⁻ and [AlO₄]⁵⁻ networks interconnected by oxygen atoms. This porous network is sufficiently large to contain extra-structural cations within its many molecular-scale cavities and channels, allowing the zeolite material to adsorb molecules ranging in size from diatomic hydrogen to organic compounds measuring several nanometers [25,26]. Similarly, the chemical structure of zeolites and the electronegativity imposed by the aluminates allow the incorporation of compensation cations, such as transition metal cations, through physical and thermochemical transformations [25,27].

Owing to these properties, along with their high mechanical and thermal stabilities, zeolites have several applications as adsorbents, catalysts, ion exchange matrices, antimicrobial materials, and filter media [28,29]. Specifically, ZSM-5 zeolite is a shape-selective material with uniformly sized pores and has shown excellent results in supporting metals and some organic molecules, enhancing their controlled delivery [30,31]. The three-dimensional network of channels and cages present in ZSM-5 zeolite could favor the controlled diffusion of copper within the zeolite pores. The ion-exchange properties could allow the incorporation of copper nanoparticles, which are stabilized within the zeolite structure. On the other hand, preliminary works have shown that this zeolite presents some pores of larger size, even some mesopores, where the capsaicin molecule could be allocated, thus allowing the combined action of both biocidal agents and enhancing their availability and contact surface [32-34].

Materials synthesis

Cu particles and capsaicin were incorporated into the synthesized zeolite via consecutive metal exchange and organic modification procedures, respectively. Each of these processes is described in detail below.

Impregnation of copper particles into the zeolite

The ZSM-5 zeolite (Z40) was impregnated with copper particles within the pores and on its outer surface using a wet impregnation method. The copper precursor, copper nitrate trihydrate, was dissolved in deionized water. The solution volume was determined at a ratio of 10 cm³ of copper solution to 1 g of zeolite. The concentration of this copper nitrate solution was set to obtain a nominal copper content (Cu %) of 2%, 4%, or 8%, as calculated using Equation 1:

Cu % = (w_Cu / w_Z40) × 100    (1)

where w_Cu is the mass of copper contributed by the dissolved precursor (i.e., the precursor mass multiplied by the relative mass of the copper atom within the Cu(NO₃)₂·3H₂O molecule) and w_Z40 is the mass of the ZSM-5 zeolite before modification. The zeolite was then suspended in the solution and continuously mixed under vacuum at 363 K in a temperature-controlled rotary evaporator (EV400H, LabTech). The samples were dried for 24 h at 373 K in an oven (WGLL-BE, FAITHFUL) and stored in a desiccator until use.
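As a worked example of Equation 1 (and, for completeness, the analogous nominal capsaicin loading of Equation 2 in the next subsection), the sketch below computes the precursor masses and solution concentration for an assumed 5 g zeolite batch; the batch size is a hypothetical example and the molar masses are standard values.

```python
# Standard molar masses (g/mol); the 5 g batch size is a hypothetical example.
M_CU, M_PRECURSOR = 63.55, 241.6              # Cu and Cu(NO3)2*3H2O
W_CU = M_CU / M_PRECURSOR                     # mass fraction of Cu in the precursor

def precursor_mass_g(m_zeolite_g: float, cu_pct: float) -> float:
    """Mass of Cu(NO3)2*3H2O giving the nominal Cu % of Eq. 1."""
    return (cu_pct / 100.0) * m_zeolite_g / W_CU

def capsaicin_mass_g(m_base_zeolite_g: float, cps_pct: float) -> float:
    """Mass of capsaicin giving the nominal cps % of Eq. 2 (next subsection)."""
    return (cps_pct / 100.0) * m_base_zeolite_g

m_z = 5.0                                     # g of zeolite in the batch
for pct in (2, 4, 8):
    m_p = precursor_mass_g(m_z, pct)
    vol = 10.0 * m_z                          # 10 cm3 of solution per g of zeolite
    print(f"Cu {pct}%: dissolve {m_p:.3f} g precursor in {vol:.0f} cm3 "
          f"({m_p / vol:.4f} g/cm3)")
print(f"cps 1.5% on {m_z:.0f} g base zeolite: {capsaicin_mass_g(m_z, 1.5):.3f} g capsaicin")
```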
Zeolite modification by capsaicin diffusion and impregnation

Raw ZSM-5 and the Cu-modified zeolite samples (base zeolites) were impregnated with capsaicin using a capsaicin solution with a concentration set such that a nominal capsaicin content (cps %) of 1% or 1.5% was obtained in the final zeolite sample, according to Equation 2:

cps % = (w_cps / w_base zeolite) × 100    (2)

where w_cps is the mass of capsaicin and w_base zeolite is the mass of the base zeolite. The zeolite was suspended in the solution under continuous stirring for 24 h at 303 K in a closed flask in a temperature-controlled rotary evaporator (Biobase, China) to avoid capsaicin degradation and to ensure the diffusion of capsaicin throughout the zeolite pore network. The excess solvent was subsequently removed under vacuum at 303 K using the rotary evaporator. The modified samples were dried for 48 h at 303 K in an oven (WGLL-BE, FAITHFUL) and stored in a desiccator until further use. This method was designed to incorporate capsaicin on the external zeolite surface and encapsulate the organic molecules within the zeolite pore system. The zeolite samples were labeled Z40_Cuxxcpsyy, where "xx" refers to the calculated Cu % and "yy" to the calculated cps %.

Physicochemical characterization

Surface and textural properties

The textural properties of the generated materials were analyzed using nitrogen adsorption and desorption isotherms at 77 K with a NOVA 1000e analyzer (QuantaChrome, USA). The effect of modification on the surface characteristics was identified using the Brunauer-Emmett-Teller (BET) method. The pore size distribution was determined using the Horváth-Kawazoe (HK) method, and the volume of micropores and mesopores was determined using the Barrett-Joyner-Halenda (BJH) method. Samples for these tests were degassed at 303 K for 20 h to prevent capsaicin modification within the pores.

Scanning electron microscopy

The topography of the samples was analyzed via scanning electron microscopy (SEM) using an SU-3500 microscope (Hitachi, Japan) operated at 10.0 kV under high vacuum (30 Pa). Images were collected at scales from 5 to 100 μm. Energy-dispersive spectroscopy (EDS) together with electron microscopy verified the incorporation of copper nanoparticles into the zeolite structure.

X-ray diffraction

X-ray diffraction (XRD) patterns were acquired using a Bruker Endeavor D4/MAX-B diffractometer operated at 20 mA and 40 kV using a copper cathode lamp (λ = 1.541 Å). The 2θ sweep was set from 4° to 80° in steps of 0.02° with a time interval of 1 s.

Surface characterization by Fourier-transform infrared spectroscopy

Fourier-transform infrared (FTIR) spectra of the zeolite and capsaicin were acquired using a Cary 630 FTIR spectrometer (Agilent Technologies, Santa Clara, CA, USA). The analysis was performed in transmittance mode by averaging 30 spectra obtained with the attenuated total reflectance (ATR) sampling technique. The spectra were recorded in the range between 4000 and 500 cm⁻¹.

Bacterial growth inhibition study

The biocidal effects of the generated materials were analyzed using the Kirby-Bauer method. The marine bacteria Idiomarina loihiensis UCO25, Pseudoalteromonas sp. UCO92, and Halomonas boliviensis UCO24 were used to analyze the effect of the biocidal materials against species commonly found to form microfouling films [35,36]. This method allows the determination of the sensitivity of a microorganism to a specific agent, with the absence or presence of an inhibitory zone around the biocidal material identifying bacterial sensitivity [37].
Bacterial growth plates were prepared by initially culturing the I. loihiensis, Pseudoalteromonas sp., and H. boliviensis strains on marine agar for 48 h. A bacterial colony was then transferred to marine broth and incubated for a further 48 h. The bacterial suspension was adjusted to a turbidity of 0.5 McFarland standard, equivalent to 1.5 × 10⁸ CFU/mL. A 1/10 dilution was prepared, resulting in a final inoculum concentration of 1 × 10⁷ CFU/mL. Pellets with diameters of 13 mm and thicknesses of 2 mm containing the biocidal materials (0.2 g) were prepared. Prior to each experiment, the pellets were sterilized under UV light for 20 min per side and deposited onto the bacterial growth plates using alcohol-sterilized forceps. The pellets were deposited equidistant from each other at a distance of 24 mm from the center of the discs. The bacterial growth plates were then incubated for 48 h at 10 °C. Each sample was analyzed in triplicate. Statistical analysis of the samples was performed using Fisher's least significant difference (LSD) test on the inhibition halos formed, which determined the effects of the formulations at different concentrations of the biocides.

Physicochemical and surface properties

The modified samples were analyzed to elucidate the effectiveness of the treatments and to evaluate their effects on the physicochemical and surface properties. Visual inspection of the samples after modification with different levels of copper revealed different tones (Fig. S1), indicating that copper exchange occurred from the precursor solution to the zeolite samples. The intensity of the blue hue corresponded to the concentration of the precursor solution.

Scanning electron microscopy

The morphology of the raw and modified ZSM-5 zeolites is depicted in Fig. 1. SEM images at the 20 μm scale (Fig. 1 (a, c, e, g)) show the general morphology of the samples, and magnifications at the 10 μm scale depict the individual zeolite granules (Fig. 1 (b, d, f, h)). SEM images of the raw zeolite (Fig. 1 (a)) revealed the presence of well-defined and uniform hexagonal platelet-shaped zeolite particles with a lamellar structure; individual granules with a mean diameter of 15 μm can also be observed (Fig. 1 (b)). Similar results were reported in previous studies using ZSM-5 zeolites [38,39]. Similarly, the presence of supported particles in the porous material (Fig. 1 (c, e, g)) provides clear evidence that the mineral retains its characteristic properties, suggesting that wet impregnation does not affect the external structure of the support. However, as the copper concentration increases, copper oxide particles appear on the outer surface of the zeolite (Fig. 1 (d, f, h)). Moreover, in the samples with 8% copper loading (Fig. 1 (g, h)), the formation of larger copper oxide particles can be observed; these particles could be hindering access to the microporous surface. In this sense, the use of copper concentrations higher than 8% could be unfavorable. Such results could also be attributed to the modification route applied. In this case, copper is not only added as a compensating cation, but bulk copper is also deposited within the porous structure, compromising the dispersion of the particles and the surface area.
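Returning to the bacterial assay described above, the Fisher LSD comparison on inhibition-halo diameters can be sketched as follows; the halo values below are illustrative placeholders, not measured data, and the pooled-variance LSD follows the textbook definition rather than any specific software output.

```python
import numpy as np
from scipy import stats

# Illustrative halo diameters (mm), three replicates per formulation.
groups = {
    "Z40Cu2": [14.1, 13.8, 14.5],
    "Z40Cu4": [16.2, 15.9, 16.5],
    "Z40Cu8": [18.0, 17.6, 18.3],
}
data = [np.asarray(g, dtype=float) for g in groups.values()]
names = list(groups)
k = len(data)
n_total = sum(len(g) for g in data)

# Pooled within-group variance (mean square within) used by the LSD.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in data)
df_within = n_total - k
ms_within = ss_within / df_within

f_stat, p_overall = stats.f_oneway(*data)   # overall ANOVA gate before pairwise LSD
t_crit = stats.t.ppf(0.975, df_within)      # alpha = 0.05, two-sided
print(f"ANOVA: F = {f_stat:.2f}, p = {p_overall:.4f}")

for i in range(k):
    for j in range(i + 1, k):
        lsd = t_crit * np.sqrt(ms_within * (1 / len(data[i]) + 1 / len(data[j])))
        diff = abs(data[i].mean() - data[j].mean())
        verdict = "significant" if diff > lsd else "not significant"
        print(f"{names[i]} vs {names[j]}: |diff| = {diff:.2f} mm, "
              f"LSD = {lsd:.2f} mm -> {verdict}")
```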
EDS analysis confirmed the presence of Cu in the modified zeolites. As expected, different Cu contents were observed in the samples, corresponding to the Cu concentration in the precursor solution. The EDS spectra of the pure (unmodified) zeolite demonstrate the presence of the main structural elements: oxygen, silicon, and aluminum (Fig. 2(a-d)). This confirmed that the raw materials were free of impurities. The EDS spectra quantitatively detected the presence of copper particles in the porous medium of the modified zeolites. The existing literature suggests that the wet impregnation method guarantees extensive deposition and exchange of materials in the porous medium [40]. The Cu content of each sample is presented in Table 1.

X-ray diffraction

Fig. 3 shows the XRD patterns of the copper and capsaicin additives supported on zeolites. By comparison with the XRD pattern of pure CuO, the diffraction peaks at 2θ = 35.5° and 38.7° in the XRD pattern of Cu/ZSM-5 were assigned to the (−111) and (111) reflections of the CuO phase, respectively, confirming that CuO was present in the ZSM-5. The peaks observed in the region between 10° and 30° in the pattern of Cu/ZSM-5 were essentially identical to those of pure ZSM-5, confirming that the support retained its primary structure after the loading process. Similarly, the XRD patterns of the samples (Fig. 3) show well-crystallized materials corresponding to the MFI structure of the ZSM-5 zeolite and the monoclinic crystalline structure of CuO (JCPDS No. 85-1326).

Surface area determination by nitrogen adsorption

The nitrogen adsorption isotherms of the samples modified by metal exchange and capsaicin were obtained at 77 K (Fig. 4(a and b)). All samples exhibit a Type I isotherm according to the IUPAC classification [41]. This behavior was attributed to the microporous structure of the support and the formation of a monolayer of adsorbed gas on its surface [42], which is consistent with the regular nanopore structure of ZSM-5-type zeolites.

The specific surface area of the zeolite decreased with the incorporation of Cu particles, as determined by BET analysis (Fig. 4(a) and Table 2); these results are in good agreement with those of earlier studies [27]. The BET specific surface areas of the capsaicin-modified Z40CPS1 and Z40CPS1,5 samples showed a greater reduction than those of the samples with incorporated copper (Fig. 4(b) and Table 2). This result was attributed to the molecular size of capsaicin (1.759 nm), which is larger than that of the copper particles. The mesopore diameter obtained through the BJH method showed that the pore distribution was not affected by the modifications, with pore sizes ranging from 3 to 4 nm in all samples, which evidences the combined microporous and mesoporous structure of the zeolites. Similarly, the pore volume determined using the HK method exhibited a decreasing trend at higher concentrations of incorporated copper; however, the samples that incorporated capsaicin showed a greater reduction in pore volume (Table 2). The latter was attributed to the fact that the biocidal materials are not only supported on the external surface but also enter the pores and cavities of the zeolite.
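The peak assignments quoted in the XRD discussion above can be cross-checked against interplanar spacings via Bragg's law. The following is a minimal sketch (the helper name is ours; λ = 0.15406 nm is the standard Cu Kα1 wavelength for the copper lamp mentioned in the methods):

import math

def d_spacing(two_theta_deg, wavelength_nm=0.15406):
    """Bragg's law, n * lambda = 2 * d * sin(theta), with n = 1."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_nm / (2.0 * math.sin(theta))

# CuO reflections reported above
for tt in (35.5, 38.7):
    print(tt, round(d_spacing(tt), 4), "nm")  # ~0.2527 and ~0.2325 nm

The resulting spacings of roughly 0.253 and 0.233 nm are the values expected for the (−111) and (111) planes of monoclinic CuO, consistent with the assignment to the CuO phase.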
These results suggest that the reduction in the surface area of the zeolite support materials is due to the blockage of a part of the zeolite pore network by the incorporated materials. In this zeolite, as reported in Table 2, most of the surface area corresponds to pores with diameters below 2 nm. Such results agree with those reported in the literature for MFI framework types, such as zeolite ZSM-5 [43]. However, BJH analysis shows that there are pores between 2 and 20 nm, with a pore size of around 3.3 nm. The mesopore size distribution can be observed in Fig. S2. The results obtained here suggest that, even though the mesopores do not provide most of the surface area, their presence at low concentration makes the zeolite suitable for the incorporation of capsaicin on its surface. Specifically, BJH analysis shows that the Z40 sample possesses around 15 m²/g of pores with diameters from 2 to 20 nm. Once copper and capsaicin are supported on the zeolite, those pores are no longer open to nitrogen diffusion in the adsorption test. Such results suggest that capsaicin molecules are encapsulated within the zeolite pores. The interaction of the incorporated materials with encrusting organisms could therefore be subject to diffusional limitations for the materials supported within the pore system, mainly after the incorporation of the organic material.

Fourier transform infrared spectroscopy

The FTIR spectrum of the ZSM-5 zeolite shows the expected characteristic fingerprint in the range of 1600-500 cm⁻¹ (Fig. 5(a)) [38,39,42]. The bands located between 2500 and 2000 cm⁻¹ were attributed to the presence of environmental CO₂ [41]. The spectrum of natural capsaicin (95% purity) shows bands corresponding to aminoacidic bonds (N-H) at 3306 cm⁻¹, aliphatic stretching vibrations (C-H) in the range of 2925 to 2858 cm⁻¹, and carbonyl stretching vibrations at 1627 cm⁻¹. Additionally, stretching vibrations (C-C) were detected in the range of 1551-1514 cm⁻¹, and out-of-plane C-H bending vibrations were observed between 870 and 805 cm⁻¹, attributed to the vibration of the aromatic ring (Fig. 5(b)).

In the spectra of the metal-modified zeolite samples Z40Cu2, Z40Cu4, and Z40Cu8 (Fig. 6(a)), the band at 1217.51 cm⁻¹ is redshifted by 2.4 cm⁻¹, 5.5 cm⁻¹, and 6.1 cm⁻¹, respectively, relative to that in the IR spectrum of the support material. Similarly, the band located at 1050.63 cm⁻¹ in the spectrum of Z40 is shifted to lower wavenumbers as the copper concentration increases. Similar studies by Razavi and Loghman Estarki suggested that such changes are attributable to the distortion of the zeolite crystal lattice caused by the nanoparticles and to the interaction between the nanoparticles and the zeolite matrix produced by electrostatic forces arising from the negative charges of the support matrix [32].

The FTIR spectra of the capsaicin-modified zeolite samples confirmed the presence of organic molecules in the support matrix through the appearance of a new peak in the region between 876 and 874 cm⁻¹ (Fig. 6(b)).
This band is associated with the out-of-plane C-H bending vibrations of the aromatic group of the capsaicin molecule [44-46]. Furthermore, the wavenumber of this peak varies according to the concentration of capsaicin, being centered at 876.35 cm⁻¹, 875.73 cm⁻¹, and 874.82 cm⁻¹ in the spectra of Z40CPS1, Z40CPS1,5, and Z40CPS5, respectively. This trend was confirmed using a zeolite sample modified with 5% capsaicin (Z40CPS5). The absence of other vibrational modes of capsaicin may be explained by its interaction with the zeolite, suggesting that it is adsorbed on the surface of the support. The spectra of the samples modified with both copper and capsaicin verify that organic molecules can be incorporated into the support medium even in the presence of copper nanoparticles (Fig. 7), owing to the large surface area of the copper-modified zeolite samples, in which the surface of the support is unsaturated, thereby allowing the subsequent deposition of capsaicin.

Computational measurement of capsaicin molecular size

Previous studies concerning the use of capsaicin refer to its chemical properties but not its molecular size; however, this information is of great importance because the size of a molecule determines whether it diffuses within the pores and channels of the zeolite or is only deposited on the outer surface, and it therefore governs the physicochemical modification of the zeolite. The Avogadro software 1.2.0 was used to approximate the molecular size of capsaicin. The molecular geometry of the capsaicin molecule (IUPAC name: 8-methyl-N-vanillyl-6-nonenamide) was optimized according to low-energy conformation criteria by searching for a global minimum energy using the Merck molecular force field method (MMFF94). Using the obtained geometry, the maximum stable length of the capsaicin molecule was determined by considering the size of each atom (H, C, O, and N) and the lengths of the bonds, which can be single or double [47]. This computational analysis established the molecular size of capsaicin, demonstrating that the incorporation of capsaicin into the porous structure of the zeolite is feasible and thereby indicating the suitability of this microporous material as a support. The structure of capsaicin is shown in Fig. 8. This conformation is stable, with an energy of −8.14131 kJ/mol, which indicates the feasibility of this structure. According to this analysis, capsaicin has a maximum possible length of 17.590 Å (1.759 nm), approximately half the mean mesopore diameter of the parent zeolite (3 nm). Kambaine et al. [48] determined by molecular dynamics simulation that the gyration radius of the capsaicin molecule in different solvents ranges from 0.4 to 0.45 nm; this information coincides with the data reported by Graham et al. [49]. The radius of gyration can be considered an indirect measure of conformation and of the molecule's interaction with systems such as proteins or membranes. Additionally, owing to its geometric configuration, the largest distance between the carbon atoms in the cross-section of the aromatic group of the capsaicin molecule is around 0.48 nm [50].

It is worth noting that, in addition to capsaicin, other organic molecules with similar structures have also been successfully incorporated into zeolitic materials. For instance, Rabiee & Rabiee [51] deposited capsaicin onto ZSM-5 zeolite to achieve controlled delivery of capsaicin as a theranostic agent. Afterwards, Musielak et al. [52] supported curcumin molecules on commercial faujasites, and recently, Q. Tan et al. [53] reported the encapsulation of indole, a heterocyclic molecule, in Beta zeolites as an antibacterial material with controllable release properties.
Thus, it is reasonable to assume that capsaicin can be incorporated within the crystalline structure of the zeolite, lodging inside the pores and cavities.

Antifouling assays: the role of the biocide and microporous support in the abatement of marine bacteria

Of the 13 samples analyzed, 10 (76.9%) were active against the marine bacterial strains I. loihiensis UCO25, Pseudoalteromonas sp. UCO92, and H. boliviensis UCO24; the three remaining samples (23.1%) did not inhibit bacterial growth (Tables 3 and 4). To evaluate the effect of the copper and capsaicin concentrations, the samples were compared via grouping and classification studies. Fisher's least significant difference (LSD) test was applied with a reliability level of 95% (Supplementary material, Table S1). The contribution of the support material to the inhibition of bacterial growth was studied first; no visible halo was observed for the unmodified support. This analysis suggested that the zeolite primarily serves as a support for the biocidal compounds. Notably, the copper content of the samples modified with the metallic inhibitor had a significant impact on the inhibition capacity of the material at a 95% confidence level. An increase in the copper concentration resulted in a larger halo, implying a stronger biocidal effect. The bactericidal effect of Cu ions could be attributed to their adherence to the bacterial cytomembrane and cytoderm via electronic interaction, damaging the intracellular proteins of the bacteria. Copper exposure seems to induce drastic changes in the lipid composition of the bacterial cell membrane and to modulate the abundance of proteins functionally known to be involved in copper cell homeostasis [54].

Fig. 9 displays the changes in inhibition halos for I. loihiensis UCO25 (Fig. 9(a)), Pseudoalteromonas sp. UCO92 (Fig. 9(b)), and H. boliviensis UCO24 (Fig. 9(c)) as a function of capsaicin and copper loading. In samples formulated with both capsaicin and copper, an increase in the halo diameter was observed as the copper concentration increased, irrespective of the capsaicin content. Conversely, zeolites formulated with capsaicin alone exhibited a reduction in the halo diameter. It is worth noting that neither pure capsaicin nor the capsaicin-modified samples presented inhibition halos. This phenomenon was attributed to the hydrophobic nature of capsaicin [55]. In this sense, it can be assumed that the samples become more hydrophobic, mainly after the addition of capsaicin molecules. Additionally, the substitution of the strong Brønsted sites associated with the NH₄⁺ compensating cations by weak Cu²⁺ Lewis acid sites could reduce the water adsorption capacity, as those acidic centers are the main water-interacting active sites [27,56].

In addition, among the samples modified with both biocidal materials, Z40Cu8CPS1 formed the largest inhibition halo, followed by Z40Cu8CPS1,5. In this study, pure copper nitrate was not investigated as an inhibitory material, as its biocidal effect has been demonstrated in other investigations [57]. However, the use of pure copper at high concentrations could be harmful to aquatic ecosystems, and its inhibitory effect may be short-lived owing to its fast diffusion. As a water-soluble salt, copper nitrate dissociates into ions and readily diffuses in aquatic media [58].
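The molecular-size estimate described in the computational section above can be approximately reproduced with open-source tools. The sketch below uses RDKit with the MMFF94 force field instead of Avogadro, so it is an analogous but not identical workflow; the SMILES string is the standard one for capsaicin, and a single embedded conformer may come out more folded than the maximally extended geometry quoted in the text.

# Estimating the maximal end-to-end length of capsaicin (cf. 17.590 A above)
from itertools import combinations
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.AddHs(Chem.MolFromSmiles("COc1cc(CNC(=O)CCCC/C=C/C(C)C)ccc1O"))
AllChem.EmbedMolecule(mol, randomSeed=42)   # generate a 3D conformer
AllChem.MMFFOptimizeMolecule(mol)           # MMFF94 geometry optimization
xyz = mol.GetConformer().GetPositions()     # (n_atoms, 3) array in angstroms
length = max(np.linalg.norm(xyz[i] - xyz[j])
             for i, j in combinations(range(len(xyz)), 2))
print(f"max interatomic distance: {length:.2f} A")  # on the order of 17-18 A

Scanning several random seeds and keeping the largest value would approximate the "maximum stable length" criterion used in the text more closely than a single conformer does.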
Fisher's analysis of these samples showed that modification of the 8% w/w copper material with 1% w/w or 1.5% w/w capsaicin did not produce a significant difference. In contrast, the assays using the 2%, 4%, and 8% w/w copper-modified samples showed significant differences regardless of the incorporated capsaicin content, at a 95% confidence level (Supplementary material, Table S1). Capsaicin can also block contact between the copper cations and the aqueous medium. This is in good agreement with the results obtained from the surface characterization, in which a reduction in the surface area of the samples containing capsaicin was observed. Capsaicin molecules were incorporated in the pores of the zeolite and therefore blocked the diffusion of other molecules and hindered the contact between the bacteria and the copper sites. However, even though capsaicin did not yield significantly different inhibition, comparative analysis revealed that samples Z40Cu8, Z40Cu8CPS1, and Z40Cu8CPS1,5 formed halos with the largest diameters and therefore had the strongest biocidal effect. This suggests that, as has been reported [59], capsaicin decreases the growth rate of bacterial strains and some other microorganisms [60]; however, its dispersion in the medium is limited. The mechanism by which capsaicin inhibits bacterial growth could be attributed partially to its protein-inhibiting qualities and to the enhancement of the surface hydrophobic properties [60]. Similar studies conducted with other bacterial species, specifically Pseudomonas aeruginosa, have shown that even though capsaicin does not migrate to the water medium, it can be released from the support surface to kill bacteria when the pH of the local environment decreases, a change triggered by the reproduction of bacteria in the environment [22]. Similarly, Guo et al. showed a lack of direct action of capsaicin against some bacterial strains, but a potent synergistic action in the case of combinatory use of these substances in a dose-dependent manner was also demonstrated [61].

The results obtained here suggest that copper and capsaicin act as the primary and reinforcement biocides, respectively, and that the zeolite acts as a support, controlling the delivery of the biocides. The copper particles, located in ZSM-5 pores whose configuration favors controlled delivery, can diffuse and interact with the surrounding microorganisms, acting as the primary biocide. On the other hand, the capsaicin molecules, encapsulated mainly in the mesopores, diffuse slowly in the presence of water owing to their hydrophobic nature but can act as a reinforcement biocide that extends the useful life of the material. Nevertheless, further studies must be conducted to establish the specific effects of both biocidal agents and to determine how to control their release into water media, including controlled delivery assays, before the studied materials can be used in real applications. Additionally, studies using different types of zeolites with varying porosities, such as Y zeolites, or using mesoporous materials such as MCM-41, should be conducted to study the influence of larger pores, such as the supercages of Y zeolite, on biocide support and release.
Conclusions

Zeolites modified with copper nanoparticles and capsaicin molecules are effective as biocidal agents. Copper nanoparticles inhibit the growth of bacteria, while capsaicin acts as a reinforcing biocide. Similarly, the microporous structure of the zeolite increases the contact surface area and the retention of the biocidal agents, thus enhancing their interaction with microorganisms. Modification with copper nanoparticles and capsaicin molecules affects the surface physicochemical characteristics of synthetic zeolites. The incorporation of copper nanoparticles reduces both the surface area of the zeolite and the volume of the micropores and mesopores of the support. However, incorporating the large, hydrophobic capsaicin molecule, mainly in the larger pores of the zeolite, affects the surface area and pore volume of the support to a greater extent, limiting the diffusion of water into the pores and restricting the interaction between the bacteria and the active compounds. The presence of higher concentrations of capsaicin could limit the access and interaction of some of the metallic particles with microorganisms.

Table 1. Composition of the samples modified by metal exchange.

Table 2. Surface properties of natural and modified zeolites. Columns: Sample; Specific surface area (m²/g), determined by the BET method; Mesopore surface area (m²/g), determined by the BJH method; Mesopore diameter (nm), determined by the BJH method; Pore volume (cm³/g), determined by the HK method.
Pronounced and unavoidable impacts of low-end global warming on northern high-latitude land ecosystems

Arctic ecosystems are particularly vulnerable to climate change because of Arctic amplification. Here, we assessed the climatic impacts of low-end, 1.5 °C and 2.0 °C global temperature increases above pre-industrial levels on the warming of terrestrial ecosystems in the northern high latitudes (NHL; above 60 °N, including pan-Arctic tundra and boreal forests) under the framework of the Inter-Sectoral Impact Model Intercomparison Project phase 2b protocol. We analyzed the simulated changes of net primary productivity, vegetation biomass, and soil carbon stocks of eight ecosystem models that were forced by the projections of four global climate models and two atmospheric greenhouse gas pathways (RCP2.6 and RCP6.0). Our results showed that considerable impacts on ecosystem carbon budgets, particularly primary productivity and vegetation biomass, are very likely to occur in the NHL areas. The models agreed on increases in primary productivity and biomass accumulation, despite considerable inter-model and inter-scenario differences in the magnitudes of the responses. The inter-model variability highlighted the inadequacies of the present models, which fail to consider important components such as permafrost and wildfire. The simulated impacts were attributable primarily to the rapid temperature increases in the NHL and the greater sensitivity of northern vegetation to warming, which contrasted with the less pronounced responses of soil carbon stocks. The simulated increases of vegetation biomass by 30-60 Pg C in this century have implications for climate policy such as the Paris Agreement. Comparison between the results at the two warming levels showed the effectiveness of emission reductions in ameliorating the impacts and revealed unavoidable impacts for which adaptation options are urgently needed in the NHL ecosystems.

Introduction

Terrestrial ecosystems, especially in the northern high latitude (NHL) area, are predicted to undergo substantial impacts associated with changes of land use and climate in the next several decades (Warszawski et al 2013, IPCC 2014). Such changes in terrestrial ecosystems are likely to influence human societies through the deterioration of ecosystem services such as climate regulation, recreational services, and the provision of foods and goods (Malinauskaite et al 2019). Moreover, the fact that changes in ecosystem structures and functions are highly likely to exert climatic feedbacks on the human-induced warming (e.g. Arora et al 2013) demands that we understand and predict the ecosystem responses to global change. Ecosystems in the NHL region will be exposed to climatic warming greater than the global average (IPCC 2013, Post et al 2019) and may thus be strongly impacted. Biological processes such as plant leaf phenology, primary production, and soil decomposition in the temperature-limited environments of the NHL are particularly sensitive to climatic warming (McGuire et al 2009, Richardson et al 2018). One of the characteristics of changes in terrestrial ecosystems is that they occur over temporal scales that range from instantaneous (e.g. photosynthetic gas exchange) to centuries or millennia. Examples of the latter include vegetation succession (Hickler et al 2012), tree migration (Neilson et al 2005), and soil development.
Transformation of carbon cycling in the NHL region has attracted particular attention as an early warning of climatic impacts on ecosystems and in relation to climate-carbon cycle feedbacks. Changes in northern plant productivity have been deduced from the amplification of the seasonal cycle of atmospheric CO₂ concentrations (e.g. Graven et al 2013). Also, greening trends of northern vegetation have been detected by satellite observations for decades (Myneni et al 1997, Goetz et al 2005, Piao et al 2020). In contrast, soils in the NHL, especially perennially frozen soils, are likely to be degraded by physical and biological decomposition related to the rapid temperature rise (Schuur et al 2015, Crowther et al 2016). It is uncertain whether the NHL is functioning as a net carbon sink or a source and how the system is changing. Nevertheless, the presence of large carbon stocks in the NHL region (e.g. 1100-1500 Pg C in the permafrost region; Hugelius et al 2014) suggests the potential for a strong climate-carbon cycle feedback that will likely act as a positive climate feedback (Schuur et al 2015). The likely interactions of ecological processes such as vegetation demography and disturbances with climatic warming will increase the risk of transgressing tipping points for boreal forest dieback and permafrost thawing in this region (Lenton et al 2008, Schaphoff et al 2016, Natali et al 2019). In the end, the balance between the positive effect of increasing productivity and the negative effect of soil warming will determine future changes of the NHL carbon balance.

At the 21st Conference of the Parties of the United Nations Framework Convention on Climate Change, a milestone agreement on global warming mitigation, the Paris Agreement, was negotiated and agreed upon by 196 state parties. The goal of the agreement was to keep the global temperature rise well below 2 °C (hopefully 1.5 °C) above pre-industrial levels. To reinforce the scientific background of these temperature targets, intensive assessments have been conducted for various sectors such as water resources, agricultural production, and human health (e.g. Jahn 2018, Schleussner et al 2018). Special reports on the 1.5 °C/2.0 °C climate targets and associated reports focusing on terrestrial, ocean, and cryospheric systems have been published by the Intergovernmental Panel on Climate Change (IPCC 2018, 2019). These reports address various aspects of natural and human systems and demonstrate a higher risk of negative impacts at 2 °C of warming than at 1.5 °C or less. Several studies have assessed the NHL region, but they have usually focused on high-end global warming projections (Ito et al 2016, McGuire et al 2018). More specific and in-depth analyses using the latest available low-end climate projections are required to better understand the climatic impacts in NHL areas so that the effectiveness and limitations of the Paris Agreement can be adequately discussed in terms of climate policy. Several analyses have been conducted in the NHL region, but their reliability and uncertainty differ among sectors because of uneven scientific understanding and data availability. Impacts on biological systems and the related risks are, compared with those on physical systems, even more difficult to evaluate, because biological systems are very heterogeneous and complex (e.g. non-linear responses, acclimation, and interactions among organisms).
This study focused on the impacts of low-end global warming scenarios (1.5 °C and 2.0 °C above pre-industrial temperatures) on NHL ecosystems in a mitigation-oriented world, in accordance with the Paris Agreement. For this purpose, we used output data from eight global vegetation models that contributed to the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) phase 2b and focused on properties related to the carbon cycle. The ISIMIP phase 2b experiments were designed specifically to quantify the impacts of low-end global warming on a mitigation-oriented world using multiple impact models (Frieler et al 2017). Use of these ensembles allowed us to assess the ranges of inter-scenario and inter-model variability. Assessment of drastic and extreme events and of phenomena that unfold on a centennial or longer timeframe was beyond the primary scope of this work. Such an assessment would be better conducted with experiments specifically designed with many ensemble simulations and improved benchmarking models. Our study complements previous work and enabled us to analyze, at regional to global scales, multi-year and multi-decadal phenomena such as time-lagged responses and system transformations that can emerge gradually, especially in ecosystems. Consideration of such issues is highly relevant to policy makers.

ISIMIP2b experiments

The ISIMIP2b experiments were designed primarily to assess the impacts of 1.5 °C and 2.0 °C global warming above pre-industrial levels (Frieler et al 2017). To allow analyses of multiple sectors, the protocol describes several simulations that combine greenhouse gas emission pathways, associated land-use patterns, and climate projections consistent with the representative concentration pathways (RCP) 2.6 and 6.0 (van Vuuren et al 2011). In addition to a pre-industrial control experiment (in this study, used only for checking stability after initialization), the models performed historical, future, and extended future (2100-2299) simulations. Both RCPs assumed the middle-of-the-road socioeconomic pathway, SSP2 (Fricko et al 2017), but differed with respect to climate stabilization targets and mitigation policy. The RCP2.6 scenario represents a mitigation-oriented scenario, in which the degree of global warming may not exceed 2.0 °C above pre-industrial levels for an extended period of time, though it may overshoot that target temporarily. To assess long-term, more gradual impacts, the climate projections for RCP2.6 were extended to 2299. RCP6.0 represents a scenario with limited mitigation, in which the degree of global warming may well exceed 2.0 °C. This scenario allowed us to assess rapid global warming impacts and to put the low-end warming impacts into the context of a wider risk analysis. This study used the simulation outputs from the ISIMIP global vegetation models ('biome models', described in the next section) for the historical and future projection periods. Most biome models were integrated at a spatial resolution of 0.5° × 0.5° in latitude and longitude and driven by bias-corrected data from as many as four global climate models (GCMs) to cover the range of inter-model variability: GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, and MIROC5 (Frieler et al 2017; see figure S1, available online at stacks.iop.org/ERL/15/044006/mmedia, for their global mean temperatures). The extended climate projections for the period 2100-2299 were supplied by only the HadGEM2-ES, IPSL-CM5A-LR, and MIROC5 GCMs.
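The warming-level sampling used in what follows (a level is deemed reached when the 31-year running mean of global temperature relative to pre-industrial exceeds it; see the Results section) can be made concrete with a short sketch. Everything here is illustrative: the temperature series and baseline are synthetic placeholders, not ISIMIP forcing data.

import numpy as np

def crossing_year(years, tas_global, baseline, level=1.5, window=31):
    """First year whose centered `window`-yr running-mean warming
    relative to the pre-industrial `baseline` exceeds `level` (K)."""
    anomaly = tas_global - baseline
    kernel = np.ones(window) / window
    running = np.convolve(anomaly, kernel, mode="valid")  # centered means
    centers = years[window // 2 : len(years) - window // 2]
    above = np.nonzero(running > level)[0]
    return int(centers[above[0]]) if above.size else None

# Hypothetical demo series: linear warming of 0.02 K/yr starting in 1900
years = np.arange(1861, 2100)
tas = 13.6 + 0.02 * np.clip(years - 1900, 0, None)  # placeholder GMST, deg C
print(crossing_year(years, tas, baseline=13.6))      # -> ~1975 for this toy series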
The EartH2Observe, WFDEI, and ERA-Interim climate data were merged for the period from 1979 to 2013 and used to correct the bias of the climate models (Lange 2018). In the historical period, atmospheric CO₂ and land-use conditions changed annually in most models, except for one model (CLM4.5) that used the land-use conditions of 2005 throughout its simulation of the historical period, because the model could not account for transient changes in the extent of irrigation. In the future period, atmospheric CO₂ concentrations varied on the basis of the RCP2.6 and RCP6.0 scenarios. In the NHL regions, future land-use change was predicted to be trivial; hence, for simplicity, we assumed fixed land-use conditions after 2005 (ISIMIP2b Experiments II and III described in Frieler et al 2017).

Biome models

The eight biome models differ in their conceptualization of ecosystem structure, parameterization of functional processes, and environmental responsiveness, but as the phase 2a benchmarking revealed, they on average captured the present terrestrial carbon budget (figure S2; table S2). Primarily because of run-time constraints, not all models were driven by all four GCMs. Nevertheless, a total of 52 combinations of biome models and climate models (available as of September 2019) were used in this study. The use of IPSL-CM5A-LR climate projections to force all biome impact models for both the RCP2.6 and RCP6.0 scenarios allowed us to conduct an inter-model comparison across the eight models for this GCM. The submission of output data from five biome models for all four GCM projections allowed us to conduct an inter-climate comparison across the full range of GCMs. Sixteen sets of simulation results were available for the extended period.

Analyses

We selected for the analyses three variables that represent ecosystem properties and are relevant to fundamental supporting and regulating ecosystem services (Millennium Ecosystem Assessment 2005): annual net primary production (NPP, kg C m⁻² yr⁻¹), vegetation biomass (CVeg, kg C m⁻²), and soil carbon stock (CSoil, kg C m⁻²). We used area-weighted grid-cell average values of these variables. NPP represents ecosystem functional activity and responds directly to environmental change. CVeg, a metric of vegetation height and density, represents vegetation development; its response to cumulative environmental change is based on the turnover of carbon in vegetation pools. CSoil is expected to represent the role of the soil and its effective depth, which are closely related to ecosystem properties (e.g. nutrient- and water-holding capacities). Changes in CVeg and CSoil are key indicators for assessing the carbon balance of the ecosystem. We used the benchmarking results of the ISIMIP2a biome models (e.g. Chang et al 2017) to focus on changes during the 21st century that could be simulated by the present models. The NHL grid points north of 60 °N were extracted from the global simulation results for the following analyses.

To clarify the regional characteristics and to separate the effects of multiple factors in a simplified manner, we adopted a conventional factorial approach. The Φ index is defined as follows:

$$\Phi = \Delta_{\rm NHL} / \Delta_{\rm global}, \qquad (1)$$

where Δ_NHL is the regional mean change and Δ_global is the global mean change. In both cases the changes are based on comparisons with the baseline present state (centered around the year ∼2000).
The Φ index can be evaluated at an arbitrary time, such as the year when global warming reaches 1.5 °C, and indicates how severely the NHL region is influenced by climate change relative to the global average. The characteristics of the changes in the NHL region may result from climatic and biological factors, which may interact in a complicated way. For simplicity, we assumed that Φ could be expressed as the product of climatic and biological terms as follows:

$$\Phi = \Phi_T \cdot \Phi_B, \qquad (2)$$

where Φ_T is a temperature amplification factor and Φ_B is an ecosystem response factor. The term Φ_T is defined as the ratio of the temperature warming in the NHL (ΔT_NHL) to the global (land and ocean) temperature warming (ΔT_global) above pre-industrial temperatures. When Φ_T > 1, the implication is that amplified warming occurred in the NHL. The term Φ_B is defined as the ratio of the change of the ecosystem in the NHL to the corresponding global change. When Φ_B > 1, the implication is that the temperature sensitivity is higher for the carbon variables in the NHL than for the corresponding global variable. By definition and from equation (2), the biological term can be obtained as follows for the case of NPP:

$$\Phi_{B\text{-}NPP} = \frac{\Delta {\rm NPP}_{\rm NHL}/\Delta T_{\rm NHL}}{\Delta {\rm NPP}_{\rm global}/\Delta T_{\rm global}}. \qquad (3)$$

Note that ΔNPP_NHL (% per °C), ΔNPP_global (% per °C), and the corresponding terms for CVeg and CSoil were compared over the same period of time to avoid artifacts associated with different levels of atmospheric CO₂ concentrations.

For further assessment, two ancillary analyses were conducted. First, we investigated long-term changes in the NHL ecosystem carbon budget during the extended projection period from 2100 to 2299. This analysis was expected to reveal the minimal response of northern ecosystems when climate warming is suppressed to the target level of the Paris Agreement. Second, to demonstrate the impacts on multiple sectors, we conducted an analysis that took into account permafrost change in relation to biome change. Thawing of permafrost is a focal problem associated with NHL warming, because it affects the habitats of natural organisms and human society. Also, permafrost thawing is likely to enhance the decomposition of carbon released from frozen soils and thereby lead to emissions of greenhouse gases to the atmosphere.

Results

The rate of temperature increase in the NHL by the end of the 21st century is projected to be much higher than the global mean, irrespective of climate model or scenario. The 31-year running mean of ΔT_global exceeded 1.5 °C between ca. 2010 and ca. 2051, depending on the climate model, whereas ΔT_NHL exceeded 2.0 °C by the same time (figures 1(a) and (b)). As shown in figures 1(c) and (d), the future temperature rise will occur unevenly over Earth's surface. Most land areas will undergo greater warming than the ocean at similar latitudes, and greater warming will occur at higher latitudes. Remarkably, ΔT_global determined by GFDL-ESM2M under RCP2.6 did not exceed 1.5 °C by the end of the 21st century. Given the close linear relationships between ΔT_global and ΔT_NHL (figure 1(b)), we estimated Φ_T during the period 1950-2099 to range between 1.81 and 2.31 (on average, 2.07) across all climate projections. Close inspection revealed that the relationship between ΔT_global and ΔT_NHL was approximately linear, but the slope of the relationship depended on the scenario; table 1 shows Φ_T values at the 1.5 °C and 2.0 °C warming levels. The eight biome models simulated an increase of NPP under both the 1.5 °C and the 2.0 °C warming scenarios (figures 2(a) and (d)).
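Before turning to the detailed numbers, the factorial decomposition in equations (1)-(3) can be summarized in a few lines of code; the input values below are illustrative placeholders, not model output.

def phi_factors(d_nhl, d_global, dT_nhl, dT_global):
    """Amplification factors for a carbon-cycle variable (e.g. NPP).

    d_* are fractional changes (e.g. 0.22 for +22%) over the same period;
    dT_* are warming levels above pre-industrial (K).  Returns
    (Phi, Phi_T, Phi_B), with Phi = Phi_T * Phi_B as in equation (2)."""
    phi = d_nhl / d_global                              # equation (1)
    phi_t = dT_nhl / dT_global                          # temperature term
    phi_b = (d_nhl / dT_nhl) / (d_global / dT_global)   # equation (3)
    return phi, phi_t, phi_b

# Illustrative numbers: +22% NPP in the NHL vs +10.7% globally, with
# 2.5 K regional warming at the 1.5 K global warming level
print(phi_factors(0.22, 0.107, 2.5, 1.5))  # -> (~2.06, ~1.67, ~1.23)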
The magnitude of the change differed between the globe and the NHL; see figures S3 and S4 for the results of individual cases. If ΔT_global was projected to equal 1.5 °C, global NPP increased by 5.3%-17.3% (on average, 10.7%) from mid-20th century levels, whereas the NPP of the NHL increased by 12.5%-38.2% (on average, 22.0%). The biome models consistently (i.e. with high probability) simulated the greatest increase of NPP for a large part of NHL terrestrial ecosystems (figures S5(a), (b) and S6(a), (b)). As a result, Φ_B-NPP for all models equaled 1.32±0.56 for RCP2.6 and 1.38±0.43 for RCP6.0. The corresponding Φ_NPP given by equation (2) equaled 2.18±0.93 and 2.22±0.69, respectively (mean±standard deviation among the models; see tables 1 and S3 for medians). The differences in the simulated results between the two RCP scenarios were small. The relative changes of NPP in the NHL were, on average, more than double the global mean and were attributable to the interplay of climatic and biological factors. The biological factor Φ_B-NPP became larger under the ΔT_global = 2.0 °C scenario; in that case Φ_B-NPP values were 1.92±0.89 for RCP2.6 and 1.66±0.91 for RCP6.0 (mean±standard deviation of all models). These increases of Φ_B-NPP indicated an accelerating sensitivity of NPP in the NHL to global warming.

Similarly pronounced response patterns were also found in the simulated CVeg of the NHL (figures 2(b), (e)) when one outlier result by VEGAS was excluded. If ΔT_global equaled 1.5 °C, global CVeg increased by 3.9%-15.2% (on average, 7.3%) from mid-20th century levels, whereas the CVeg of the NHL increased by 8.5%-30.4% (on average, 21.1%).

[Table 1. Amplification factors Φ (equations (1) and (2)) of northern high-latitude lands above 60 °N for the indicated temperature changes, and the simulated ecosystem carbon budgets at the 1 °C, 1.5 °C, 2 °C, and 2.5 °C global mean temperature warming levels predicted by the IPSL-CM5A-LR global climate model. Medians and standard deviations (SD) among the seven model results are shown.]

The fact that the biological factor Φ_B-CVeg did not change under the ΔT_global = 2.0 °C scenario (table 1) indicated an approximately linear relationship between the vegetation carbon stock in the NHL and global warming. The response patterns were clearly different for CSoil. In that case the model simulations differed widely; they ranged from a large increase to a small decrease (figures 2(c), (f)). Regionally, there was little consistency among the simulation cases in West Siberia to Europe and in interior North America (figures S5(e), (f) and S6(e), (f)). As a result, the model-ensemble response was close to neutral at both the global and NHL scales (figure S3). This was also reflected by Φ_B-CSoil, which did not differ substantially from 1.0 (i.e. the global mean response). The wide range of model-specific Φ_B-CSoil values (−0.25 to 2.89 among models and scenarios) made it difficult to derive a robust outcome from the present simulations.

The difference in global NPP between the two degrees of warming (ΔNPP_2.0-1.5) was 5.3±3.0% of the pre-industrial NPP, whereas in the NHL the corresponding model-average difference was as large as 18.4±8.9% (average of the four climate models under RCP2.6 and RCP6.0; figure 2(d)). The corresponding differences in NHL biomass (ΔCVeg_2.0-1.5) and soil carbon (ΔCSoil_2.0-1.5) were 18.0±9.7% and 1.3±1.8%, respectively (figures 2(e) and (f)). These differences were distributed widely and heterogeneously over the land areas (figures 3(a)-(c)).
For example, West Siberia, Northern Europe, and northern North America gained more productivity and plant biomass than other NHL regions under the 2.0 °C warming scenario. The increases of NPP and CVeg were widely distributed, whereas negative effects such as degradation by warming occurred in only a few percent of NHL areas (figures 3(d)-(f)).

The differences of the biological responses between seasons provided insights concerning the underlying mechanisms and implications for the observational detection of the responses. Figure 4 compares the simulated monthly NPPs during the pre-industrial era and the 1980s with those under the 1.5 °C and 2.0 °C warming scenarios. The enhancement of NPP throughout the growing season caused the summer NPP in June-August to increase by about 30% because of enhanced photosynthetic capacity. When ΔNPP_NHL was calculated based on comparisons with the 1980s (i.e. the beginning of Earth observation by satellite remote sensing), spring and autumn NPPs were also sensitive to climate variability because of the phenological response of vegetation. However, the absolute magnitude of NPP was low in these early and late growing seasons; therefore, the annual change was determined mainly by the summer response.

Extended simulations to the end of the 23rd century (figure S7) highlighted long-term ecosystem responses. Along with the stabilization of atmospheric CO₂ concentration and global warming, the biome models simulated gradual changes of biomass and less conclusive changes in soil carbon stocks. The range of variability among the biome models and climate projections was comparable over time for CVeg but became larger for CSoil in both the global simulations (standard deviation among simulations, from 14.7% in 2100 to 19.9% in 2299) and the NHL simulations (from 13.4% in 2100 to 29.2% in 2299). Several models (LPJ-GUESS, LPJmL, and ORCHIDEE-MICT) showed a 'peak-out' of biomass caused by the overshoot of atmospheric CO₂ concentrations. Also, several models showed continuous (or time-lagged) increases of soil carbon stock, by as much as 10% (i.e. hundreds of Pg C), by the end of the 23rd century. Such gradual responses of terrestrial ecosystems to climate change are important for detecting potential long-term impacts and for considering ecosystem adaptation.

Further implications of the impacts simulated by the biome models were revealed by the changes in permafrost areas. Whereas only a tiny area was subject to permafrost destabilization under the RCP2.6 scenario, considerable destabilization was projected to occur over a vast area (2.7 × 10⁶ km²), mainly in the southernmost areas where permafrost is sporadic, during the late 21st century under the RCP4.5 and 8.5 scenarios (figure S8(a), red area). Interestingly, in these areas, the LPJmL model, which includes a permafrost scheme, simulated declines of CSoil by 2299, whereas the other models, which did not represent dedicated permafrost processes, simulated gradual increases of soil carbon.

Discussion

The results of this study imply that pronounced changes in NHL ecosystems are likely to occur because of a combination of the amplification of the temperature rise in the NHL and the higher-than-global-mean responsiveness of especially NPP and CVeg to increases of temperature and CO₂. The simulated increases of NPP and CVeg, as well as the small changes of CSoil, in the NHL at around the near-contemporary warming level of 1.0 °C (figure 2) are consistent with the observed changes caused by the ongoing temperature rise.
For example, such trends have been apparent in the greening of the land detected by satellite remote sensing during the last decades (Zhu et al 2016; but see Yuan et al 2019 for declining trends of productivity induced by dryness) and in other scenario studies with global vegetation models (Scholze et al 2006, Sitch et al 2008, Gonzalez et al 2010, Warszawski et al 2013, IPCC 2014). The trend of increasing amplitude of the seasonal cycle of atmospheric CO₂ concentrations in the northern latitudes, which can be attributed largely to enhanced photosynthetic activity of NHL vegetation, is also consistent with the simulated enhancements of NPP and CVeg (Forkel et al 2016, Piao et al 2018). Moreover, the increase of carbon stocks in northern ecosystems is consistent with the observed long-term trend of the atmospheric CO₂ inter-hemispheric gradient (Ciais et al 2019). The simulation results of this study imply that these observed terrestrial trends will continue to some extent at warming levels of 1.5 °C and 2.0 °C.

There are ongoing arguments about whether the NHL and surrounding regions will act as a net carbon sink or a source (e.g. Webb et al 2016, Euskirchen et al 2017), because processes with conflicting effects are exerting influences on ecosystems simultaneously. For example, winter CO₂ emissions may be underestimated in current estimates and future projections of the NHL carbon budget (Natali et al 2019). Several long-term monitoring and experimental warming studies have been conducted to estimate future changes in localized areas of the NHL (Bjorkman et al, in press). However, the heterogeneous, somewhat inconsistent results of ecosystem responses to a given magnitude of warming revealed by local field experiments have made it difficult to extrapolate from past observations to the future. The simulated impacts of this study were sometimes inconsistent with typical experimental findings. For example, on the basis of estimates by 98 experts, Abbott et al (2016) stated that total biomass in the Arctic could decrease because of water stress and disturbances such as thermokarst, which are not usually included in the present ecosystem models. Crowther et al (2016) up-scaled the results of soil warming experiments and concluded that warming by 1 °C-2 °C will lead to serious carbon loss from NHL soils. In contrast, the fact that no clear decline of soil carbon was consistently found in the future CSoil simulated by the ISIMIP2b models suggests that a substantial range of uncertainty remains in the carbon stock simulations of present biome models (Friend et al 2014, Tian et al 2015). Vegetation biomass is projected to increase by 32.8±19.2 Pg C and by 63.4±38.9 Pg C under the +1.5 °C and +2.0 °C warming scenarios, respectively. These net carbon uptakes are equal to the amount of anthropogenic CO₂ presently emitted in 3-6 years (Friedlingstein et al 2019). Such a large carbon sequestration by vegetation may imply a significant mitigation potential that would help achieve the goals of the Paris Agreement. Whether the ongoing climatic change will cause the NHL to reach a tipping point (e.g. boreal forest dieback and permafrost thawing) is a critical question in NHL areas, even under the low-end warming scenarios.
The increase of NPP and CVeg simulated in most cases implies (1) that there is a high probability of enhancement of vegetation activity and a low possibility of extensive boreal forest dieback under both the 1.5 °C and 2.0 °C warming scenarios (and even under the 2.5 °C warming scenario, figure 2(e)), or (2) that none of the models used in this study have parameterizations that take into consideration non-linear effects such as shifts in fire regimes, insect outbreaks, and dieback from drought. Indeed, there is recent evidence for an increasing influence and interaction of disturbances such as drought, fire, and insect outbreaks due to climate change (Seidl et al 2017, Hartmann et al 2018). These disturbances could significantly influence the NHL, even if they do not formally cross a tipping point, but they were not covered in detail by the biome models used here. The passive responses of the regional CSoil to the postulated temperature rises might imply a low possibility of extensive soil destabilization. However, we should note that the models used in the present study did not have an accurate scheme of permafrost dynamics with which to capture enhanced thawing under global warming. These tipping elements might be triggered on a wide scale when high-end global warming levels are reached, and we should take account of their spatial heterogeneity to detect symptoms of regime shifts. The emergence of tipping elements therefore depends on the responsiveness of the impact models, and further model constraints are greatly needed to improve research confidence.

The limitations of the present study should be noted. First, the existing biome models are clearly too immature to predict ecological consequences in detail, although the rather robust outcomes across the multiple process-based model simulations presented here still have important general implications. Uncertainties in the simulated carbon stocks have been systematically analyzed previously (Nishina et al 2015, Tian et al 2015), and a large part of the CSoil uncertainty has been attributed to variability in biome model properties. Second, this study focused on long-term and broad-scale changes; therefore, it did not explicitly consider the impacts of extreme events and a changing disturbance regime. Extreme weather conditions and associated disturbances (e.g. droughts accompanied by severe wildfires) would have profound impacts on the ecosystem carbon cycle (Reichstein et al 2013). Nevertheless, the in-depth analyses of climatic impacts across different sectors that are achievable within ISIMIP2b give us many advantages, as demonstrated in this study.

Notably, the Φ_T values obtained in this study imply that limiting the global temperature rise to 1.5 °C rather than 2.0 °C should be more effective in the NHL regions than for the global mean: i.e. the 0.5 °C reduction of global mean temperature would limit regional warming by 0.7 °C-0.9 °C. On the one hand, the difference of the climatic impacts on NPP and CVeg between the 1.5 °C and 2.0 °C scenarios indicated that mitigation efforts could suppress the impacts of an additional 0.5 °C warming. This possibility is most apparent in the NHL regions. On the other hand, the impacts on CSoil simulated by certain models were insensitive to the degree of warming. In terms of climate policy, the ISIMIP will help us to identify effective mitigation and adaptation options in a more informed manner.

Acknowledgments

... Terrestrial Ecology program under project NNH18ZDA001N (award number 80HQTR19T0055).
HT and HS acknowledge support from the US National Science Foundation (award number 1903722). CPOR acknowledges funding from the German Federal Ministry of Education and Research (BMBF, grant no. 01LS1711A).

Author contributions

AI designed the study, conducted the analyses, and drafted the manuscript. CPOR and PC led the ISIMIP2b biome sector coordination. JC, MF, SO, and WT conducted simulations. CPOR, PC, MC, MF, TH, and WT commented on the manuscript. AG also commented on the manuscript from the perspective of the permafrost sector.

Data availability statement

The data that support the findings of this study are openly available at doi:10.5880/PIK.2019.012.
New formulas for amplitudes from higher-dimensional operators

In this paper we study tree-level amplitudes from higher-dimensional operators, including the $F^3$ operator of gauge theory and the $R^2$, $R^3$ operators of gravity, in the Cachazo-He-Yuan formulation. As a generalization of the reduced Pfaffian in Yang-Mills theory, we find a new, gauge-invariant object that leads to gluon amplitudes with a single insertion of $F^3$, and to gravity amplitudes by Kawai-Lewellen-Tye relations. When reduced to four dimensions for given helicities, the new object vanishes for any solution of the scattering equations on which the reduced Pfaffian is non-vanishing. This intriguing behavior in four dimensions explains the vanishing of graviton helicity amplitudes produced by the Gauss-Bonnet $R^2$ term, and provides a scattering-equation origin of the decomposition into self-dual and anti-self-dual parts for $F^3$ and $R^3$ amplitudes.

Introduction and motivations

Higher-dimensional operators in gauge theory and gravity are important for various reasons: they are of phenomenological interest as potential corrections to Yang-Mills and Einstein theory; they can appear in effective actions of open and closed strings, and serve as potential counterterms for UV divergences of loop amplitudes. The simplest gauge-invariant, local operator that one can add to the Yang-Mills action is the $F^3$ operator,

$$F^3 \equiv f^{abc}\, F_{\mu}^{a\,\nu}\, F_{\nu}^{b\,\rho}\, F_{\rho}^{c\,\mu},$$

where $F_{\mu\nu} \equiv F^a_{\mu\nu} T^a$ is the gluon field strength, and $f^{abc} = {\rm Tr}([T^a,T^b]T^c)$ the structure constant of the gauge group. This operator arises as the first correction to the Yang-Mills Lagrangian $F^2 \equiv {\rm Tr}(F_{\mu\nu}F^{\mu\nu})$ in the $\alpha'$-expansion of bosonic open string theory [1]. It is the unique, CP-even, dimension-six operator built from gauge fields, and it is not supersymmetrizable. The amplitudes produced by $F^3$ differ significantly from those produced by higher-dimensional operators in open superstrings. The polarization dependence of the latter is as in the Yang-Mills case, e.g. there are no contractions of the form $(\epsilon \cdot k)^n$ [2-4], but amplitudes produced by $F^3$ certainly contain such contractions. In this sense, $F^3$ is the first higher-dimensional operator with genuinely new polarization structures in the amplitudes. The $F^3$ operator represents a possible deviation of gluon interactions from those in QCD, which could be produced by new physics [5-7]. There have been phenomenological studies of the effects of $F^3$-modified amplitudes [5-10], which were systematically computed using the MHV vertex expansion in [11] (for BCFW recursions see [12]). In the following we denote the matrix element with $n$ gluons and a single insertion of $F^3$ as $M^{F^3}_n$.¹

¹ The effective Lagrangian is $L = F^2 + \alpha' F^3 + O(\alpha'^2)$. We strip off the coupling $g^{n-2}$ for the pure Yang-Mills amplitude $M^{\rm YM}_n$, and $3\alpha'\, g^{n-2}$ for $M^{F^3}_n$.

The $F^3$ modifications do not change the group-theory structure of the Yang-Mills action, and in particular the color decomposition of $M^{F^3}_n$ is identical to that of $M^{\rm YM}_n$.
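To make the new polarization structures concrete, note (as a minimal illustration; the normalization here is ours) that in terms of the linearized field strengths $f^{\mu\nu}_a = k^\mu_a \epsilon^\nu_a - \epsilon^\mu_a k^\nu_a$, the three-point matrix element with a single $F^3$ insertion can be written as

$$M^{F^3}_3 \;\propto\; {\rm tr}(f_1 f_2 f_3) \;=\; (\epsilon_1\cdot k_2)(\epsilon_2\cdot k_3)(\epsilon_3\cdot k_1) - (\epsilon_1\cdot k_3)(\epsilon_2\cdot k_1)(\epsilon_3\cdot k_2),$$

where the second equality holds on three-point massless kinematics ($k_a\cdot k_b = 0$), so that all mixed terms containing a $k\cdot k$ contraction drop out. The surviving $(\epsilon\cdot k)^3$ structures are exactly the contractions absent from Yang-Mills and open-superstring amplitudes.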
If we restrict ourselves to pure gravitons, then at O(α ) the amplitude is produced by the R 2 operator only, but at O(α 2 ) it receives contribution both from R 3 operator as well as two insertions of R 2 operators with exchange of a dilation φ. Nevertheless, in the following we will refer to gravity amplitudes from the effective action at O(α ) and O(α 2 ) as the R 2 and R 3 amplitudes, respectively. Equivalent to the double-copy construction, the corresponding amplitudes can be obtained from those in open strings using field-theory limit of Kawai-Lewellen-Tye (KLT) relations [15]. 2 Given that M F 3 n is the O(α ) correction to M YM n in open string theory, M R 3 n at O(α 2 ) comes from double-copy/KLT of two copies of M F 3 n , while M R 2 n at O(α ) can be obtained as the double-copy/KLT of M F 3 n with M YM n . In four dimensions, it is natural to split the field strength F into self-dual and anti-selfdual parts F µν ± = F µν ±F µν , and we have amplitudes produced by F 3 + and F 3 − accordingly. The only possible modification to the three-point on-shell gluon amplitudes are the while for any other helicities F 3 amplitudes vanish. R 3 amplitudes at O(α 2 ) are the squaring of A F 3 3 , and it is important to note that pure graviton amplitudes in four dimensions are from two copies of gauge-theory amplitudes with identical helicities: On the other hand, R 2 -modified three-graviton amplitude vanishes for any helicities because is non-vanishing only for (−, −, +) and (+, +, −). In fact, M R 2 n vanishes for any number of gravitons in four dimensions, because there R 2 is a total derivative and cannot produce non-vanishing matrix element. This immediately gives a very interesting relation observed in [13], namely in four dimensions the KLT of M F 3 n and M YM n with same helicity configurations must vanish. A similar relation also observed in [13] is that the KLT of M JHEP02(2017)019 where ⊗ KLT means combining two sets of gauge-theory amplitudes via KLT relations reviewed below, and every pair of gluons must have polarizations with identical helicity, ± . For general n, these are highly non-trivial relations or F 3 amplitudes in four dimensions. In this paper we study these amplitudes from higher-dimensional operators in the Cachazo-He-Yuan (CHY) formulation [16,17]. It expresses tree-level S-matrices of massless particles as integrals over the moduli space of punctured Riemann spheres, and naturally incorporates a large variety of theories [18][19][20]. As we will review shortly, in the formula for Yang-Mills or Einstein gravity, the most important ingredient is the reduced Pfaffian (or determinant) of a matrix Ψ n ( ), with manifest gauge/diffeomorphism invariance. The reduced Pfaffian encodes polarization dependence of amplitudes in Yang-Mills, as well as from higher-dimensional operators of open superstrings. We will present remarkably simple formulas for M F 3 n , M R 3 n and M R 2 n , which are related to each other through KLT/double-copy constructions. The formulas are all based on one new, gauge-invariant ingredient, P n , constructed from the same matrix Ψ n ( ) with mass dimension higher than the reduced Pfaffian by two. Just as the reduced Pfaffian being the basic object for gluon amplitudes with supersymmetries, P n can be regarded as the basic object for non-supersymmetrizable operators, at least for this lowest dimension. Furthermore, we study P n in four dimensions, where any CHY formula naturally becomes a sum of contributions from different sectors [21,22]. 
Given any helicity configuration, it has been known for some time that the reduced Pfaffian is only non-vanishing in one particular sector. This reproduces various twistor string formulas for (super)Yang-Mills and gravity amplitudes [23][24][25][26][27]. As we will see shortly, the reduction of P n to four dimensions is very different: it vanishes on exactly that sector where reduced Pfaffian is non-vanishing. 3 In this sense P n is strictly "orthogonal" to the reduced Pfaffian, which means that the product of them vanishes in all sectors and cannot produce any non-zero amplitudes in four dimensions! This is the origin of the vanishing R 2 amplitude in four dimensions, which is the KLT of F 3 and Yang-Mills amplitudes. Remarkably, we will also learn how self-dual and anti-self-dual parts appear from our formulas in four dimensions. We find that all the solution sectors that contributes to F 3 and R 3 amplitudes can be naturally divided into two complementary groups; M − n ) are given by the sum of contributions from the two groups respectively, which also explains their orthogonality. In addition to providing a proof for (1.4), our formulas show other nice features of F 3 amplitudes in four dimensions as well, such as the "Parke-Taylor-like" formula for M F 3 + n with three negative-helicity gluons [13]. The paper is organized as follows. After briefly review the CHY formulas for Yang-Mills and gravity as well as KLT relations in section 2, we introduce the new ingredient P n which lead to CHY formulas for all these amplitudes from higher-dimensional operators in section 3. In section 4, we discuss P n in four dimensions, including its orthogonality to the reduced Pfaffian, and the split into self-dual and anti-self dual parts. Discussions and an appendix on reducing to four dimensions will be presented in the end. JHEP02(2017)019 2 A brief review of CHY and KLT The universal part of CHY formulas contains the so-called scattering equations [16,17,21] where s a b = (k a + k b ) 2 = 2k a · k b , σ a is the a th puncture. The tree-level S-matrix of n massless particles is written as an integral localized on the support of (2.1) where the precise definition of the integral measure including delta functions can be found in [17], and I n is the CHY integrand that defines the theory. In the second equality one sums over (n−3)! solutions of (2.1), with J n the Jacobian of delta functions. In particular, the integrands for tree amplitudes in gravity, in Yang-Mills and a bi-adjoint φ 3 theory are [18]: The two ingredients are the Parke-Taylor factor which can be dressed with color factors, PT(α) := 1 σ α(1),α(2) · · · σ α(n),α (1) , C n = α∈S n−1 and a 2n × 2n skew matrix Ψ n that depends on polarization vectors: 4 Note that the matrix is degenerate since it has two null vectors, but we can define its reduced Pfaffian by deleting two columns and rows among the first n: This definition is permutation invariant, and it has the appropriate SL(2, C) weight and correct mass dimension, [mass] n−2 for producing Yang-Mills and gravity amplitudes via (2.2) and (2.3). The most important property of Pf Ψ n is that on the support of scattering equations, it is invariant under gauge transformation µ a → µ a + αk µ a [17] JHEP02(2017)019 After color decomposition, one obtains color-ordered, partial amplitudes, M YM (α), for Yang-Mills and double-partial amplitudes, m(α|β), for bi-adjoint scalar theory, with PT factors in the integrands. 
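Since the review above leans on them repeatedly, it is worth recording the standard CHY ingredients explicitly, in the usual conventions of [16, 17] (a hedged transcription; signs and normalizations follow the common conventions and may differ from the source):

\[
\sum_{b\neq a}\frac{s_{ab}}{\sigma_a-\sigma_b}=0\,,\quad a=1,\dots,n\,,\qquad
\mathrm{PT}(\alpha)=\frac{1}{\sigma_{\alpha(1)\alpha(2)}\,\sigma_{\alpha(2)\alpha(3)}\cdots\sigma_{\alpha(n)\alpha(1)}}\,,\quad \sigma_{ab}\equiv\sigma_a-\sigma_b\,,
\]
\[
\Psi_n=\begin{pmatrix} A & -C^{T}\\ C & B \end{pmatrix},\qquad
A_{ab}=\frac{k_a\cdot k_b}{\sigma_{ab}}\,,\quad
B_{ab}=\frac{\epsilon_a\cdot \epsilon_b}{\sigma_{ab}}\,,\quad
C_{ab}=\frac{\epsilon_a\cdot k_b}{\sigma_{ab}}\;(a\neq b)\,,
\]

with vanishing diagonal entries except $C_{aa}=-\sum_{b\neq a}\epsilon_a\cdot k_b/\sigma_{ab}$, and the reduced Pfaffian

\[
\mathrm{Pf}'\Psi_n=\frac{(-1)^{i+j}}{\sigma_{ij}}\,\mathrm{Pf}\big(\Psi_n\big)^{ij}_{ij}\,,\qquad 1\le i<j\le n\,,
\]

obtained by deleting rows and columns $i,j$ among the first $n$. The KLT pairing reviewed next is then simple linear algebra: its kernel is the inverse of the matrix of bi-adjoint double-partial amplitudes. A minimal numerical sketch (the arrays below are placeholders; in practice they would come from an actual CHY evaluation):

```python
import numpy as np

def klt_product(A_L, A_R, m):
    """Field-theory KLT: pair two sets of color-ordered amplitudes with
    the kernel S = m^{-1}, where m is the (n-3)! x (n-3)! matrix of
    bi-adjoint double-partial amplitudes m(alpha|beta)."""
    S = np.linalg.inv(m)          # KLT kernel as the inverse of m
    return A_L @ S @ A_R

# toy (n-3)! = 2 example (n = 5); the numbers are placeholders
m   = np.array([[ 2.0, -1.0],
                [-1.0,  3.0]])
A_L = np.array([1.0, 0.5])
A_R = np.array([0.2, 1.5])
print(klt_product(A_L, A_R, m))
```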
The field-theory limit of KLT relations can now be expressed as: where α, β are in a basis of (n − 3)! orderings [30,31], and the KLT product of two sets of amplitudes is defined as their bilinear with the kernel given by the inverse of the matrix (n−3)! × (n−3)! matrix m [32]. It is a simple linear-algebra proof [20] that (2.7) follows from (2.2) and (2.3), which applies to general theories. Given any theory with CHY formula with its integrand of the form I target n = L n R n , we can define two sets of partial amplitudes M L(R) n from CHY formula with integrands I L(R) n = PT L n (R n ) respectively. Then we have a general KLT relations among these amplitudes, M target n = M L n ⊗ KLT M R n . Now we can write down the general form of the CHY formula for these amplitudes from higher-dimensional operators. Given that F 3 amplitudes have the same colordecomposition as well as BCJ relations as Yang-Mills amplitudes, one can always write its CHY integrand as the product of C n (Parke-Taylor factor for partial amplitude) and a permutation invariant object that depends on polarizations. Let us call this new object as P n ( ), which must also be gauge invariant, and have mass dimension higher than Pf Ψ by two. Now the KLT relations formulated above immediately imply CHY formulas for M . The form of CHY integrands for these amplitudes are: In the remainder of the paper, we will present the result for P n and study its various interesting properties, such as soft limits and reduction to four dimensions. A new ingredient in CHY formulation In this section we will generalize Pf Ψ n to the new object P n . As basic requirements, it must be permutation and gauge invariant, must have the same SL(2, C) weight and dimension [mass] n . The most natural and perfect candidate would of course be Pf Ψ n if it had not been zero! Nevertheless, we will see that P n can be built from Pf Ψ n . Let's first give a natural decomposition of Pf Ψ n into objects that already satisfy all the conditions above individually. These will be the building blocks for our P n . This interesting decomposition was essentially introduced in [33]. From the definition of Pfaffian and thanks to the special structure of 2n × 2n matrix Ψ n , we can expand PfΨ n as a sum over n! permutations of labels 1, 2, . . . , n, denoted as p ∈ S n where sgn(p) denotes the signature of the permutation p and in the second equality, we use the unique decomposition of any permutation p into disjoint cycles I, J, · · · , K given by JHEP02(2017)019 each Ψ p is the product of its "cycle factors" Ψ I Ψ J · · · Ψ K , which we define now. When the length of a cycle equals one, its cycle factor Ψ (a) is given by the diagonal of C-matrix: and when the length exceeds one e.g. i > 1, the cycle factor is given by Here the trace is over Lorentz indices and f µν are the linearized field strengths of gluons. Note that the decomposition is manifestly gauge invariant: for cycle factors with length more than 1 (3.4), the trace of linearized field strengths is gauge invariant, while for 1cycles, (3.3), the factor is gauge invariant on the support of scattering equations. The proof for the decomposition is elementary and we refer to [33] for more details. Let us look at some examples to illustrate the procedure. For Pf Ψ 2 we immediately have (12) . 
(3.5) In the second equality, C 11 C 22 = Ψ (1) Ψ (2) corresponds to the permutation (1)(2); the terms A 12 B 21 and C 12 C 21 have a common denominator σ 12 σ 21 , and the numerators combine to , thus we have the desired cycle factor Ψ (12) . For PfΨ 3 , there are new building blocks of length 3, (123) and (321). Four terms from the expansion of PfΨ 3 corresponds to (123) with a common denominator σ 12 σ 23 σ 31 , and similarly for (321) (with denominator σ 32 σ 21 σ 13 = −σ 12 σ 23 σ 31 ). Note that neither of them is gauge invariant, but the sum of the two is: the eight terms of their numerators nicely combine to tr(f 1 f 2 f 3 )! It is convenient to assign to each of them half of the trace, i.e. 1 2 tr(f 1 f 2 f 3 ), as the numerator. Thus we arrive at (3.4) as expected, and we have Given the decomposition of PfΨ n as in (3.1), we can classify Ψ p 's by the lengths of its cycles {i, j, . . . , k}. For example, in (3.7) the first term is of the type {1, 1, 1} as it is the product of three 1-cycles; the three terms in the bracket are all of the type {1, 2}, and the last two terms are {3}. The reason for doing so is of course to group together terms in (3.1) of the same type, and write a manifestly permutation invariant decomposition of Pf Ψ n . Furthermore, note that the signature of a permutation is given by n minus the number of cycles, so one can sum over all permutations of the same type with identical signs. Let's define permutation invariant building blocks as follows: which is a sum of Ψ p 's for all permutations of the same type {i 1 , i 2 , . . . , i r }, with i 1 + i 2 + · · · + i r = n , and the convention : Here each P is by construction permutation invariant. Let's again see some examples: With (3.8), (3.1) can be rewritten as a permutation invariant decomposition of Pf Ψ n : Let us spell out the decomposition for n = 3 (see (3.7)) and n = 4, 5: Note that each P and thus any linear combination of them immediately satisfy our conditions above: correct SL(2, C) weight and mass dimension, permutation and gauge invariance. PfΨ n = 0 means that these building blocks are not all independent: (3.11) gives linear relations between different P 's. In the following, we will present a very special linear combination that leads to the correct CHY formula F 3 amplitudes. Let us first present the answer and then discuss its special properties. It turns out one only needs to modify coefficients of (3.11) a bit to obtain P n : 13) were N i>1 denotes the number of indices in i 1 , i 2 , · · · , i m which are larger than 1, or the number of cycles with length at least 2; c is just any constant because we can add any multiple of (3.11) without changing the answer. The formula can simplify when we choose the constant c to be certain integers. For example, two convenient choices are c = −1 and c = 0 respectively, and we have: JHEP02 (2017)019 where in the first representation P 11···1 is always present with −1 but any P with only one index i > 1 are always absent (including P n ); in the second one P 11···1 is always absent. To give one more example, here is P 6 with c = −1: We have checked thoroughly that (3.13) gives correct F 3 amplitudes. First of all, one can easily verify that for n = 3, 4, the formula reproduces correct amplitudes as computed from Feynman diagrams. In the next section we will provide very strong evidence for its validity, including checks for all helicities up to n = 8 and some all-multiplicity results. 
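The bookkeeping behind this classification — grouping the $n!$ permutations in (3.1) by cycle type and weighting by $\mathrm{sgn}(p)=(-1)^{n-\#\text{cycles}}$ — can be checked with a short enumeration (pure combinatorics; the kinematic cycle factors $\Psi_I$ are not computed here):

```python
from itertools import permutations
from collections import Counter

def cycles(perm):
    """Decompose a permutation (a tuple on 0..n-1) into disjoint cycles."""
    seen, out = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        cyc, j = [], start
        while j not in seen:
            seen.add(j)
            cyc.append(j)
            j = perm[j]
        out.append(tuple(cyc))
    return out

n = 4
types = Counter()
for p in permutations(range(n)):
    cs = cycles(p)
    sign = (-1) ** (n - len(cs))               # sgn(p) = (-1)^{n - #cycles}
    ctype = tuple(sorted(len(c) for c in cs))  # e.g. (1,1,2), (4), ...
    types[(ctype, sign)] += 1

for (ctype, sign), count in sorted(types.items()):
    print(f"type {ctype}: {count} permutations, sign {sign:+d}")
```

For n = 4 this prints the five cycle types {1,1,1,1}, {1,1,2}, {1,3}, {2,2}, {4} with multiplicities 1, 6, 8, 3, 6 and uniform signs within each type, which is exactly what allows the sum (3.1) to be reorganized into the permutation-invariant building blocks of (3.8).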
Here we provide another important check, that is its behavior under soft limits. Recall that in CHY formula for gravity and Yang-Mils, Weinberg's soft theorem [34] becomes manifest due to the simple soft limits of Pf Ψ n . Let us take the a-th particle to be soft, that is k µ a = τ q µ with τ → 0. The soft graviton and soft gluon theorems are guaranteed by CHY formula in as long as we have Pf Ψ n → C aa Pf Ψ n−1 + O(τ ) where Pf Ψ n−1 has only hard particles. What is important here is that soft theorems are universal, thus apply to amplitudes from higher-dimensional operators as well [12]. For this to work the soft behavior of P n must be identical to that of Pf Ψ n . Let's check this explicitly. Note that in any term of P n , a must be in one of the cycles, and there are two possibilities. If it is a cycle of length at least 2, then in the numerator f µν a → O(τ ) as τ → 0 while the denominator remains finite, thus the cycle factor vanishes, Ψ (...a) → O(τ ). On the other hand, if it is in an 1-cycle then it remains finite For any P i 1 i 2 ···im with lengths of all cycles being at least 2, i.e. i m ≥ i m−1 ≥ . . . ≥ i 1 > 1 (note that here N i>1 = m), it vanishes as O(τ ) in any single soft limit. Therefore requiring P n to have correct soft behavior cannot constrain coefficients of such terms at all. On the contrary, the soft limit puts very strong constraints on those P 's that have at least one cycle with length 1, i.e. i 1 = 1. In the k a → 0 limit only those terms with a in an 1-cycle survive and dominate in the limit (other terms still vanish). Thus for any single soft limit, P 1,i 2 ,...,im → C aa P i 2 ,...,im where P i 2 ,...,im is the (n − 1)-point building block with particle a removed. Note that in (3.13) the coefficients are determined by N i>1 and independent of 1-cycles, thus P 1,i 2 ,...,im and P i 2 ,...,im have exactly identical coefficients. Thus we see that (3.13) indeed satisfies (P n−1 contains all particles except a): In other words, P n splits into two parts that behave very differently under soft limit where the first part is essentially fixed by soft limit, namely the coefficients for any P 1,... must be the same as that for the P with 1 removed. This explains why the coefficient JHEP02(2017)019 should not depend on how many 1-cycle are there. However, we have seen that soft limits put no constraints on the coefficients of the second part, which vanishes term by term. We believe that correct behavior under factorization limits of P n can completely fix the coefficients of the second part. However, even without resorting to that, we will now show that the coefficients in (3.13) are strongly constrained by another remarkable property of P n in four dimensions, namely it is orthogonal to Pf Ψ n . Four dimensions, orthogonality and self-duality In this section we study important properties of P n in four dimensions. Details for the reduction to four dimensions will be presented in [28]. As discussed in [21,22] and briefly reviewed in appendix A, in four dimensions, the (n−3)! solutions of scattering equations fall into n − 3 sectors labeled by k = 2, 3, . . . , n − 2, thus (2.2) becomes a sum over sectors and we define the contribution from sector k as T (k ) : On the other hand, for massless particles with spin in four dimensions, we specify helicities of the n particles, which can be divided into a set of particles with negative helicities, −, and the complementary one +. 
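In what follows, the standard bispinor form of the linearized field strength underlies the reduction (a hedged statement of conventions; overall normalizations are convention dependent): writing $f_a^{\alpha\dot\alpha\,\beta\dot\beta} = f_a^{\alpha\beta}\,\epsilon^{\dot\alpha\dot\beta} + \epsilon^{\alpha\beta}\,\tilde f_a^{\dot\alpha\dot\beta}$, a negative-helicity gluon has $f_a^{\alpha\beta} = \lambda_a^{\alpha}\lambda_a^{\beta}$ and $\tilde f_a = 0$, while a positive-helicity gluon has $\tilde f_a^{\dot\alpha\dot\beta} = \tilde\lambda_a^{\dot\alpha}\tilde\lambda_a^{\dot\beta}$ and $f_a = 0$. This is why the Lorentz traces in (4.4) below collapse into chains of angle brackets among the $-$ set and square brackets among the $+$ set.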
We denote the helicity sectors by the number of negativehelicity particles, k := | − | = 0, 1, . . . , n (| + | = n − k) and call the helicity amplitude in this sector, M n,k . A priori there is no relation between solution sector and helicity sector. However, it is known that Pf Ψ n vanishes unless k = k (in particular it vanishes for k = 0, 1, n−1, n), which means Yang-Mills and gravity amplitudes in helicity sector k only receives contribution from solutions in sector k = k T (k ) n,k = 0 , for any k = k , ⇒ M n,k = T Now we show that exactly the opposite is true for P n , namely it vanishes for k = k, thus The starting point of the reduction is the simple reduction of trace of linearized field strengths in four dimensions for any assignment of helicities: , (4.4) Here b 1 , b 2 , · · · , b y are all the particles of negative helicity from a 1 , a 2 , · · · , a x with its ordering unchanged and similarly p 1 , p 2 , · · · , p z are all the particles of positive helicity from a 1 , a 2 , · · · , a x with its ordering unchanged. Note that tr (f a 1 f a 2 · · · f ax ) directly vanishes JHEP02(2017)019 if there is only one particle of negative helicity or only one particle of positive helicity in a 1 , a 2 , · · · , a x . However we see that the remaining case still effectively vanish as we always add up all permutations (see (3.8 Here the sum is over ordered permutations "OP", namely permutations of the labels in the joined set {b 1 , b 2 , · · · , b y }, {c 1 , c 2 , · · · , c z } such that the ordering within {b 1 , b 2 , · · · , b y } and {c 1 , c 2 , · · · , c z } is preserved. Therefore, in the sum of (3.1), we can effectively write tr (f a 1 f a 2 · · · f ax ) in 4d in a remarkably simple way: Motivated by (4.6), we recall the off-diagonal elements of the k × k matrix h k and (n−k) × (n−k) oneh n−k essentially introduced in [35] (see also [26,27]): As discussed above, it is clear that when we have any cycle factor with length at least 2, effectively it reduces to the chain product of such off-diagonal elements in 4d: To this point we have not used scattering equations or solution sectors in four dimensions. As we prove in the appendix A, the really non-trivial part of the reduction concerns 1-cycle, or the diagonal entries of C-matrix. Note that Ψ (a) = C aa is only gauge invariant on the support of scattering equations, so it is not surprising that to reduce it nicely one needs to use scattering equations in four dimensions. We first discuss the k = k case: miraculously, by plugging in scattering equations in k = k sector, C aa reduces to diagonal entries of h k orh n−k [35] depending on the helicity: The details of the proof is given in appendix A; t's andt's are determined by scattering equations in 4d but here we can just view them as free variables, and the important thing is that each diagonal entry is a linear combination of off-diagonal entries in that row/column. Before we prove the vanishing of P n in k = k sector, let us again return to our favorite PfΨ n , and first show the following identity as a warm up: (4.10) Obviously both det h k and deth n−k vanish since they both have a null vector; this is consistent with the fact that PfΨ n vanishes due to the two null vectors. To show (4.10), we decompose det h k , deth n−k in a way similar to that of PfΨ n , e.g. for det h k we have where the sum is over all permutations of particles of negative helicity, i.e. q ∈ S k and I 1 , I 2 , · · · , I s are the cycles of the permutation q. 
We can further define then det h k can be rewritten as a sum of H and similarly works deth n−k , where we have introduced shorthand notation for the summation range, {i} k means i 1 + i 2 + . . . i = k and i 1 ≤ i 2 ≤ · · · ≤ i and similarly for {ĩ}˜ n−k . Both P n and PfΨ n are built from P 's, so the key identity here is for the reduction of the P 's, which nicely follow from (3.8), (4.12) and (4.8): where, recall that any cycle factor in P is only non-vanishing when all particles belong to the same helicity set, thus the sum in P "factorizes" into sums in − set and those in + set, which give H andH; the additional sum in (4.14) is over all distinct partition of i 1 i 2 · · · i m into two parts j 1 j 2 · · · j andj 1j2 · · ·j˜ , with j 1 + j 2 + · · · + j = k and j 1 +j 2 + · · · +j˜ = n − k. For example, any P 11···1 reduces to H 11···1 andH 11···1 : We have more examples for n = 4, k = 2 and n = 7, k = 3, Given (4.14), it is trivial to show (4.10) using (3.11) and (4.13). Although both sides vanish, this is still an example of the remarkable simplifications in a given sector in four dimensions: we see that most of the terms vanish and the number of terms are reduced from JHEP02(2017)019 n! to k! × (n−k)!. Along the same line but in a more non-trivial way, similar simplification happens for the reduction of Pf Ψ n which will be present in [28]. We turn to the reduction of P n . By dividing N i>1 in (3.13) into two parts N j>1 and Nj >1 (set c = 0) which depend on − and + sets respectively, and P n reduces to: Thanks to the vanishing of deth n−k and det h k , we immediately see that P n vanishes for k = k . Before proceeding, let's provide a few explicit examples of (4.17) for P 4 and P 5 : It is a remarkable fact that P n vanishes for k = k sector. As we mentioned before, this property can be used to constrain the second part of P n which are not constrained by soft limits at all. Up to n = 8, we found that the constraints that P n vanishes for k = k sector for all helicity sectors uniquely fix all coefficients in P n . This property means that P n is completely orthogonal to Pf Ψ n in four dimensions. For evaluating helicity amplitudes for Yang-Mills/gravity vs. those for F 3 or R 3 , one always uses complementary set of solutions of scattering equations. This seems to be the scattering-equation origin of the vanishing of 4d R 2 amplitudes, which has a CHY integrand P n Pf Ψ n . In general dimensions, the integrand is of course non-zero, but once we reduce to four dimensions, it vanishes for every solution of scattering equations! The derivation of (4.17) applies to any k = k case as well, with the only difference being that the reduction of 1-cycle i.e. C aa needs to be modified. As shown in appendix A, we can generalize the diagonal entries of the two matrices h k k andh k n−k depending on the solution sector k and helicity sector k. The upshot is that (4.17) still holds for any k = k sector with generalized matrices h k k andh k n−k . Just by inspecting the matrices, it turns out that we again have det h k k = 0 for k < k and deth k n−k = 0 for k > k, thus for any k , only one of the two terms in (4.17) remain non-vanishing. In view of this, it becomes very natural to divide the sectors into two groups: those with k < k and those with k > k, and the question is does this separation means anything sensible for F 3 and R 3 amplitudes in four dimensions? 
The answer is affirmative: the sums of contributions from the two complementary groups correspond to the self-dual and anti-self-dual amplitudes, respectively. For $F^3$ amplitudes the proposal reads, with SD and $\overline{\rm SD}$ denoting the self-dual and anti-self-dual parts,
\[
M^{F^3,\,\mathrm{SD}}_{n,k} \;=\; \sum_{k'=2}^{k-1} T^{(k')}_{n,k}\,,\qquad
M^{F^3,\,\overline{\mathrm{SD}}}_{n,k} \;=\; \sum_{k'=k+1}^{n-2} T^{(k')}_{n,k}\,. \qquad (4.20)
\]
An immediate consequence of (4.20) is that the self-dual and anti-self-dual parts are orthogonal, which implies the second relation in (1.4). These are very non-trivial relations in the usual representation of the amplitudes, but they become obvious from (4.20). Given that their KLT vanishes, it immediately follows that (4.20) also applies to $R^3$ amplitudes. There is very strong evidence that (4.20) must be correct. First of all, it implies the well-known fact that for k = 0, 1, 2 the self-dual amplitudes vanish (no sectors k' < 2 exist) and only anti-self-dual amplitudes remain, while for k = n, n−1, n−2 there are only self-dual amplitudes (no sectors k' > n−2 exist). To provide more non-trivial evidence for (4.20), we have checked our proposal for the self-dual $F^3$ amplitudes against [13] for all helicities up to eight points. We have evaluated our formula numerically for solutions in all sectors k' ≠ k, and find that the self-dual amplitude is the sum of those sectors listed in table 1.

Table 1. Solution sectors k' contributing to the self-dual F^3 amplitude, for multiplicities n = 3, ..., 8 (rows) and helicity sectors k = 3, ..., 8 (columns); a dash marks entries that do not occur.

n\k    3    4     5       6         7           8
3      2    -     -       -         -           -
4      2    2,3   -       -         -           -
5      2    2,3   2,3     -         -           -
6      2    2,3   2,3,4   2,3,4     -           -
7      2    2,3   2,3,4   2,3,4,5   2,3,4,5     -
8      2    2,3   2,3,4   2,3,4,5   2,3,4,5,6   2,3,4,5,6

Our proposal suggests that there is a natural origin for self-dual and anti-self-dual amplitudes from solution sectors of scattering equations in 4d. Note that individually the $T^{(k')}_{n,k}$ are not physical for general k and k', since they can contain spurious poles, as is familiar from the reduction of bi-adjoint $\phi^3$ to four dimensions [36]. The interesting point is that, unlike in the scalar case where one has to sum over all sectors, here by summing over subsets of sectors, namely those with k' < k and those with k' > k, we already obtain the physical amplitudes $M^{F^3,\,\mathrm{SD}}_{n,k}$ and $M^{F^3,\,\overline{\mathrm{SD}}}_{n,k}$. There is a special case in which we do not need to sum over sectors at all, and it also serves as an important check of the proposal (4.20). This is the $F^3_-$ amplitude with three negative-helicity gluons, i.e. k = 3, which receives a contribution only from the k' = 2 sector, $M^{F^3_-}_{n,3} = T^{(2)}_{n,3}$; moreover, it is well known that there is a unique solution in that sector. To be concrete, let us choose the three particles of negative helicity as p, q, r. For k = 3 and k' = 2, the generalized version of (4.17) has a vanishing first term, and the second term evaluates to a closed-form expression (the details are given in appendix A).

Discussions

In this paper we studied tree-level amplitudes from higher-dimensional operators, including the $F^3$ modification to the Yang-Mills action, and those to Einstein gravity from bosonic closed strings at the lowest orders. We proposed new CHY formulas for these amplitudes, (2.8), and all the modifications are naturally encoded in one new ingredient, $P_n$, as given in (3.13). The reduced Pfaffian is the natural object for Yang-Mills and gravity amplitudes, and $P_n$ is the first genuinely new object that generalizes it to higher-dimensional operators. By construction it is manifestly permutation invariant and gauge invariant, and has the correct behavior under soft limits. Moreover, $P_n$ has very interesting properties in four dimensions for a given helicity configuration: it vanishes in exactly the one solution sector in which Pf Ψ n is non-vanishing (4.3), and it is natural to divide the remaining sectors to obtain the self-dual and anti-self-dual parts of $F^3$ and $R^3$ amplitudes, (4.20). Note that tree-level amplitudes in open superstring theory admit a CHY-like representation with the reduced Pfaffian dressed by a linear combination of Parke-Taylor factors, where the sum is over (n − 3)!
orderings, with scalar coefficients F 's containing the full α -dependence. Therefore, at any order in the α -expansion (see [38]), the amplitude always admits a CHY representation with the reduced Pfaffian Pf Ψ n (times a linear combination of Parke-Taylor factors). To give a very nice example, let's work out the CHY integrand for gluon amplitude from F 4 operator at O(α 2 ) of the open superstring effective action. This is the first supersymmetrizable correction to Yang-Mills theory, and the amplitude with one insertion of F 4 has been studied in four dimensions [39] and in general dimensions [40] from superstring theory. There is also an interesting observation that MHV F 4 amplitude is proportional to the famous all-plus amplitude at one-loop level [41]. It turns out that F 4 color-ordered amplitude have a remarkably compact CHY formula which has been verified against the all-multiplicity result in [42]. The prefactor can be viewed as a CHY D-dimension "uplift" of the spinor numerator i j [j k] k l [l i] for allplus/MHV F 4 amplitude, which the formula reduces to for MHV helicities. To all orders in α , no new ingredient for polarizations is needed for amplitudes from any operator from superstrings, which is in sharp contrast with P n for F 3 amplitude! Our JHEP02(2017)019 results for F 3 amplitudes and the double copies may open up a new direction for encoding more higher-dimensional operators in CHY formulation. Among other things, this can shed new lights in understanding amplitudes from the bosonic string effective action along the line of [43]. Besides, from our formulas we can obtain BCJ numerators for F 3 amplitudes, similar to the Yang-Mills case in [18]. Along this line (also see [44] from string theory), we hope to understand better the color/kinematics duality for F 3 and beyond. The most intriguing feature of the new object P n is its properties when reduced to four dimensions. With the only exception of bi-adjoint scalar theory [36], every CHY formula so far is only non-vanishing in one sector of the 4d scattering equations (for given helicities), and they all nicely correspond to ambitwistor string theory [45] with worldsheet supersymmetries [46,47]. P n is totally different and it is likely to correspond to correlators from some bosonic version of the worldsheet models. It would be highly desirable to find such models. It would also be very interesting to see how these features in four dimensions can be derived from some four-dimensional ambitwistor string models directly [35]. Our formula in gauge theory is for gluon amplitudes with a single insertion of F 3 operator, so it is also a formula for form factors in the soft limit. In the limit, it can be viewed as a very non-trivial generalization of earlier four-dimensional results on form factors for F 2 operator [48] and those in N = 4 SYM [49]. An outstanding open question in this direction is about extending the construction to include multiple insertions of operators. Last but not least, recently there has been progress on loop integrands from scattering equations [50][51][52][53][54], and it would be highly desirable to see if our results can shed new lights on obtaining integrated loop amplitudes in this formulation. In particular, the amplitudes we studied here can be considered as counterterms for UV divergences of such loop amplitudes (see very interesting recent studies of Gauss-Bonnet term in quantum gravity [55,56]), and it certainly deserves further investigations along this direction. 
JHEP02(2017)019 Here − and + are arbitrary two sets of the n external particles, with their length equal to k and n − k separately. The variables are σ's and t's, which can be combined into n variables in C 2 , σ α a = 1 ta (σ a , 1), and the two bracket is defined as (a b) := (σ a − σ b )/(t a t b ). Each solution of (2.1) corresponds to a unique solution {σ a , t a } of (A.1) for some k , with identical cross-ratios of the σ's. For each k , (A.1) have Eulerian number of solutions, E n−3,k −2 , and the union of them for all sectors give (n−3)! solutions of (2.1), with (n−3)! = n−2 k =2 E n−3,k −2 [21]. When reducing CHY formulas to 4d, it is convenient to view (A.1) as a change of variables: we refer to λ I=1,...,k ,λ i=k+1,...,n and t a , σ a as "data" and (A.1) as writing λ i=k+1,...,n andλ I=1,...,k in terms of the data. This is equivalent to evaluation on the support of solutions in the k sector. Based on these considerations, now we derive the explicit expression when reducing C aa to four dimensions. When a ∈ − and a ∈ − , By plugging in the solutions in k sector, or equivalently a change of variable, we have Similarly we can work out the other two cases, and the final result is where the diagonal elements of the matrixh 2 n−3 are given bỹ Thus we have seen thath 2 n−3 is nothing but the reduced matrix |h n−2 | r r . The 4d formula for the self-dual F 3 amplitude with k = 3 now reads
Evaluation of a Prototype of a Novel Galactomannan Sandwich Assay Using the VIDAS® Technology for the Diagnosis of Invasive Aspergillosis

Objectives: To evaluate the analytical and clinical performance of a prototype of a VIDAS® Galactomannan (GM) unitary test (bioMérieux, Marcy l'Etoile, France) and compare it to that of the Platelia™ Aspergillus Ag assay (Bio-Rad, CA, USA).
Methods: Repeatability, reproducibility, and freeze-thaw stability of VIDAS® GM were evaluated. Sera from patients at risk of IA were concurrently tested with both the VIDAS® GM and Platelia™ Aspergillus Ag assays. Correlations between the two assays were assessed by Passing-Bablok (PB) regression and performance by ROC analysis.
Results: The correlations between the VIDAS® GM indexes after one and two cycles of freezing/thawing were r=1.00 and r=0.989, respectively. The coefficients of variation for negative, low-positive, and positive sera were 13%, 6%, and 5% for repeatability and 14.4%, 7.2%, and 5.5% for reproducibility. Overall, 126 sera were tested with both assays (44 fresh and 82 frozen). The correlation between VIDAS® GM and Platelia™ Aspergillus Ag was r=0.798. The areas under the curve of the ROC analyses were 0.892 and 0.894 for VIDAS® GM and Platelia™ Aspergillus Ag, respectively.
Conclusions: This new VIDAS® GM prototype assay showed adequate analytical and clinical performance and a good correlation with that of Platelia™ Aspergillus Ag across 126 sera, although these results need to be confirmed in a larger prospective and multicentric study. As for the other VIDAS® assays, VIDAS® GM is a single-sample automated test using a solid reagent strip and receptacle. It is easy to use and suitable for rapid on-demand test results.

INTRODUCTION

Invasive aspergillosis (IA) is an opportunistic infection that occurs mainly among immunocompromised patients. Its incidence has increased with the increasing use of immunosuppressive therapies (Vallabhaneni et al., 2017). IA is associated with high morbidity and mortality, especially if diagnosis and treatment are delayed (Cornely et al., 2017). Since 1990, biological markers, mainly galactomannan (GM), have considerably improved the diagnosis of IA and allowed it to be made earlier (Guo et al., 2010; Jenks et al., 2019). Until very recently, Platelia™ Aspergillus Ag (Bio-Rad), based on a sandwich enzyme immunoassay (EIA) technique, was the assay most widely used for GM detection. A meta-analysis showed a pooled sensitivity and specificity for proven cases of 0.71 and 0.89, respectively (Pfeiffer et al., 2006). However, well-known limitations include poor reproducibility and repeatability and the need to batch the samples in series, resulting in a loss of speed (Oren et al., 2012). Several lateral-flow device assays detecting either GM or another antigen have recently been commercialized to respond to the need for a rapid and easy-to-use single-sample test (Thornton, 2008; Jenks and Hoenigl, 2020; Mercier et al., 2020). These tests performed better on bronchoalveolar lavages than on sera, in which their sensitivity was lower than that of Platelia™ Aspergillus Ag (Donnelly et al., 2020). Here, we evaluated the analytical and clinical performance of a prototype of a VIDAS® GM (bioMérieux) unitary test. As for the other VIDAS® tests, it is a fluorescent EIA packaged in ready-to-use disposable strips.

METHODS AND RESULTS

This monocentric retrospective and prospective study included 126 sera from 30 patients at risk of IA at the University Hospital of Grenoble (France).
The patients had mainly hematological malignancies (n=27), including allogeneic bone marrow transplantation (n=21), and solid organ transplantation. Eighteen probable and 6 possible IA were diagnosed by a local multidisciplinary aspergillosis committee. The EORTC/MSG criteria were used to classify the patients (Donnelly et al., 2020). Probable cases were mostly classified as such on the basis of the GM (Platelia™ Aspergillus Ag) results and/or Aspergillus PCR. The possible cases (only radiological/clinical criteria) were not considered as IA cases. Patients were screened for GM with Platelia™ Aspergillus Ag, and the samples were collected as part of routine clinical care and registered in the certified biological collection DC-2008-582. Both frozen and fresh samples were analyzed. Frozen and fresh samples were tested the same day with the two assays to assess the effect of storage at −80°C on GM detection. Platelia™ Aspergillus Ag was performed according to the manufacturer's instructions using an automated EVOLIS Premium® system (Bio-Rad), and a GM index cut-off value of 1 was used for positive samples, as recommended in the recent revision of the EORTC/MSG criteria (Donnelly et al., 2020). VIDAS® GM is an automated qualitative sandwich assay with a coated solid-phase receptacle that also serves as a pipetting device. Samples are heat pre-treated with EDTA, as for the Platelia™ Aspergillus procedure. In the instrument, after a dilution step, GM is captured between the coated mouse monoclonal antibody (mAb) and the detection rat mAb conjugated to biotin. Alkaline phosphatase linked to an anti-biotin antibody hydrolyzes the substrate into a fluorescent product read at 450 nm. The assay prototype uses a standard (S1) and a positive control. A relative fluorescence value (RFV) is generated and automatically calculated by the instrument, according to S1, and an index value (I) is calculated as I = RFV(sample)/RFV(S1). IA cases were classified as proven or probable according to the 2020 EORTC/MSG criteria (Donnelly et al., 2020). Appropriate permissions have been obtained from bioMérieux for the copyright of the VIDAS® trademark. Overall, 126 sera were tested with both assays (44 fresh and 82 archived at −80°C). We evaluated the stability of VIDAS® GM after one and two cycles of freezing for seven days (at −80°C) and thawing (at room temperature), using 9 and 11 samples, respectively. We used the Passing-Bablok (PB) test and Analyse-it 5.0 software. The PB regression showed excellent correlations between the VIDAS® GM indices after one (r=1.00) and two cycles (r=0.989) of freezing/thawing. The repeatability (within-run precision) and reproducibility (total precision) of VIDAS® GM were evaluated from four sera (one negative, one low-positive, and two positive) measured in triplicate twice a day for 3 days, for a total of 18 measurements per serum. The coefficients of variation for negative, low-positive, and positive sera were 13%, 6%, and 5% for repeatability and 14.4%, 7.2%, and 5.5% for reproducibility (SAS Add-in 9.2 software). The PB correlation between VIDAS® GM and Platelia™ Aspergillus Ag levels was r=0.798 (Figure 1). There was good agreement between the two assays for positive and negative results, with a Cohen kappa index of 0.82 (95% CI=0.71-0.93), based on a VIDAS® GM cut-off of 1, calculated from the PB equation (Figure 1; Y=0.1166+0.8811X), and a Platelia™ Aspergillus Ag cut-off of 1. The performance of the assays, assessed by ROC curves, is shown in Figure 2.
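As a minimal numerical sketch of the statistics used above (the example arrays are hypothetical; the PB coefficients are those reported, while the functions are generic textbook definitions rather than the Analyse-it/SAS implementations):

```python
import numpy as np

# Transfer the Platelia cut-off (1.0) to the VIDAS GM scale using the
# Passing-Bablok regression Y = 0.1166 + 0.8811 * X reported above.
vidas_cutoff = 0.1166 + 0.8811 * 1.0
print(f"VIDAS GM cut-off from PB fit: {vidas_cutoff:.2f}")   # ~1.0

def cohen_kappa(a_pos, b_pos):
    """Cohen's kappa for two binary raters (boolean numpy arrays)."""
    a, b = np.asarray(a_pos, bool), np.asarray(b_pos, bool)
    po = np.mean(a == b)                       # observed agreement
    pe = (a.mean() * b.mean()                  # chance agreement
          + (1 - a.mean()) * (1 - b.mean()))
    return (po - pe) / (1 - pe)

def best_youden(scores, truth, cutoffs):
    """Pick the cut-off maximizing J = sensitivity + specificity - 1."""
    return max(
        cutoffs,
        key=lambda c: ((scores[truth] >= c).mean()          # sensitivity
                       + (scores[~truth] < c).mean() - 1))  # specificity - 1

# hypothetical toy data
scores = np.array([0.1, 0.4, 1.2, 0.3, 2.0, 0.2])
truth  = np.array([False, True, True, False, True, False])
print(best_youden(scores, truth, np.linspace(0.0, 2.0, 81)))
```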
Considering all 126 sera, the areas under the curve (AUC) were 0.808 and 0.827 for the VIDAS® GM and Platelia assays, respectively. For the sera collected within 15 days before or after the date of the IA diagnosis, the AUCs of the two assays were better and similar (0.892 and 0.894, respectively). The cut-offs and results corresponding to the highest Youden index (the best balance between sensitivity and specificity) are detailed below.

DISCUSSION

GM detection has been used for IA diagnosis since the early 1990s. The EIA Platelia™ Aspergillus Ag kit rapidly supplanted the latex agglutination Pastorex Aspergillus® kit (Bio-Rad), which was commercialized first. Newly developed single-sample tests have addressed the need to reduce the time to results, allowing earlier targeted therapy and an improved outcome for IA patients (Thornton, 2008; Jenks and Hoenigl, 2020; Mercier et al., 2020). Here, we evaluated a prototype of a novel single-sample GM assay, VIDAS® GM, and compared it to Platelia™ Aspergillus Ag. VIDAS® GM showed excellent stability, repeatability, and reproducibility. The correlation between the two assays was also high (r=0.798), and their diagnostic performance was comparable, with AUCs under the ROC curves of 0.892 and 0.894 for the VIDAS® GM and Platelia assays, respectively (Figure 2). In Figure 1, showing the correlation, the three points with low Platelia™ Aspergillus Ag indices and high VIDAS® GM indices correspond to false negatives of the Platelia assay in probable IA patients (these 3 patients had other sera that were positive with Platelia™ Aspergillus Ag). Importantly, the IA diagnosis was established according to the revised EORTC/MSG criteria (Donnelly et al., 2020), which include GM itself. Thus, the results of the diagnostic performance of the two assays should be interpreted with caution, as the sensitivity may have been overestimated. Nevertheless, the IA diagnosis remains possible when excluding GM from the diagnostic criteria, as our patients fulfilled the risk factors as well as the clinical and radiological EORTC/MSG features. The ROC curves and best Youden index revealed a VIDAS® GM cut-off of 0.36, corresponding to a sensitivity of 0.957, a specificity of 0.857, and an AUC of 0.892 when selecting the sera surrounding the IA diagnosis. This cut-off needs to be confirmed in further, larger studies. The new VIDAS® GM single-sample assay provides a semiquantitative measurement of GM, a widely used biomarker for which biologists and physicians have developed substantial expertise. The main benefit of VIDAS® GM is that it is a simple, ready-to-use system adapted for VIDAS® instruments, thus providing rapid results (70 min). This novel GM single-sample assay showed suitable analytical (stability, repeatability, reproducibility) and clinical performance and a good correlation with that of the Platelia™ Aspergillus Ag assay. These results need to be confirmed in a larger prospective, multicentric study in which the diagnosis may be defined by composite criteria without GM. Such future studies will allow refinement of the cut-off for sera and the analysis of respiratory samples.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The samples collected in this study are part of routine clinical care and registered in the certified biological collection DC-2008-582. This collection is approved by the ethical committee of the Centre Hospitalier Universitaire of Grenoble.
The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
A two-phase model for the non-processive biosynthesis of homogalacturonan polysaccharides by the GAUT1:GAUT7 complex

Homogalacturonan (HG) is a pectic glycan in the plant cell wall that contributes to plant growth and development and cell wall structure and function, and interacts with other glycans and proteoglycans in the wall. HG is synthesized by the galacturonosyltransferase (GAUT) gene family. Two members of this family, GAUT1 and GAUT7, form a heteromeric enzyme complex in Arabidopsis thaliana. Here, we established a heterologous GAUT expression system in HEK293 cells and show that co-expression of recombinant GAUT1 with GAUT7 results in the production of a soluble GAUT1:GAUT7 complex that catalyzes elongation of HG products in vitro. The reaction rates, progress curves, and product distributions exhibited major differences dependent upon small changes in the degree of polymerization (DP) of the oligosaccharide acceptor. GAUT1:GAUT7 displayed >45-fold increased catalytic efficiency with DP11 acceptors relative to DP7 acceptors. Although GAUT1:GAUT7 synthesized high-molecular-weight polymeric HG (>100 kDa) in a substrate concentration-dependent manner typical of distributive (nonprocessive) glycosyltransferases with DP11 acceptors, reactions primed with short-chain acceptors resulted in a bimodal distribution of glycan products that has previously been reported as evidence for a processive model of GT elongation. As an alternative to the processive glycosyltransfer model, a two-phase distributive elongation model is proposed in which a slow phase, which includes the de novo initiation of HG and elongation of short-chain acceptors, is distinguished from a phase of rapid elongation of intermediate- and long-chain acceptors. Upon reaching a critical chain length of DP11, GAUT1:GAUT7 elongates HG to high-molecular-weight products.

Homogalacturonan (HG) is a plant cell wall polysaccharide and a glycan component of more complex polysaccharides and proteoglycans that contributes to the structure and mechanical strength of the wall and has roles in plant growth, development, morphology, and response to biotic and abiotic stress (1–4). HG is a linear homopolymer of 1,4-linked α-D-galactopyranosyluronic acid (GalA) that may be partially methylesterified at O-6 and acetylated at O-2 and O-3. It is the simplest and most abundant glycan in the family of cell wall polysaccharides known as pectins, which include HG, rhamnogalacturonan I, and rhamnogalacturonan II (1). The pectic polysaccharides make contacts with each other (1), cellulose (5–7), hemicelluloses (1, 5), and proteoglycans (2) within the wall through covalent and noncovalent interactions. The presence of pectin provides a barrier to the deconstruction of cellulosic biomass and partially obstructs degradative enzymes from accessing cellulose and hemicelluloses during the processing and saccharification of plant biomass (1, 8–10). Reducing the content of pectin in woody and grass biofuel feedstocks has been shown to enhance biofuel production by increasing both biomass yield and the recovery of sugar (10–12). Our current understanding of pectin synthesis and structure, however, is insufficient to explain its diverse functions in the plant or how reductions to pectin synthesis correlate with phenotypes such as increased plant growth and reduced recalcitrance to deconstruction. The first gene identified as encoding an HG biosynthetic enzyme was galacturonosyltransferase 1 (GAUT1), a CAZy (13) family 8 glycosyltransferase (GT).
The in vitro synthesis of HG, or homogalacturonan:galacturonosyltransferase (HG:GalAT) activity, was mapped to GAUT1 following MS sequencing of Arabidopsis thaliana solubilized membrane proteins (1, 14). Subsequent co-immunoprecipitation, MS sequencing, and bimolecular fluorescence complementation revealed that GAUT1 functions as a disulfide-linked heterocomplex with the homologous protein, GAUT7 (15). In vivo, GAUT1 is truncated at its N terminus by 167 residues and requires GAUT7 for localization in the Golgi (15). Based on these results, GAUT7 was proposed to be a noncatalytic membrane anchor for GAUT1 (14, 15). A more detailed study of the function of the GAUT1:GAUT7 complex and the mechanism of HG synthesis required the establishment of a recombinant protein expression system that could co-express GAUT1 and GAUT7 and properly post-translationally fold and process the active GT complex. Progress in studying plant cell wall biosynthesis has been hampered by difficulties associated with expressing and biochemically characterizing the relevant glycosyltransferases (16, 17). Because plant cell wall biosynthetic GTs are typically large N-glycosylated proteins that contain hydrophobic transmembrane regions and may exist as multienzyme complexes, most successful purifications of such recombinant GTs have been achieved using eukaryotic expression systems, namely Nicotiana benthamiana (18–21), Pichia pastoris (16, 22, 23), and HEK293 cells (14, 24, 25), with the latter being a particularly successful host for recombinant expression and enzymatic characterization of soluble forms of eukaryotic GTs (26–32). Here, we show that the HEK293 cell system can be used to express the heteromeric, disulfide-linked GAUT1:GAUT7 complex from A. thaliana in sufficient quantities to enable a comprehensive characterization of its enzymatic properties. Heteromeric and homomeric GT complexes have been identified in glycan biosynthetic pathways shared by model eukaryotic organisms, including Homo sapiens, Saccharomyces cerevisiae, and A. thaliana (33). GT complexes have diverse biological functions and appear in N- and O-linked glycan, proteoglycan, and glycolipid synthesis pathways (33). In addition to the GAUT1:GAUT7 complex, at least six examples of proven and putative GT complexes involved in the synthesis of plant cell wall glycans are known (34–36). Various hypotheses for the biological significance of GT complexes have been proposed, including enhancement of enzymatic activity, substrate channeling, and Golgi localization (33, 34). Here we demonstrate that the GAUT1:GAUT7 complex can synthesize high-molecular-weight (MW) HG polysaccharides in vitro, and we expand upon the original model of the GAUT1:GAUT7 complex (15). We propose a two-phase model of polysaccharide elongation in which short-chain acceptors are elongated slowly, followed by a rapid elongation phase once the glycans reach an intermediate, critical degree of polymerization. This model reconciles data showing that HG extracted from plant cell walls consists of long polymers containing >100 GalA units (37) with reports that in vitro HG elongation occurs through a nonprocessive mechanism (37–40). The activity reported also provides a basis for the initiation of HG polysaccharides by the GAUT1:GAUT7 complex and suggests that GAUT7 has a previously unrecognized role in contributing to the synthesis of high-MW polysaccharides.

GAUT1 and GAUT7 form a complex when co-expressed in HEK293F cells

The co-immunoprecipitation of GAUT1 with GAUT7 from A.
thaliana solubilized membranes revealed that GAUT1 and GAUT7 function as a GAUT1:GAUT7 heterocomplex covalently linked by disulfide bonds (15). In an effort to obtain sufficient amounts of purified, soluble GAUT1:GAUT7 complex, recombinant GAUT1 and GAUT7 constructs were heterologously co-expressed in HEK293F suspension culture cells, and the GAUT1:GAUT7 complex was characterized following purification. Individual GAUT1 and GAUT7 constructs lacking their N-terminal transmembrane domains (GAUT1Δ167 and GAUT7Δ43) were generated as secreted fusion proteins harboring chimeric N-terminal fusion tags. GAUT1 purified from A. thaliana membranes is N-terminally truncated by 167 residues in vivo (15). The fusion tags were composed of a signal sequence, His8 tag, AviTag, superfolder GFP, and a tobacco etch virus (TEV) protease recognition site, followed by the respective GAUT domains, using a strategy previously employed for the expression of a large library of mammalian glycosylation enzyme expression constructs (32). The constructs were transfected into HEK293F cells, and the secreted fusion proteins were purified from the medium by Ni2+-Sepharose affinity chromatography. Co-expression of GAUT1Δ167 and GAUT7Δ43 (hereafter referred to as GAUT1 and GAUT7) resulted in the production of secreted GAUT1:GAUT7 complex (Fig. 1A, lane 7), which required reducing conditions (+DTT) to separate GAUT1 and GAUT7 into monomers (Fig. 1A, lane 2). Treatment of the complex with peptide:N-glycosidase F (PNGase F) led to a reduction in the molecular weight of both GAUT1 and GAUT7, indicating that both proteins are N-glycosylated (Fig. 1A, lanes 3 and 5). From a 1-liter culture, a total of 47.5 mg of the enzyme complex was purified. Expression of GAUT1 alone resulted in the secretion of a GAUT1 homocomplex that could be purified and identified by SDS-PAGE under nonreducing conditions (Fig. 1B, lane 5). However, heterologous expression of GAUT1 in the absence of GAUT7 also resulted in the formation of large amounts of truncation products and high-molecular-weight aggregates (Fig. 1B, lanes 3 and 5). Expression of GAUT7 alone led to no detectable secretion of the GAUT7 fusion protein into the culture medium. These results suggest that co-expression of GAUT1 and GAUT7 and the formation of a disulfide-bonded heterocomplex are required for proper folding and secretion of the active GAUT1:GAUT7 complex.

Recombinant GAUT1:GAUT7 catalyzes the transfer of GalA residues from UDP-GalA onto homogalacturonan acceptors

The GAUT1:GAUT7 complex isolated from A. thaliana membranes has previously been shown to transfer GalA from UDP-GalA onto HG acceptors (HG:GalAT activity) (15). Here, we employed similar HG:GalAT acceptor-dependent activity assays to examine HG extension by the recombinant GAUT1:GAUT7 complex produced in HEK293F cells. The co-expressed complex, containing intact N-terminal tags and N-glycosylation, was used in all assays. The activity of the purified enzyme was determined by radioactive incorporation of 14C-labeled GalA from UDP-[14C]GalA onto the nonreducing ends of an HG acceptor mix enriched for HG oligosaccharides with a degree of polymerization (DP) of 7-23 (15, 39, 41). To assess the stability of the enzyme, GAUT1:GAUT7 activity was assayed immediately after Ni2+-Sepharose affinity purification and also after 3 days of storage at −80 °C. GAUT1:GAUT7 retained 100% activity after storage at −80 °C (Fig. 2A).
Single-thawed aliquots retained ~40% activity after storage for 15 months. The enzyme complex has a pH optimum of 7.2 (Fig. 2B) and shows maximal activity in the presence of 0.1-1.0 mM MnCl2 (Fig. 2C). A lower level of activity was obtained when GAUT1:GAUT7 was assayed with CoCl2 (Fig. 2C). Manganese ion cofactors have been visualized in X-ray crystal structures of related GT8 enzymes, LgtC (42) and glycogenin (43), and make active-site contacts with UDP-sugar substrates. All of the other divalent and monovalent metal cations tested (NiSO4, FeCl2, CuSO4, CaCl2, ZnSO4, MgCl2, NaCl, and KCl) yielded less than 5% of the activity observed with MnCl2 (Fig. S1). Based on the assays outlined above, a standard reaction condition was defined: 5-min reactions containing 100 nM GAUT1:GAUT7, 1 mM UDP-GalA, 10 μM HG acceptor, and 0.25 mM MnCl2 in a pH 7.2 HEPES buffer containing 0.05% BSA. Using the standard conditions outlined, Michaelis-Menten kinetics were measured for the donor, UDP-GalA, and for the HG acceptor mixture, with the nonvariable substrate held at saturating conditions. The GAUT1:GAUT7 complex shows a standard hyperbolic Michaelis-Menten curve for UDP-GalA, with a Km of 151 μM (Fig. 2D). Substrate inhibition was observed at HG acceptor concentrations >5 μM (Fig. 2E). Substrate inhibition at high concentrations of HG acceptor is expected if GAUT1:GAUT7 functions through an ordered kinetic mechanism in which the UDP-GalA donor binds to the active site prior to the HG acceptor (44, 45). This ordered substrate-binding scheme has also been shown in crystal structures and inhibition assays for homologous GT8-family enzymes (42, 43, 46, 47). kcat values for the elongation of HG acceptors ranged from 0.92 to 1.99 s−1. Results from six independent assays are summarized in Table S1.

GAUT1:GAUT7 synthesizes high-molecular-weight polysaccharides in vitro by elongation of medium-chain HG acceptors (DP ≥ 11) using a distributive mechanism

The mechanism of HG backbone synthesis has not been clearly defined due to conflicting results from prior studies of this activity. HG:GalAT activity from detergent-solubilized microsomal membranes and from GAUT1:GAUT7 partially purified by immunoprecipitation has previously been shown to add between 1 and ~30 GalA residues onto DP13-15 acceptors (14, 37–39), without detection of high-MW polymeric HG. These results suggested that GAUT1:GAUT7 uses a distributive mechanism, in which single GalA units are added during each catalytic event, followed by the release of the HG acceptor, and also that GAUT1:GAUT7 does not synthesize high-MW HG polymers in vitro. In contrast, intact Nicotiana tabacum microsomal membranes produce an HG-containing product of ~105 kDa (39, 48). However, in those experiments, it was not shown whether the high-MW product was free HG polysaccharide or a more complex glycan or proteoglycan containing HG. The synthesis of high-MW HG polysaccharides has not yet been described from in vivo or in vitro experiments. We used recombinant GAUT1:GAUT7 and HG acceptors enriched for homogeneous degrees of polymerization to measure the size of products synthesized by HG:GalATs. We reasoned that if the GAUT1:GAUT7 complex elongates HG acceptors by a distributive mechanism, the chain length of the products synthesized should depend on the ratio of donor to acceptor substrates (49, 50). The degree of HG acceptor elongation by GAUT1:GAUT7 in vitro was determined by size-exclusion chromatography, high-percentage PAGE, and MALDI-TOF.
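Returning to the kinetic parameters reported above: as a minimal fitting sketch, one can assume a standard hyperbolic Michaelis-Menten model for the UDP-GalA donor and a generic uncompetitive substrate-inhibition form for the acceptor. The specific rate laws and the simulated data below are illustrative assumptions, not the authors' fitted models:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    # v = Vmax * [S] / (Km + [S])
    return Vmax * S / (Km + S)

def substrate_inhibition(S, Vmax, Km, Ki):
    # generic substrate-inhibition form:
    # v = Vmax * [S] / (Km + [S] * (1 + [S] / Ki))
    return Vmax * S / (Km + S * (1 + S / Ki))

# hypothetical data: [UDP-GalA] in uM, rate in s^-1 per enzyme
S = np.array([25, 50, 100, 200, 400, 800, 1600], float)
v = michaelis_menten(S, 1.5, 151) * np.random.default_rng(0).normal(1, 0.03, S.size)

(Vmax, Km), _ = curve_fit(michaelis_menten, S, v, p0=(1.0, 100.0))
print(f"Vmax ~ {Vmax:.2f} s^-1, Km ~ {Km:.0f} uM")   # expect ~1.5, ~151
```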
Using 1 mM UDP-GalA and varying amounts of DP11 HG acceptor, reactions were performed under two conditions: 10 μM HG acceptor (100:1 molar excess of UDP-GalA donor) (Fig. 3, A and B) and 100 μM HG acceptor (10:1 molar excess of UDP-GalA donor) (Fig. 3, C and D). DP11 acceptors were rapidly elongated under both assay conditions. The products synthesized were larger at all time points when a 100:1 molar excess of UDP-GalA over HG acceptor was used (Fig. 3, A and B). The high-MW products are measured to be >100 kDa, but pectin MW may be overestimated relative to dextran standards due to the potential for aggregation or anomalous behavior in size-exclusion columns (51). As observed in high-percentage polyacrylamide gels, in which individual bands corresponding to HG products up to DP30 could be distinguished, the DP11 acceptor was elongated to an estimated DP of 30–50 (<10 kDa) when incubated at a 10:1 donor/acceptor ratio (Fig. 3D). Comparison of these two reaction conditions showed that in vitro HG synthesis by GAUT1:GAUT7 matches the results expected for a distributive GT mechanism, in which the narrow product distribution has a DP dependent on the available donor/acceptor ratio, as discussed for polysialyltransferases (49). Overnight incubation of GAUT1:GAUT7 with donor/acceptor ratios ranging from 10 to 10,000 further demonstrated that GAUT1:GAUT7 synthesizes high-MW products given available UDP-GalA (Fig. S2). Labeling of the reducing ends of HG acceptors with the fluorescent tag 2-aminobenzamide (2-AB) has been used to detect HG elongation using high-performance anion-exchange chromatography (40). Fluorescent labeling of oligosaccharides has also been shown to enhance detection of oligosaccharides by MALDI-TOF (24). We therefore used MALDI-TOF to test the pattern of elongation of a DP15 fluorescently labeled HG acceptor in reactions incubated from 5 min up to 4 h. Over time, products up to DP30 were detected (Fig. 3E). The apparent Poisson distribution of HG products formed over time is also typical of a distributive mechanism of synthesis (24). However, it should be noted that, as observed in polyacrylamide gels (Fig. 3D), products larger than DP30 were also synthesized under these reaction conditions, indicating that the apparent Poisson distribution observed by MALDI-TOF was partially due to the loss of higher-MW signals. Longer-chain oligosaccharides may be poorly ionized or may have precipitated when exposed to acidic conditions during the labeling procedure.

Elongation of short-chain HG acceptors (DP ≤ 7) occurs with low efficiency relative to longer-chain acceptors (DP ≥ 11)

Prior assays of HG:GalAT activity from N. tabacum and Petunia axillaris solubilized membranes demonstrated a preference of the enzyme for exogenous HG acceptors of DP ≥ 10–12, but also indicated that acceptors as short as DP5 can be elongated (38, 39). To more clearly define the acceptor specificity of GAUT1:GAUT7, we compared the rates of elongation of homogeneous HG acceptors enriched for DPs of 3, 7, 11, and 15. GAUT1:GAUT7 elongates HG acceptors at least as small as DP3 (Fig. 4A). GalA transfer is most rapid using longer-chain acceptors (DP ≥ 11). The rate of transfer to DP11 and DP15 acceptors was approximately equal from 5 to 60 min. Activity was detected using shorter acceptors (DP7 and DP3), but the initial rates of synthesis were low relative to the longer-DP acceptors.
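Before turning to the acceptor-length comparison in more detail, the donor/acceptor-ratio dependence described above for DP11 acceptors can be illustrated with a simple stochastic sketch: if every transfer event picks an acceptor chain at random, the added-length distribution is approximately Poisson with mean equal to the donor/acceptor ratio. This is an idealization that ignores chain-length preferences and enzyme kinetics; it captures the ratio scaling, not exact product sizes:

```python
import numpy as np

def distributive_elongation(n_chains, donor_per_acceptor, start_dp, seed=0):
    """Scatter donor_per_acceptor * n_chains single-GalA transfer events
    uniformly at random over n_chains acceptor chains (pure distributive
    mechanism: one addition per binding event, then release)."""
    rng = np.random.default_rng(seed)
    n_events = int(donor_per_acceptor * n_chains)
    hits = np.bincount(rng.integers(n_chains, size=n_events),
                       minlength=n_chains)
    return start_dp + hits

for ratio in (10, 100):
    dp = distributive_elongation(5000, ratio, start_dp=11)
    print(f"donor/acceptor {ratio}:1 -> mean DP {dp.mean():.1f}, "
          f"5th-95th percentile {np.percentile(dp, [5, 95]).astype(int)}")
```

The narrow, ratio-limited distributions this produces mirror the behavior observed for DP11 acceptors at 10:1 versus 100:1 donor excess.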
The highest rates of synthesis for longer-chain acceptors were detected in the initial linear phase of the reaction, as expected for a standard steady-state reaction progress curve. A lag phase was observed during the elongation of DP7 and DP3 acceptors, and longer incubation periods were required to detect above-background levels of activity. Reaction kinetics were measured using increasing amounts of DP11 (Fig. 4B) and DP7 (Fig. 4C) HG acceptors. The reaction kinetics using the DP11 HG acceptor appear similar to the results observed using the HG acceptor mix (Fig. 2E), with inhibition observed at concentrations above 5 µM. Because activity with the DP7 acceptor was near background level when measured at 5 min, a 30-min incubation time was used. Standard hyperbolic Michaelis-Menten kinetics were observed under these conditions. The catalytic efficiency, defined as kcat/Km, of a DP11 acceptor was 45-fold higher than that of a DP7 acceptor, representing a strong preference for longer-chain acceptors. This measurement may underestimate the difference in catalytic efficiency between DP7 and DP11 acceptors because the low initial rates and the lag phase observed only with short-chain acceptors make it difficult to calculate true initial rates of synthesis.

Elongation of short-chain HG acceptors and de novo synthesis in the absence of exogenous oligosaccharide acceptors leads to a bimodal product distribution with minimal observable intermediates

As shown in Fig. 4, initial rates of synthesis using short-chain HG acceptors (DP ≤ 7) are low relative to longer-chain acceptors. To test whether the DP7 acceptor is elongated by a reaction mechanism distinct from that of longer-chain acceptors, the degree of DP7 acceptor elongation was measured by size-exclusion chromatography, high-percentage PAGE, and MALDI-TOF. Reactions were carried out using 100 µM acceptor (10:1 molar excess of UDP-GalA donor) (Fig. 5, A and B). Unlike the results observed using a DP11 HG acceptor (Fig. 3), a bimodal distribution of high-MW and short-chain products was synthesized during the elongation of a DP7 acceptor, even under conditions of a low molar ratio of UDP-GalA to acceptor. Polysaccharides of intermediate molecular weights were not observed. When detecting low-MW products, the addition of only 1-2 GalA residues was observed at all time points (Fig. 5B and Fig. S3B). Unlike DP11 elongation, a Poisson distribution of HG oligomers was not observed. Instead, the intensity of high-MW products increased with reaction time from 1 to 12 h. A similar bimodal product distribution was observed with donor/acceptor ratios ranging from 10 to 200 (Fig. S4).

Having identified that GAUT1:GAUT7 can elongate short-chain acceptors as small as DP3, the complex was tested for the ability to initiate HG synthesis de novo in the absence of exogenously added acceptors. Prior attempts to identify de novo initiation of HG using detergent-solubilized membrane fractions yielded no measurable activity, leading to the conclusion that the enzyme only functions in the elongation step of HG biosynthesis (1). Three monoclonal antibodies previously shown to react with HG (CCRC-M38, CCRC-M131, and JIM5) (52) were used in ELISAs to determine whether GAUT1:GAUT7 could synthesize HG de novo. Following incubation with 1 mM UDP-GalA, all three anti-HG antibodies reacted with GAUT1:GAUT7-synthesized product in a time- and concentration-dependent manner (Fig. 5C).
Control ELISAs confirmed that the anti-HG antibodies detected HG acceptor controls but had no reactivity toward GAUT1:GAUT7 itself or toward UDP-GalA (Fig. 5D). To the best of our knowledge, this is the first evidence that GAUT1:GAUT7 can synthesize HG de novo. The de novo synthesis product synthesized in overnight reactions was of high MW, similar to the products of overnight reactions with DP7 and DP3 acceptors (Fig. 5E). The de novo synthesis product is of a more uniform MW than the relatively polydisperse product formed following elongation of DP7 acceptors. The polysaccharides synthesized de novo were sensitive to digestion by the HG-specific enzyme, endopolygalacturonase (Fig. S5). These results demonstrate that, in addition to elongation of acceptors of all degrees of polymerization, GAUT1:GAUT7 can initiate the synthesis of high-MW HG de novo.

[Displaced Fig. 5 legend, panels C-E: C, detection of HG polysaccharides synthesized de novo by ELISA; 30-µl reactions containing 100 nM GAUT1:GAUT7 and 1 mM UDP-GalA were incubated for 5 min or 12 h, and aliquots of 1 or 0.1 µl (1/30 or 1/300 of the total reaction volume) were boiled and spotted onto a 96-well plate; reaction products were detected by anti-HG antibodies CCRC-M38, CCRC-M131, and JIM5 but showed no reactivity toward anti-xylan antibody CCRC-M149. D, anti-HG antibody ELISA controls; in C and D, error bars represent the S.D. of four replicate measurements from two independent experiments. E, high-percentage (30%) polyacrylamide gel of products synthesized de novo or in the presence of 100 µM DP3 or DP7 acceptors in 24-h reactions (5 µl loaded), stained with a combination of alcian blue/silver.]

A two-phase model of distributive elongation is favored over a processive model for HG synthesis

During in vitro HG polymerization by GAUT1:GAUT7, small differences in the chain length of HG acceptors appeared to affect the mechanism of elongation. We investigated whether short- and long-chain acceptors are elongated using distinct mechanisms by further defining the effect of acceptor DP on product size distribution. GAUT1:GAUT7 was incubated for 12 h in the presence of a low donor/acceptor ratio (10:1) of intermediately sized HG acceptors ranging from DP7 to DP11 (Fig. 6A). As previously observed, elongation of the DP11 acceptor resulted in products ranging in size from DP11 to a DP of ~30-50. The majority of the starting acceptor was incorporated into larger-sized HG products. In contrast, all acceptors of DP ≤ 10 were elongated relatively poorly, as indicated by the appreciable amount of acceptors remaining unelongated even following a 12-h incubation. Quantitation using fluorescently tagged acceptors indicated that <3% of the original DP7 acceptor was elongated to high-MW products in overnight reactions (Fig. S7). Product size was inversely related to the size of the HG acceptor, with high-MW polymeric HG observed following incubation with smaller acceptors. Contrary to the DP10 and DP11 acceptors, elongation of DP7 and DP8 acceptors resulted in a bimodal product distribution, with an intermediate distribution for DP9 (Fig. 6A).

Three observations appeared to support the hypothesis that GAUT1:GAUT7 uses distinct processive and distributive mechanisms, depending on the chain length of the acceptor, and that short-chain acceptors were elongated by a processive mechanism. First, a bimodal product distribution formed over time (Fig. 5, B and E). The archetypal processive model for GT activity proposes that a single acceptor molecule remains tightly bound without dissociating from the enzyme until after many rounds of elongation, leading to the formation of high-MW products with minimal observable intermediates (49, 50, 53). Second, a lag phase was observed during early points in the progress curve (Fig. 4A). The lag phase has been proposed to be a feature common to processive enzymes due to the low affinity of short-chain acceptors that are not capable of filling a requisite number of "acceptor subsites" within an extended active-site domain (50, 54). Third, the proportion of the starting acceptor pool consumed during the reaction was low, measured at <3% (Fig. S7). In traditional measures of processivity, if <10% of the total starting acceptor pool is elongated, then it has been assumed that each acceptor has only associated with the enzyme in a single priming event (53, 55). All three of these observations were suggestive of the processive model but did not directly demonstrate the existence of tight enzyme-acceptor binding complexes demanded by that model. Because these results were inconsistent with the rapid elongation of DP ≥ 11 acceptors in a nonprocessive manner, we considered that the processive model may not accurately describe the elongation of HG by GAUT1:GAUT7.

To directly test for evidence of a processive elongation mechanism in the presence of low-DP acceptors, a competition assay was designed. GAUT1:GAUT7 was preincubated with UDP-GalA and a DP7 acceptor for 15 min at a 1000-fold excess of acceptor over enzyme. We reasoned that 15 min was a sufficient preincubation time to allow all enzyme molecules to bind to the acceptor and to begin processive synthesis. Following preincubation, a DP11 acceptor was added, and incubation was continued for another 5 min (Fig. 6B, lanes 4 and 5). If the DP7 HG served as an acceptor for processive catalysis, the DP11 acceptor would not have been able to compete for binding to GAUT1:GAUT7 or to serve as an acceptor. The results show, however, that the DP11 HG acceptor was able to compete for enzyme binding and was elongated during the 5-min reaction period. The DP11 acceptor was elongated to the same chain lengths as in two control reactions: a standard 5-min reaction with no competing DP7 acceptor added during the preincubation period (Fig. 6B, lanes 6 and 7) and a 5-min reaction in which DP7 and DP11 acceptors were added at the same time (lanes 8 and 9). These results argue against the hypothesis that short-chain acceptors are synthesized by a processive mechanism, in which the growing acceptor should have formed tightly binding complexes with the enzyme, making it unavailable for binding to DP11 acceptors. Furthermore, there was no effect on the size distribution of the products synthesized by elongation of DP11 acceptors in the presence of DP7. This result is consistent with the enzyme having a strong preference for binding to longer-chain acceptors and releasing the acceptor following each round of GalA transfer.
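The alternative interpretation developed in the next paragraphs, a single distributive mechanism whose per-transfer rate jumps once the chain reaches a critical DP, can be illustrated with a toy stochastic simulation. The rate constants, acceptor counts, and event counts below are hypothetical, chosen only to show that bimodality with few intermediates requires no processivity:

```python
import numpy as np

rng = np.random.default_rng(1)

def two_phase_distributive(dp0=7, critical_dp=11, k_slow=1.0, k_fast=100.0,
                           n_acceptors=2000, n_transfers=40000):
    """Toy two-phase distributive elongation: every event adds a single
    GalA to one chain and releases it, but chains at or above the
    critical DP react k_fast/k_slow times faster (hypothetical ratio)."""
    dp = np.full(n_acceptors, dp0, dtype=int)
    for _ in range(n_transfers):  # one UDP-GalA consumed per event
        weights = np.where(dp >= critical_dp, k_fast, k_slow)
        chain = rng.choice(n_acceptors, p=weights / weights.sum())
        dp[chain] += 1
    return dp

dp = two_phase_distributive()
for label, mask in [("unelongated/short (DP 7-10)", dp <= 10),
                    ("intermediates (DP 11-29)", (dp > 10) & (dp < 30)),
                    ("high-DP tail (DP >= 30)", dp >= 30)]:
    print(f"{label}: {mask.mean():.0%}")
```

With a 100-fold rate advantage above the critical DP, most chains remain within a few residues of the starting DP7 while the small subpopulation that crosses DP11 is rapidly driven into a high-DP tail, leaving few chains at intermediate lengths at any sampled time point.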
The competition assay argues against the hypothetical processive elongation model because the results suggest that GAUT1:GAUT7 and HG acceptors do not form tight enzyme-acceptor binding complexes. An alternative, two-phase model proposes that acceptors of all sizes are elongated by a distributive mechanism and that the bimodal product distribution results from large differences in catalytic efficiency between shorter- and longer-chain acceptors. Small acceptors are inefficiently elongated by the enzyme, as evidenced by the >45-fold difference in catalytic efficiency between DP7 and DP11 acceptors. Acceptors are only rapidly elongated after reaching a critical DP, estimated to be DP11 (Fig. 6A). The relative inefficiency of short-chain acceptors results in a slow rate of synthesis during the early phase of chain elongation. In reactions containing only DP7 acceptors, high-MW products are observed because there is an effective increase in the donor/acceptor ratio for the small number of acceptors that reach the critical DP, causing them to become rapidly elongated.

[Displaced Fig. 6B legend fragment: for preincubation controls (lanes 6 and 7), no acceptor was added during the preincubation phase, and DP11 acceptor was added after 15 min; for simultaneous incubation reaction controls (lanes 8 and 9), no acceptor was added during the preincubation phase, and a mix of 100 µM DP7 and the indicated DP11 acceptor was added after 15 min; the DP11 standard (lane 10) shows the DP11 acceptor and the presence of background bands due to minor impurities.]

Synthesis of high-MW HG depends upon GAUT1:GAUT7 complex formation and electrostatic interactions

HG is a negatively charged polymer. It has been shown for other charged polymers, including polysialic acid (49) and DNA (56), that electrostatic substrate-enzyme interactions affect product size distribution and the mechanism by which enzymes maintain contact with their charged substrates. We hypothesized that electrostatic HG-enzyme interactions may be involved in acceptor binding and elongation by GAUT1:GAUT7. Prior evaluation of a homology model of the GAUT1 GT8 domain using LgtC from N. meningitidis as a template identified a patch of positively charged residues near the active site of GAUT1 that could create an extended acceptor-binding groove (57). We predicted that incubation with NaCl would disrupt these interactions and limit the ability of GAUT1:GAUT7 to maintain the contacts with the HG acceptor needed for efficient transfer.

We tested whether the addition of NaCl affected product formation by GAUT1:GAUT7. The presence of 100 mM NaCl in reactions containing a 1000:1 ratio of UDP-GalA to DP11 HG acceptor inhibited the synthesis of high-MW HG products, even under long reaction times (Fig. 7A). In reactions containing a 10:1 donor/acceptor ratio and 0, 50, or 100 mM NaCl, elongation of a DP7 acceptor was likewise inhibited with increasing salt concentration (Fig. 7B). Individual bands of intermediate-sized products (DP > 20) were detectable following reactions containing NaCl, leading to the loss of the bimodal product distribution. This result also argues against DP7 being elongated processively. Short-chain elongation products were visible in both reactions, but the addition of NaCl to the reaction appears to prevent high-MW product formation. Synthesis of intermediate-sized HG upon elongation of a 10:1 donor/acceptor ratio of DP11 acceptor was only weakly affected by the presence of NaCl (Fig. 7B).
For both DP7 and DP11 acceptors, product size was limited to approximately DP30-50 when GAUT1:GAUT7 was assayed with 100 mM NaCl. We propose that the electrostatic interactions guide the rapid elongation phase by orienting longer-chain acceptors within the active site for GalA transfer to the nonreducing end. The NaCl concentrations used here are within the physiological range and may not otherwise be expected to have such a strong inhibitory effect, as many glycosyltransferases are regularly purified, stored, and assayed under similar NaCl concentrations (32, 58).

The possibility that previously unidentified structural domains are required for the synthesis of high-MW HG opens the possibility that GAUT7 also contributes to the function of the complex. The activity of GAUT1 expressed and purified in the absence of GAUT7 was tested. Upon incubation of GAUT1 with DP11 and DP7 acceptors (Fig. 7C), high-MW product was not observed. The size of the products synthesized by GAUT1 was limited to a DP of ~30-50, even in reactions containing a large excess of donor (10 µM acceptor, 100:1 donor/acceptor ratio). The consistency of this result with the reduced product sizes observed following incubation of GAUT1:GAUT7 with NaCl suggests that GAUT7 also contributes to an acceptor-binding domain/pocket required for high-MW polymerization. Due to poor expression of GAUT1 and GAUT7 as individual enzymes, the nature of the contribution of GAUT7 to HG synthesis remains to be more thoroughly investigated.

Soluble expression of plant cell wall glycosyltransferases using the HEK293F cell system

Plant cell wall matrix polysaccharides are synthesized in the secretory pathway by the coordinated efforts of an estimated 200 glycosyltransferases that generate the backbone and side-chain linkages of pectins and hemicelluloses, including xylans and xyloglucan. A study published in 2009 (16) identified a total of nine plant cell wall GTs that had in vitro activities verified by expression in a heterologous system. In the years since that publication, numerous additional plant cell wall GT in vitro activities have been demonstrated and mapped to individual genes or multigene families (18-24, 58-65). Similar to activities assayed from native plant membranes, the use of recombinant enzymes produced in microsomal membranes from sources such as P. pastoris or N. benthamiana may suffer from difficulties including low protein yields and unknown concentrations of the protein of interest (61). Here, the HEK293F cell expression system was used for robust co-expression of a disulfide-linked, enzymatically active GT complex in a soluble secreted form. In addition to the expression of the GAUT1:GAUT7 heterocomplex, the HEK293F cell system has been used to express the plant cell wall GTs xylan synthase-1 (Xys1) and fucosyltransferase 1 (Fut1) (24, 25).

A model for HG synthesis by GAUT1:GAUT7

The results described here expand upon previous reports of HG synthesis by demonstrating that the GAUT1:GAUT7 complex can synthesize high-MW polysaccharides in vitro using a distributive mechanism. Previously, intact and partially solubilized N. tabacum membranes were shown to incorporate [14C]GalA into high-MW polysaccharides with an apparent mass >100 kDa (39, 48). It was uncertain whether the large size of the polysaccharide products detected in cellular membrane preparations was due to the initiation of polymeric HG or due to incorporation of [14C]GalA into large endogenous acceptors of an unknown size.
We demonstrate here that, as long as a sufficient concentration of UDP-GalA is available, recombinant GAUT1:GAUT7 is capable of producing high-MW polysaccharides in vitro. A hypothetical model of distributive HG elongation is depicted in Fig. 8 as a series of five steps. Binding of UDP-GalA (step 1) is followed by HG acceptor binding (step 2). The binding of longer-chain acceptors is enhanced by structural features of the GAUT1:GAUT7 complex, including charged interactions within an extended acceptor-binding groove that also appears to require the presence of GAUT7. GalA is transferred to the nonreducing end of the HG acceptor (step 3). The HG acceptor, elongated by a single GalA residue, departs from the active site (step 4), followed by departure of UDP (step 5). The conformational state of the enzyme is then reset for the next round of glycosyl transfer. [In Fig. 8, numbered GalA units in the acceptor molecule represent subsites within the proposed extended acceptor-binding domain.]

The two-phase model is more complete than traditional descriptions of distributive glycosyltransfer mechanisms because it accommodates the observation of distinct, acceptor size-dependent slow and rapid elongation phases. This phenomenon may be common to GTs but would not be observed unless the reaction progress is independently assayed with a wide range of oligosaccharide acceptors. Elongation of short-chain acceptors exhibits several characteristics that have previously been proposed to be common to processive GTs, including a bimodal product distribution, the lack of intermediate-sized products, a lag phase during the early time points of the reaction progress, and large proportions of the starting acceptors remaining unelongated (49, 50, 53, 55, 66). However, here we show that HG acceptors of DP ≥ 11 are elongated in vitro by a distributive mechanism with greater catalytic efficiency than smaller HG acceptors.

For several polymerases that favor longer acceptors, a model has been proposed in which oligosaccharides must be elongated to a certain minimum length before the transferase exhibits maximum activity. Longer acceptors can fill acceptor-binding subsites within the active-site groove (50, 54, 67). This "acceptor subsites" model is compatible with the slower rates of synthesis that have been observed with DP ≤ 7 HG acceptors. Only acceptors longer than the critical DP can efficiently bind to the active site and be rapidly elongated. The nearly identical rates of transfer to DP11 or DP15 HG acceptors support an HG acceptor size of DP11 as being sufficient for maximum activity. The model that we present expands upon the hypothesis presented during the analysis of the processive bacterial galactan polymerase GlfT2 (50) because we argue that acceptor subsites and kinetic lag phases are features that are not necessarily limited to processive polymerases.

Processive polymerization mechanisms may require enzymes that physically constrain and enclose the growing acceptor chain, such as cellulose synthase (68), DNA-binding enzymes (56, 69), or, putatively, multi-transmembrane-channel-forming GTs, such as the xyloglucan backbone synthase CSLC4 (70, 71). Many type II GTs have a single GT domain and have no obvious structural basis for confining the growing glycan chain within a processive tunnel. Rather than inferring a processive model by induction from end-point product distribution data, the competition assay that we present (Fig. 6)
is part of a recent push to find direct assays to test for the formation of enzyme:acceptor complexes that would serve as positive evidence for the archetypal processive model (50). The competition assay showed that longer, more efficient acceptors can compete for binding to the enzyme following a preincubation period sufficient for formation of processive enzyme:acceptor complexes. The results of this assay show that GAUT1:GAUT7, at least in vitro, does not remain tightly bound to the HG polymer during elongation, a key feature that distinguishes distributive from processive polymerases.

A two-phase model with distinct differences in elongation rates between short-chain and longer-chain acceptors has been described previously for the nonprocessive in vitro activity of K92 polysialyltransferase from Escherichia coli, which has a similar critical acceptor chain length of approximately DP10-12 (72). Heparosan synthase from Pasteurella multocida provides a similar example of a nonprocessive transferase in which a slow initiation phase is observed (73). Two-phase elongation provides an alternative hypothesis for the interpretation of bimodal or polydisperse product distributions that may be observed in studies of GT activity. As an example, levansucrase from Bacillus subtilis was determined to have a processive activity on the basis of a bimodal product distribution, which was lost upon the addition of DP16 acceptors (66). If levansucrase functions similarly to GAUT1:GAUT7, then longer-chain acceptors are more efficiently elongated. The bimodal product distribution would result from large differences in catalytic efficiency and acceptor preference based on chain length. The buildup of inefficiently elongated short-chain acceptors and rapid elongation as soon as the acceptor reaches a critical DP can explain non-Poissonian product distributions for GTs that have not been directly shown to be processive.

In vitro polymer initiation and extension in the absence of a glycan primer has been demonstrated for at least four bacterial capsular polysaccharides (74-77), including two negatively charged polysaccharides, those of Neisseria meningitidis serogroup X and serogroup A. The de novo synthesis products are both synthesized as high-MW polysaccharides with relatively homogeneous product dispersity compared with acceptor-primed reactions (75, 76). For the de novo initiation of HG polysaccharides, a proposed candidate for the primer is a molecule of UDP-GalA, similar to elongation of β-1,4-linked GlcNAc oligomers synthesized by hyaluronan synthase that retain UDP at the reducing end (78, 79). Alternatively, the primer could be monomeric GalA formed following hydrolysis of UDP-GalA. The large size of the products synthesized de novo, containing several hundred GalA units per reducing end, has prevented the primer for de novo synthesis from being identified in the current study.

Anionic polymers, such as HG, may have electrostatic enzyme-acceptor interactions that aid polymerization by promoting the proper binding and orientation of growing acceptor chains. The proposed positively charged acceptor-binding groove was identified using a homology model of the GT8 domain of GAUT1 (57). Future structural and acceptor-binding studies will be necessary to confirm the existence of this extended binding groove. Basic residues located in the acceptor-binding pocket influence processivity and product dispersity in a polysialyltransferase from N. meningitidis (49).
Incubation with NaCl disrupts polymer sliding in enzymes that bind to DNA, such as the endonuclease BamHI (56). In the case of the nonprocessive activity of GAUT1:GAUT7, strong inhibition of activity by NaCl may be due to disruption of electrostatic interactions that contribute to the ability of acceptors to bind to the charged acceptor-binding groove.

The model described in this report provides a basis for the synthesis of high-MW HG polysaccharides. GAUT1:GAUT7 uses a distributive mechanism in which rapid elongation requires that the acceptor reach an intermediate chain length, approximately DP11. The significance of this acceptor size, including comparisons with other GTs that function using multiphase elongation mechanisms, will require further investigation. The relative inefficiency of elongation of short-chain acceptors may provide a mechanism by which HG polymer size and chain initiation are regulated in vivo. This model is consistent with previous reports that HG:GalAT activity is nonprocessive (37-40). Whereas the precise role of GAUT7 remains uncertain, GAUT1 appears to be unable to synthesize high-MW products in the absence of GAUT7. Proper folding and secretion of the complex requires co-expression of GAUT1 with GAUT7, but GAUT7 also appears to have a functional role in contributing to HG polymer synthesis. The results reported here provide a comprehensive characterization of the in vitro distributive activity of a heterologously expressed plant cell wall polysaccharide biosynthetic GT complex, with implications for investigations of kinetic mechanisms and processivity of GT heterocomplexes.

Experimental procedures

Sequences were verified following cloning into entry and expression vectors. Expression plasmids were purified using PureLink HiPure Plasmid Gigaprep kits (Invitrogen). HEK293F suspension culture cells grown to a cell density of 2.5 × 10⁶ in FreeStyle 293 expression medium (Thermo Fisher Scientific) were used for transfection. Transfection of DNA, either GAUT1Δ167 alone or GAUT1Δ167 co-transfected with GAUT7Δ43, was done at a total concentration of 3 µg per ml of total culture volume with polyethyleneimine (9 µg/ml). Cells were incubated in a humidified shaking 37°C CO2 incubator at 150 rpm for 24 h before 1:1 dilution in FreeStyle 293 medium supplemented to a final concentration of 2.2 mM valproic acid (Sigma). Medium containing secreted protein was collected after a 6-day total incubation time. Secreted proteins were purified from suspension culture medium by nickel-affinity purification using HisTrap HP (GE Healthcare) columns connected to an ÄKTA FPLC system (GE Healthcare). Vacuum-filtered culture medium was injected into HisTrap HP columns at 1 ml/min, washed with column buffer, and eluted using a gradient from 20-300 mM imidazole (20 column volumes). Fractions containing protein were exchanged into a storage buffer containing 50 mM HEPES, pH 7.2, 0.25 mM MnCl2, and 20% glycerol using a PD-10 desalting column (GE Healthcare) and concentrated using a 30-kDa MW cutoff Amicon Ultra centrifugal filter unit (Millipore). The concentration of nickel-affinity-purified proteins was determined by UV-visible spectroscopy (Nanodrop) using a 10-cm path length cuvette, and purification was confirmed by SDS-PAGE on a 4-15% gradient gel (Bio-Rad).
Synthesis of UDP-[14C]GalA and HG oligosaccharide acceptors

UDP-D-[14C]galactopyranosyluronic acid was synthesized enzymatically from UDP-D-[14C]glucopyranosyluronic acid (PerkinElmer Life Sciences) as described (15, 80). The batch-specific activity value of 249 mCi/mmol was used to convert cpm readings from scintillation counting to the pmol values reported in activity assay figures. Nonradiolabeled UDP-D-galactopyranosyluronic acid was purchased from CarboSource Services. The HG acceptor mix, enriched for HG oligosaccharides of DP7-23, was generated by partial digestion of polygalacturonic acid with endopolygalacturonase, and the purity was confirmed by HPAEC-PAD as described (39). HG acceptors enriched for homogeneous degrees of polymerization of 7-15 were purified by HPAEC-PAD as described (39). Trigalacturonic acid (DP3) was purchased from Sigma.

Deglycosylation, fusion tag removal, and protein gel electrophoresis

Recombinant TEV protease and PNGase F were expressed as N-terminal His/GFP fusion proteins in E. coli and purified by nickel-affinity chromatography as described (26). Recombinant proteins were incubated at a 1:10 ratio of TEV and PNGase F relative to the GAUT1:GAUT7 complex overnight at room temperature to cleave the fusion tags and N-glycan structures. Protein samples (4 µg) were mixed with Laemmli sample buffer (81) containing 25 mM DTT to reduce the samples. DTT was omitted in nonreduced samples. Samples were boiled for 10 min and resolved on a 4-15% gradient Tris-glycine SDS-PAGE gel. Electrophoresis was performed in a running buffer containing 25 mM Tris, 192 mM glycine, and 0.1% (w/v) SDS at 150 V constant voltage.

HG:GalAT activity radiolabeled filter assays

Unless otherwise noted, HG:GalAT activity was measured under standard conditions in 30-µl reactions containing 100 nM GAUT1:GAUT7, 5 µM UDP-[14C]GalA, 1 mM total UDP-GalA, 10 µM HG acceptor, HEPES buffer, pH 7.2, 0.25 mM MnCl2, and 0.05% BSA. Reactions were incubated at 30°C, and 5 min was used as a standard time for linear-range specific activity comparisons. As indicated, reactions were modified from the standard conditions to include acceptors enriched for a homogeneous degree of polymerization or higher concentrations of acceptors to modify the donor/acceptor ratio, or were conducted in the absence of exogenous acceptors for samples labeled "de novo" synthesis. HG:GalAT activity was measured using a filter assay as described (82) with modifications. Reactions were terminated by the addition of 400 mM NaOH (5 µl). Reactions were spotted onto 2 × 2-cm squares of Whatman 3MM chromatography paper coated with cetylpyridinium chloride. Filters were air-dried for 5 min prior to three 15-min rounds of washing in a 4-liter bath containing 150 mM NaCl, for a total washing period of 45 min. Filters were air-dried for at least 2 h prior to scintillation counting. Scintillation counting was performed using a PerkinElmer Life Sciences Tri-Carb 2910 TR liquid scintillation counter (14C program, 1-min count time per sample). Background cpm was measured in each assay using T0 samples, in which NaOH was added to the reaction mixture prior to the addition of enzyme. Background cpm was subtracted from net cpm readings for all reaction samples, and cpm was converted to total pmol transferred.
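A worked example of the cpm-to-pmol conversion just described follows; this is a sketch, and the 14C counting efficiency is a hypothetical value rather than one reported in the text:

```python
DPM_PER_CI = 2.22e12        # disintegrations per minute per curie (definition)
SPECIFIC_ACTIVITY = 249e-3  # Ci/mol, i.e. the batch value of 249 mCi/mmol
DPM_PER_PMOL = SPECIFIC_ACTIVITY * DPM_PER_CI / 1e12  # ~0.553 dpm per pmol

def cpm_to_pmol(net_cpm, counting_efficiency=0.95):
    """Convert background-subtracted cpm to pmol of GalA transferred.
    counting_efficiency (cpm/dpm) is hypothetical, not a reported value."""
    return net_cpm / counting_efficiency / DPM_PER_PMOL

print(cpm_to_pmol(1000))  # 1000 net cpm -> ~1.9e3 pmol at 95% efficiency
```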
Alcian blue-stained PAGE

Reactions were incubated under the standard conditions described above, except that UDP-[14C]GalA was omitted from the reaction buffer. Reactions were terminated by boiling. Aliquots containing an estimated 200-500 ng of total polysaccharide were analyzed. PAGE and visualization by a combination of alcian blue and silver nitrate staining were performed as described, with modifications (14). Samples were mixed with a loading buffer (final concentration 0.1 M Tris, pH 6.8, 0.01% phenol red, and 10% glycerol), loaded onto a stacking gel (5% acrylamide (Bio-Rad), 0.64 M Tris, pH 6.8), and separated over a 30% acrylamide resolving gel (0.38 M Tris, pH 8.8, 30% acrylamide) at 17.5 mA for 60 min. The gel was stained for 20 min with 0.1% alcian blue (Sigma) in 40% ethanol and washed with at least three changes of water until background staining was eliminated. Silver staining and development were performed using a silver staining kit (Bio-Rad). Staining was terminated by the addition of 5% acetic acid.

Size-exclusion chromatography of HG:GalAT products

Scaled-up reactions with a total volume of 400 µl were incubated using concentrations of reagents consistent with small-scale, nonradioactive, standard-condition HG:GalAT assays. At each indicated time point, a 50-µl aliquot was removed, and the reaction was stopped by boiling. Denatured enzyme was removed from the sample by centrifugation at 12,000 rpm for 5 min, and the polysaccharide sample was frozen at −20°C until analysis. For T0 samples, an equivalent aliquot was removed prior to the addition of UDP-GalA to the reaction. Size-exclusion chromatography (SEC) was performed using a Superose 12 10/300 GL column connected to a Dionex system at a flow rate of 0.5 ml/min in 50 mM ammonium formate buffer. The refractive index of polysaccharide products was measured. Dextran standards (270, 150, 50, and 12 kDa) (Sigma) were used as molecular mass standards. The peak retention volume of each dextran standard is indicated by an arrow at the top of each SEC figure. The molecular weight estimations of pectins may be overestimates due to anomalous behavior of pectins compared with dextrans during SEC (51). For 2-AB-labeled polysaccharides measured by fluorescence detection, reactions were incubated for 12 h, polysaccharide reducing ends were chemically labeled with 2-AB (described below), and reaction products were injected onto the SEC column under the same conditions as above. An RF 2000 fluorescence detector, under high-sensitivity (×16) settings, was used for detection. Quantitation of high-MW polysaccharide was performed by fluorescence signal integration of product and acceptor peaks in Chromeleon version 6.80 software (Dionex).

MALDI of 2-AB-labeled HG oligosaccharide products

Following incubation under the indicated reaction conditions and times, HG products were incubated with 0.2 M 2-AB and 1 M sodium cyanoborohydride in 10% acetic acid to chemically label the reducing ends of HG oligosaccharides, as described (24, 40). Samples were dialyzed four times against water in 3500 MW cutoff tubing (VWR Scientific) and recovered by lyophilization. Retention of HG oligosaccharides during dialysis has been described (51).

ELISA of HG:GalAT activity using anti-HG monoclonal antibodies

Monoclonal antibodies directed against plant cell wall polysaccharides were characterized previously (52). Antibodies directed against HG (CCRC-M38, CCRC-M131, and JIM5) and xylan (CCRC-M149) were used in this study. Antibody epitope, immunoreactivity, and supplier information is available at WallMabDB (http://www.wallmabdb.net).
Following incubation under standard reaction conditions unless otherwise noted, aliquots containing either 1 µl (1/30 of the reaction) or 0.1 µl (1/300 of the reaction) were diluted to a final volume of 50 µl, and reactions were terminated by boiling. ELISAs were performed as described previously (84). The 50-µl diluted reaction sample was incubated in a 96-well plate (Costar 3598) and evaporated to dryness overnight. Nonspecific spots on the plate were blocked by incubation for 1 h in 0.1 M Tris-buffered saline (TBS) containing 1% nonfat dry milk (Publix) (200 µl). Primary antibodies (50 µl), diluted 10-fold from the hybridoma supernatant, were dispensed into each well and incubated for 1 h. All wells were washed with 300 µl of wash buffer (0.1 M TBS containing 0.1% nonfat dry milk) for a total of three washes. Secondary antibodies were diluted 1:5000 in wash buffer. For CCRC series antibodies, anti-mouse (Sigma, A4416), and for JIM series antibodies, anti-rat (Sigma, A9037) secondary antibodies conjugated with horseradish peroxidase (50 µl) were incubated for 1 h. Plates were washed with wash buffer for a total of five washes. 3,3′,5,5′-Tetramethylbenzidine peroxidase substrate (Vector Laboratories) was incubated in each well (50 µl) for 20 min, and the reaction was stopped by the addition of 0.25 M sulfuric acid. OD values were measured using a plate reader at 450 nm, with background readings at 655 nm subtracted. Boiled enzyme reaction buffer controls contain reaction mixtures incubated under the same conditions, except that the GAUT1:GAUT7 enzyme was inactivated by boiling prior to addition to the reaction mixture.

Endopolygalacturonase digestion

Following incubation of the HG:GalAT reaction, the mixture was adjusted to pH 4.2 by the addition of 1 M sodium acetate buffer, pH 4.2 (3 µl), and 2 M acetic acid (15 µl). The mixture was incubated overnight at 30°C with 20 milliunits of endopolygalacturonase-I (Aspergillus niger, EC 3.2.1.15) (85); 1 unit = 1 µmol of reducing sugar produced per min, as determined by a p-hydroxybenzoic acid hydrazide reducing sugars assay (Sigma). The endopolygalacturonase (EPG) reaction was terminated by the addition of 1 M NaOH (30 µl). The final reaction mixture was assayed using the filter assay, as described above. Control samples labeled Boiled EPG (Fig. S5) were incubated with EPG that was deactivated by boiling for 1 h prior to use.

Modeling of Michaelis-Menten kinetics

Standard Michaelis-Menten kinetics and substrate inhibition kinetics were modeled using formulas for nonlinear regression analysis in GraphPad Prism version 7 for Windows (GraphPad Software, La Jolla, CA). Kinetic parameters (Vmax, Km, and Ki) were calculated for each independent experiment, measured using 8-12 different concentrations of the variable substrate spanning a range of 0.1-10 times Km. Replicate experiments were calculated as separate data sets, using the standard Michaelis-Menten equation (Equation 1) and the substrate inhibition equation (Equation 2).
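The two fitting equations are cited above but not reproduced. Their standard forms, as implemented in GraphPad Prism and presumably corresponding to Equations 1 and 2, are

$$v = \frac{V_{\max}\,[S]}{K_m + [S]} \quad \text{(Equation 1)}, \qquad v = \frac{V_{\max}\,[S]}{K_m + [S]\left(1 + [S]/K_i\right)} \quad \text{(Equation 2)}$$

Equation 2 reduces to Equation 1 when $[S] \ll K_i$ and predicts a velocity maximum at $[S] = \sqrt{K_m K_i}$, consistent with the substrate inhibition observed above ~5 µM HG acceptor.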
Acute Stress Alters Amygdala microRNA miR-135a and miR-124 Expression: Inferences for Corticosteroid Dependent Stress Response

The amygdala is a brain structure considered a key node for the regulation of the neuroendocrine stress response. The stress-induced response in the amygdala is accomplished through neurotransmitter activation and an alteration of gene expression. MicroRNAs (miRNAs) are important regulators of gene expression in the nervous system and are very well suited effectors of the stress response for their ability to reversibly silence specific mRNAs. In order to study how acute stress affects miRNA expression in the amygdala, we analyzed the miRNA profile after two hours of mouse restraint, by microarray analysis and reverse transcription real-time PCR. We found that miR-135a and miR-124 were negatively regulated. Among in silico predicted targets we identified the mineralocorticoid receptor (MR) as a target of both miR-135a and miR-124. Luciferase experiments and endogenous protein expression analysis upon miRNA upregulation and inhibition allowed us to demonstrate that miR-135a and miR-124 are able to negatively affect the expression of the MR. The increased levels of the amygdala MR protein after two hours of restraint, which we analyzed by western blot, negatively correlate with miR-135a and miR-124 expression. These findings point to a role of miR-135a and miR-124 in acute stress as regulators of the MR, an important effector of the early stress response.

Introduction

Stress can be broadly defined as a disruption of homeostasis, to which the organism responds by trying to reestablish the initial equilibrium or to adopt an altered state in the new environment. The adaptive response to homeostatic disturbances implies that the stress response is activated rapidly and terminated efficiently afterwards. If coping with stress fails, a vulnerable phenotype with increased susceptibility to psychopathologies is produced [1]. Stressor-related information from all sensory systems is conveyed to a variety of limbic brain structures, such as the hippocampus, amygdala and prefrontal cortex, that work in parallel but are involved in different aspects of the stress response [2,3]. In particular the amygdala, a group of nuclei located in the medial temporal lobe, is considered a key node for stress response integration, with a widespread network of efferent projections to other brain regions [4,5]. Stress mediators, such as (nor)adrenaline, corticotrophin releasing hormone (CRH) and corticosteroids (CORT; corticosterone in rodents, cortisol in humans), are highly conserved among vertebrates and contribute to the neuronal functional change and plasticity that are instrumental to the stress response [6]. Acute psychological stress causes a rapid surge of neurotransmission, neuronal activation and hormone release. Though temporary, this activation has profound effects in the brain, ultimately leading to altered gene expression and structural modifications in dendritic spine morphology and synaptic connectivity [1]. This stress-induced neuronal plasticity is responsible for changing the subsequent neuronal response, and shows unique features in the amygdala [7].

miRNAs are a growing class of small non-coding RNAs that act as post-transcriptional regulators of gene expression, primarily by translational repression [8,9]. In mammals, they are implicated in the control of many fundamental processes, and most of them are expressed in a development- or tissue-specific manner [8,10,11].
The nervous system is a rich source of miRNAs [10-12], consistent with their primary role in brain development and neuronal cell identity maintenance [13-15]. More recently, miRNA functions in the mature nervous system have been related to neuronal plasticity and to the control of synaptic tasks [16]. The ability of miRNAs to selectively [17] and reversibly [18] silence mRNAs, together with their involvement in neuronal plasticity events, makes miRNAs well suited to serve as fine regulators of the complex and extensive molecular network involved in the stress response. In this report we investigated whether in the amygdala the process of acute stress response involves miRNAs. We show that after two hours of mouse restraint miR-135a and miR-124 are negatively regulated. Demonstrating that miR-135a and miR-124 are able to affect MR expression, we suggest their functional role in the initial stress reaction through the activation of corticosteroid signaling.

Materials and Methods

Ethics statement

All animals were housed, cared for, and experiments conducted in accordance with the guidelines laid down by the European Community Council Directive (86/609/EEC of 24 November 1986), the Italian national law guidelines and the National Institutes of Health Guide for the Care and Use of Laboratory Animals. This study was approved by the Italian Department of Health (authorization D.M. n° 169/2009-B to AM).

Animals and stress procedures

Adult male C57BL/6J/Cnrm mice 13-15 weeks old were used for stress experiments. Mice were kindly provided by the European Mouse Mutant Archive, Consiglio Nazionale delle Ricerche (CNR-EMMA, Monterotondo, Italy). All mice were allowed to acclimate to the colony for at least four weeks before handling; they were kept on a 12-hour light/dark cycle, with food and water ad libitum. Mice were randomly assigned to two different groups: acute stress mice, subjected to a single 2-hour restraint session in a black perforated plastic tube (diameter: 3 cm, length: 10 cm); naive mice, briefly handled in a separate room. All stress sessions were done between 10:00 a.m. and 4:00 p.m. Immediately after the stress or handling session, the animals were sacrificed, and tissues were dissected, flash frozen and stored at −80°C. Amygdala punches were obtained with a 1-mm punch tool.

Corticosterone measurement

Immediately after acute immobilization stress, blood was collected and stored in EDTA-containing tubes. To obtain plasma, the tubes were centrifuged at 10,000 rpm for 10 min and kept at −80°C until use. The concentration of corticosterone in plasma was quantified by ELISA (Cayman Chemical Company) according to the instructions of the manufacturer.

RNA extraction and quantitative RT-PCR

Total RNA was isolated from cells and from dissected brain tissue according to the standard Trizol (Life Technologies) protocol, with one additional extraction with chloroform before precipitation in 3 volumes of ethanol. The tissue was homogenized in Trizol with a Dounce homogenizer prior to extraction. The quantity and the quality were analyzed on a NanoDrop 1000 spectrophotometer (Thermo Fisher Scientific) and by visual inspection of the agarose gel electrophoresis images. RNA was extracted from single animals and RNA pools were produced. Quantitative reverse transcription PCR (qRT-PCR) analysis of miRNAs was done by TaqMan miRNA assay (Applied Biosystems, Life Technologies) according to the manufacturer's protocol. qRT-PCR analysis of Nr3c2 mRNA was done on cDNAs prepared using SuperScript III (Life Technologies) and dT primers plus random primers at 50°C for 2 hours. qPCR was performed using SYBR Green (SensiMix™ SYBR Hi-ROX Kit, Bioline) with appropriate primers (see Supplementary Methods for primer sequences). Relative quantification of gene expression was conducted with the Applied Biosystems 7300 Real Time PCR System and data analysis was performed using the comparative ΔΔCT method. U6B and sno202 RNAs were used as internal controls for miRNA expression, and Actin mRNA as internal control for mRNA expression.
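A minimal worked example of the comparative ΔΔCT calculation just described; the Ct values are hypothetical, not data from the study:

```python
def fold_change(ct_target_stressed, ct_ref_stressed,
                ct_target_naive, ct_ref_naive):
    """Comparative ddCt: fold change = 2**-(dCt_stressed - dCt_naive),
    with dCt = Ct(target) - Ct(reference, e.g. U6B or sno202)."""
    ddct = ((ct_target_stressed - ct_ref_stressed)
            - (ct_target_naive - ct_ref_naive))
    return 2 ** -ddct

# Hypothetical Ct values (not from the paper):
print(fold_change(24.8, 20.0, 24.0, 20.0))  # -> ~0.57
```

With these made-up numbers the fold change is 2^−0.8 ≈ 0.57, i.e. roughly a 40% reduction of the target miRNA in the stressed sample relative to naive.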
microRNA array profiling

2.5 µg of total amygdala RNA from pooled nuclei (n=12) from naive and stressed mice were Hy3- and Hy5-labeled, respectively, using the miRCURY™ LNA microRNA Array Power Labeling Kit (Exiqon). Fluorochrome-labeled RNA samples were then combined, denatured and hybridized to custom-made slides containing LNA-modified microRNA capture probes targeting all human, mouse and rat miRNAs listed in the miRBase Sequence Database Release v.11.0 (http://microrna.sanger.ac.uk/sequences/), which contains 1769 unique mature miRNA sequences. Slides were kindly provided by Dr. V. Benes, from the Genomics Core Facility of the European Molecular Biology Laboratory (Heidelberg, Germany). The hybridization was performed according to the miRCURY™ LNA array manual using hybridization chambers (Agilent), for 16 hours at 56°C. After hybridization, the microarray slides were scanned using a ScanArray Lite Microarray Scanner (Packard Bioscience) and the image analysis was carried out using GenePix® Pro 7 Software (Molecular Devices). Absent and marginal spots were flagged automatically by the software, and then each slide was inspected manually. Data were normalized using different endogenous controls present in the LNA-modified spotted library. The Hy5/Hy3 ratios were log2-transformed. Data from 2 independent experiments were averaged and only probes with a log2 ratio above 1 or below −1 were considered. Only probes with log intensity >8 and <14 were taken into account, to avoid nonlinear effects caused by the noise floor at low intensities or by saturation at high intensities [19].
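A sketch of the probe-level filtering described above. How the per-probe intensity entering the 8-14 window was computed is not specified in the text; the mean log2 channel intensity used here is an assumption:

```python
import numpy as np

def select_regulated(hy5, hy3, ratio_cutoff=1.0,
                     min_log_int=8.0, max_log_int=14.0):
    """Keep probes whose mean log2 channel intensity lies in the usable
    window and whose |log2(Hy5/Hy3)| exceeds the cutoff (>2-fold change
    between stressed and naive)."""
    hy5, hy3 = np.asarray(hy5, float), np.asarray(hy3, float)
    log_ratio = np.log2(hy5 / hy3)
    log_intensity = 0.5 * (np.log2(hy5) + np.log2(hy3))
    keep = ((log_intensity > min_log_int) & (log_intensity < max_log_int)
            & (np.abs(log_ratio) >= ratio_cutoff))
    return keep, log_ratio

# Toy intensities: the second probe is >2-fold down, analogous to miR-135a.
keep, lr = select_regulated(hy5=[3000, 400], hy3=[1200, 1100])
print(keep, lr.round(2))
```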
DNA constructs

The miRNA expression vector p135a was obtained by amplifying a 301-bp fragment of the mmu-miR-135a-1 genomic sequence, containing 111 nt upstream and 100 nt downstream of the miRNA stem-loop sequence, according to the miRBase Database (http://www.mirbase.org/). The PCR fragment was cloned downstream of the U1 promoter of a pSP65 vector [20]. The complete Nr3c2 3' UTR was amplified from mouse genomic DNA and cloned into the XbaI unique site of the pGL3 control vector (Promega). To generate the Nr3c2 mutant constructs Nr3c2 m135a and Nr3c2 m124, mutations of the seed binding sites for miR-135a (Nr3c2 m135a) or miR-124 (Nr3c2 m124) were introduced into the Nr3c2 3' UTR using synthetic oligonucleotides, by generating partially complementary PCR fragments. These fragments were used as templates for PCR to generate complete mutated Nr3c2 3' UTR fragments, further cloned as described for the wild type 3' UTR. All constructs have been checked by sequencing. See Supplementary Methods for primer sequences.

Primary cultures of cerebellar granule neurons (CGN) were prepared from cerebella of C57BL/10 mice at postnatal day 8 (P8), according to established protocols [21]. Briefly, mice were deeply anesthetized with Isoflurane-Vet (isoflurane, Merial) and decapitated; brains were quickly removed and collected in ice-cold HBSS solution, pH 7.3 (3 mM HEPES, 1% penicillin-streptomycin and 1× Hank's Buffered Salt Solution). Cerebella were rapidly dissected in the same solution and cut into small pieces after having carefully removed the meninges. Tissue pieces were incubated for 15 min at 37°C in a digestion buffer (0.1% trypsin, 0.25 mg/ml DNase in PBS) and successively triturated through a flame-polished Pasteur pipette until no chunks were visible. Cells were centrifuged for 10 min at 1000 rpm (4°C), re-suspended in DMEM culture medium (2 mM glutamine, 2% B27, 1% penicillin-streptomycin, 5% fetal bovine serum, 5 mM D-glucose in DMEM, Dulbecco's modified Eagle's medium), counted in a hemocytometer and plated at a density of 10⁶ cells per 35-mm Petri dish, previously coated with 0.1 mg/ml polylysine. Glial cell proliferation was inhibited by adding 10 mM cytosine-beta-D-arabinofuranoside (Ara-C) to the culture medium 18-22 hours after plating. After 6 days, Ara-C was removed and cells were maintained in vitro up to 10 days (10 DIV).

Luciferase assays

HeLa cells were transfected in 24-well plates with 0.2 µg of pGL3 constructs, 0.8 µg of miRNA expression vectors, and 0.02 µg of pRL vector. The Renilla-expressing pRL vector was used as an internal control. Cells were harvested 24 hours after transfection and luciferase activities were measured using the Dual-Luciferase Reporter Assay System (Promega) as described by the manufacturer's protocol.

Western blot analysis

For endogenous protein expression analysis, N2a cells and CGN (6 DIV) were transfected in 6-well plates with 4 µg of miRNA expression vectors or 100 nM LNA (scramble, LNA anti-miR-135a or LNA anti-miR-124 (Exiqon), respectively). Cell lysates were prepared 72 hours after plasmid and 96 hours after LNA transfections. Protein extracts were obtained using RIPA lysis buffer (150 mM NaCl, 50 mM Tris pH 8.0, 0.5% sodium deoxycholate, 0.1% SDS, 1% Nonidet P-40) containing Complete Protease Inhibitor Cocktail (Roche). Total protein extracts from mouse tissues were obtained from pools of amygdala nuclei (n=4) by homogenizing in RIPA buffer. Proteins were separated by SDS-PAGE and blotted onto a nitrocellulose membrane (Whatman). Non-specific binding was blocked with Tris-buffered saline plus 5% milk powder and 0.2% Tween 20. The following primary antibodies were used: mouse monoclonal anti-MR (1:200; antibody raised against the rat MR peptide epitope AA64-82) [22]; rabbit anti-actin (1:2000; A2066; Sigma Aldrich); mouse anti-tubulin (1:5000; T3526; Sigma Aldrich). The secondary horseradish peroxidase (HRP)-conjugated goat anti-rabbit antibody (1:20000; A0545; Sigma Aldrich) or anti-mouse antibody (1:20000; A5278; Sigma Aldrich) was visualized by enhanced chemiluminescence with a western blotting detection kit (Bio-Rad) according to the manufacturer's instructions. Band intensities were quantified by densitometric analysis. For quantitative analysis of western blots, the signal for each protein was normalized to the housekeeping protein detected on the same blot, and the ratio to average control values was determined.

Statistical analysis

Statistical significance was evaluated using Student's t-test or one-way ANOVA, performed with the StatView software. Probability values less than 5% or 1% were considered significant.
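The western blot quantification described above can be sketched as follows; the densitometry values in the example are hypothetical:

```python
import numpy as np

def relative_mr_level(mr_signal, housekeeping_signal,
                      control_mr, control_housekeeping):
    """Normalize a band to the housekeeping band from the same blot, then
    express it relative to the average normalized control value."""
    sample_ratio = mr_signal / housekeeping_signal
    control_ratio = np.mean(np.asarray(control_mr, float) /
                            np.asarray(control_housekeeping, float))
    return sample_ratio / control_ratio

# Hypothetical densitometry values (arbitrary units):
print(relative_mr_level(3.0, 1.0, [1.0, 1.1], [1.0, 1.05]))  # -> ~2.9-fold
```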
Results

Acute stress affects amygdala miRNA expression

In rodents, a single immobilization session represents a brief but severe stress that triggers in the amygdala a surge of corticosterone and glutamate release that eventually leads to synaptic plasticity [5,26]. To examine the neuroendocrine effects of two hours of restraint we measured circulating corticosterone levels immediately after stress, finding a significant increase in plasma corticosterone in restrained compared to naive mice (Figure 1). We examined whether two hours of acute stress altered miRNA expression in the mouse amygdala, using miRNA microarray profiling as an initial screening approach. Pooled RNA samples from stressed and naive mice (n=12) were hybridized to an LNA platform that allowed us to analyze the relative expression of the 288 miRNAs that could be detected in our biological samples. The amygdala miRNA expression profile is altered after acute stress (Figure 2A), indicating an increase of up to three-fold in the expression of several miRNAs (log2 ratio = 1.6) and a decrease in the expression of a minority of miRNAs. Among the negatively regulated miRNAs, only one miRNA shows a log2 ratio below −1, namely miR-135a. Its expression level was found to be more than two times lower in stressed compared to naive mice (log2 ratio = −1.2). In order to further investigate stress-induced changes of miRNA expression in the amygdala, we performed qRT-PCR analysis on a panel of highly expressed neuronal miRNAs, whose signals in our array study were above the signal intensity threshold of 14 (white dots in Figure 2A). Among the miRNAs studied, miR-124 showed significantly reduced levels, of about 40%, in RNA samples from stressed compared to naive mice (n=12) (Figure 2B). Moreover, qRT-PCR analysis indicated an approximately 30% reduction of the expression levels of miR-135a, validating our previous array data (Figure 2B). Similar results were obtained normalizing data with sno202 as internal control (Figure S1). Thus, we conclude that miR-135a and miR-124 expression levels are sensitive to stress, as indicated by qRT-PCR for miR-124, and by both microarray and qRT-PCR for miR-135a.

The expression of miR-124 in the nervous system has been broadly investigated, both in development and in the adult mouse CNS. It is highly enriched and widely expressed in the mouse brain, with some region specificity [10-15,27]. miR-135a is brain-specific, its expression is induced upon neuronal differentiation, and it is poorly expressed in other adult tissues [10,11,27-29]. However, not much is known about miR-135a expression in the different brain regions. By northern blot analysis we studied miR-135a expression in different structures of the adult mouse brain and in post-natal neurons (Figure S2): various expression levels were found in the brain areas analyzed, including the amygdala, with a higher relative expression in the cerebellum, as already described [30]. A significant miR-135a expression was found in primary neuronal cultures of cerebellar granule neurons.

miR-135a and miR-124 regulate Nr3c2 mRNA expression

Next, to better understand the possible role of miR-135a and miR-124 downregulation in the context of the stress response, we searched for predicted mRNA targets.
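The narrowing step described next amounts to intersecting the prediction sets of independent algorithms; a trivial sketch with hypothetical gene identifiers (only Nr3c2 is taken from the text):

```python
def consensus_targets(*prediction_sets):
    """Intersect target lists from independent prediction algorithms."""
    return set.intersection(*(set(s) for s in prediction_sets))

# Hypothetical gene lists standing in for the three algorithms' outputs:
diana = {"Nr3c2", "GeneA", "GeneB"}
targetscan = {"Nr3c2", "GeneB", "GeneC"}
pictar = {"Nr3c2", "GeneB"}
print(consensus_targets(diana, targetscan, pictar))  # -> {'Nr3c2', 'GeneB'}
```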
Using three independent target prediction algorithms, DIANA-microT v3.0, TargetScan 5.2, and PicTar, we narrowed the number of bioinformatic target candidates for miR-135a to 38 genes (Figure S3). Interestingly, from our bioinformatic analysis the top-score predicted target is the nuclear receptor subfamily 3, group C, member 2 (Nr3c2), also known as MR, the highest-affinity receptor for corticosteroid hormones in the brain [31]. From now on, Nr3c2 refers to the gene and the mRNA, MR to the protein. Two binding sites for miR-135a are present, upstream and downstream, in the mouse Nr3c2 3' UTR (Figure 3A), highly conserved in 15 and 14 species, respectively (Figure S4). Two binding sites for miR-124 are also predicted (Figure 3A), the first conserved in 14 different species (Figure S4). Noteworthy, in long mammalian 3' UTRs, such as the 2.5 kb Nr3c2 3' UTR, functional miRNA binding sites tend to cluster near the start or the end of the 3' UTR [32].

To validate the functionality of these putative interactions, the complete mouse Nr3c2 3' UTR was cloned downstream of the firefly luciferase coding sequence (Nr3c2 3' UTR) and this construct was transiently transfected into HeLa cells along with miR-135a and miR-124 expression vectors (p135a and p124). These plasmids allow us to obtain high levels of U1 promoter-driven miR-135a and miR-124 expression, as shown by northern blot analysis (Figure S5). As presented in Figure 3B, we were able to demonstrate the predicted interaction between miR-135a and the 3' UTR of Nr3c2 mRNA by means of an approximately 50% decrease in the expression of the Nr3c2 3' UTR reporter upon miR-135a overexpression, compared to co-transfection of the reporter with the empty vector. The inhibition of luciferase expression is tightly dependent on the presence of a sequence in the Nr3c2 3' UTR perfectly complementary to the miR-135a seed region. This was shown by transfection experiments with mutant reporter constructs containing the Nr3c2 3' UTR mutated at the level of the miR-135a seed binding sites, whose expression was unaffected by miRNA overexpression. However, the expression of the Nr3c2 reporter was not altered by miR-124 overexpression (Figure 3B). A mutant reporter lacking the seed matches for miR-124 is unaffected by miR-124 overexpression, as expected, and is more than 60% inhibited by miR-135a overexpression. These reporter experiments indicate a direct functional interaction between miR-135a and the Nr3c2 3' UTR and no direct functional interaction between miR-124 and the Nr3c2 3' UTR.

To better examine the role of the predicted miRNAs in the control of Nr3c2 gene expression, we investigated whether endogenous MR protein levels were affected by the modulation of candidate miRNAs. The mouse neuroblastoma cell line (N2a) and cerebellar granule neurons (CGN) express the MR at lower amounts than brain tissues, yet appreciable at both mRNA and protein levels (Figure S6). No expression of miR-135a and miR-124 was found in N2a cells (Figure S5, see empty vector lanes). Transfection of N2a cells with miR-135a expression constructs, resulting in an increased concentration of mature miRNAs (Figure S5), directed a significant reduction of MR protein expression (p135a, Figure 4A and B). As shown by quantitative analysis of western blots (Figure 4B), overexpression of miR-135a caused a 50% reduction of MR protein levels, compared to transfection with the empty vector.
Interestingly, the overexpression of miR-124 caused a 40% decrease in MR protein levels (Figure 4B). Moreover, we found a strong reduction of MR expression after the combined overexpression of plasmids encoding miR-135a and miR-124 (MR expression at 60% of the vector control transfection). These findings demonstrate that miR-135a and miR-124 are able to negatively affect the expression of the endogenous MR. The ability of miR-135a to negatively regulate MR expression was confirmed by overexpressing miR-135a from a different plasmid in which the miRNA transcription is driven by an H1 promoter (Figure S7A and B). Both the firefly reporter construct and the endogenous protein are significantly downregulated by miR-135a overexpression. In contrast to miRNA overexpression, the inhibition of endogenous miR-135a and miR-124 in CGN by transfection of LNA-modified oligonucleotides led to a statistically significant increase in the levels of endogenous MR protein (Figure 4C and D). Taken together, these data suggest that miR-135a inhibits Nr3c2 mRNA translation by binding to either one or both sites present in the Nr3c2 3' UTR; indirect effects of miR-124 on endogenous MR protein expression have been observed both in N2a cells and in CGN.

Acute stress induces an increase in amygdala MR protein levels

High levels of the Nr3c2 mRNA have been found in limbic regions such as the amygdala and the hippocampus [33], and high levels of MR protein have been described in the same brain regions [34]. To investigate whether two hours of restraint affect MR protein levels in the amygdala, we studied MR expression immediately after stress by quantitative western blot analysis. As shown in Figure 5A, acute stress induced a three-fold increase in amygdala MR protein levels, which inversely parallels the stress-induced alterations in miR-124 and miR-135a expression. Interestingly, no significant change was observed in Nr3c2 mRNA levels between stressed and naive mice (Figure 5B).

Discussion

Multiple lines of evidence indicate that miRNAs play a key role in mediating the cellular stress response [35]. To gain insight into the role of miRNAs in the stress response, we analyzed stress-induced changes in miRNA expression in the amygdala after mouse restraint. Here we show that acute stress downregulates miR-135a and miR-124 expression in the amygdala. Moreover, we report that this effect parallels the increase in MR expression levels in the same brain region. Finally, we established miR-124 and miR-135a as regulators of MR expression in mouse N2a cells and CGN. The amygdala has a crucial role in the regulation of the stress response [4,5], which makes this structure an interesting system to study how restraint stress modulates miRNA activities and to investigate the role of miRNAs in the stress response. After two hours of stress we observed modifications of miRNA expression in the amygdala. This is consistent with the temporal profile of the stress response in the brain, where after two hours of exposure to a stressor the rapid neurotransmitter activation and the corticosteroid-dependent alteration of gene expression are already achieved [6]. In the present study we focused on the stress-induced downregulation of the brain-specific miR-135a and miR-124, and on their possible role in the context of the stress response. To test the hypothesis that these miRNAs directly participate in adaptive mechanisms by regulating the expression of components of the stress response, we performed a computational analysis for predicted mRNA targets.
Among the miR-135a predicted target genes we found, as a top-scoring target, Nr3c2, coding for the brain corticosteroid receptor MR. The MR, together with the glucocorticoid receptor (GR), is considered a master switch in the control of physiological and behavioral adaptation to stress [6,31,36]. On binding the hormones, corticosteroid receptors translocate to the nucleus, where they act as modulators of gene expression by transactivation or transrepression [37,38]. However, recent evidence indicates that the initial stress reaction mediated by corticosteroids might be accomplished via limbic membrane MRs, activating non-genomic signaling pathways [39,40]. We validated the predicted interaction between miR-135a and the mouse Nr3c2 3' UTR by a luciferase assay, demonstrating a miR-135a-induced reduction of Nr3c2 reporter expression. Furthermore, as shown by mutant constructs, miR-135a activity on the Nr3c2 3' UTR is direct and is indeed strictly dependent on the integrity of the two cognate target sequences. Conserved sites for other miRNAs are present in the annotated Nr3c2 3' UTR. Interestingly, a previous study aimed at the identification of miRNAs involved in kidney water-salt balance and blood pressure regulation indicated the human NR3C2 gene as a potential target of miR-135a and miR-124 by means of luciferase assays [41]. However, by luciferase experiments we were not able to confirm a direct interaction between miR-124 and the mouse Nr3c2 3' UTR. Differences between the mouse and human Nr3c2 3' UTR sequences might account for the discrepancy between our and Sõber's results. Nevertheless, the reporter assay may not be sufficient to predict the ability of a miRNA to modulate endogenous mRNA translation. Hence, we tested the validity of the miR-135a and miR-124 binding sites, ascertained by reporter assays from our and other labs, by further experiments on mouse cells expressing the MR protein. We demonstrated that the overexpression of miR-135a and miR-124 in N2a cells determines a reduction of MR protein levels. It has been suggested that the number of target sites within a specific 3' UTR can determine the degree of translational repression [42]. Indeed, we found that the transfection in neuroblastoma cells of miR-135a, which has two binding sites on the Nr3c2 3' UTR, induces a strong suppression of the MR. In addition, by knocking down endogenous miR-135a and miR-124 in CGN, we were able to confirm the ability of both these miRNAs to affect the expression of the MR in a different cell system. Other miRNAs are potential regulators of the Nr3c2 mRNA, suggestive of a complex regulatory network, as recently remarked [43]. Thus, it seems feasible that MR gene expression is highly regulated in the amygdala at the post-transcriptional level and that, as previously discussed for many known miRNA target genes [44], multiple cis-regulatory sites in the Nr3c2 3' UTR can be read by sets of coexpressed miRNAs. This complex scenario of MR expression control might account for the indirect negative regulation of MR expression by miR-124. A broader effect of stress on miRNA expression in the brain is suggested by recent observations indicating miRNA downregulation in the hippocampus and in the prefrontal cortex 24 hours after restraint [45,46]. These findings, together with our data, point out that stress-induced miRNA downregulation might involve different limbic structures and different temporal dynamics.
Future studies will be required to better understand the spatial and temporal dynamics of stress-induced modulation of miRNA expression. Interestingly, in Aplysia sensory neurons miR-124 expression was found to be regulated by a modulatory neurotransmission important for plasticity [47], suggesting a possible regulatory role of neurotransmission in stress-induced changes in miRNA expression. Stressful stimuli activate the hypothalamic-pituitary-adrenal axis, leading to the secretion of adrenal stress hormones [1-3,48]. Subsequently, CORT feeds back on the brain, binding to its cognate receptors, MR and GR. It has been demonstrated that an increase in MR-mediated signaling in the amygdala is neuroprotective, reducing both anxiety and CORT secretion [49]. Because of its high affinity for CORT, the MR is already heavily occupied at basal levels of CORT [1]. Indeed, a rise in MR protein levels was found after 24 hours of forced swimming in the hippocampus, neocortex, prefrontal cortex and amygdala [50]. However, to our knowledge little data is present in the literature regarding MR protein expression at early times after acute stress [51]. We show evidence of an induced expression of the MR protein in the amygdala immediately after 2 hours of restraint. Therefore, the observed MR protein increase, related to miR-135a and miR-124 downregulation, might be instrumental for the functional activation of the MR in response to the stress-induced increase of CORT levels. Other factors, positively affecting MR translational efficiency and protein stability, might cooperate with miRNAs to account for the three-fold MR induction, detected at the protein level but not at the level of transcripts. The finding that the observed stress-induced increase in MR expression occurs exclusively at the protein level suggests a major contribution of mRNA translation control to the rapid changes of gene expression in the early phases of the stress response. Importantly, miR-124 was recently identified as a regulator of GR expression [52]. Thus, the same miRNA, in concert with other miRNAs like miR-135a, might be responsible for the fine-tuning of corticosteroid receptors in order to maintain the correct MR/GR balance, necessary for effective coping with stress [6]. In a recent paper [53], mouse amygdala miR-34c was found to be induced 90 min after acute stress, and the stress-related corticotropin releasing factor receptor type 1 was identified as one of the miR-34c targets. Therefore, increasing evidence indicates miRNAs as important mediators of the early stress response in the amygdala.

Figure 5. Acute stress increases MR protein levels in amygdala. (A) Steady state MR protein levels were measured by western blot analysis immediately after 2 hours of restraint. Lysates were obtained from pooled amygdala nuclei (n=4); 5 pools were generated from naive (n=20) and restrained mice (n=20). MR expression was normalized to actin signals in the same blot. Quantitative values are shown as mean ± SE. *P < 0.05 (pairwise Student's t-test). (B) qRT-PCR analysis of Nr3c2 transcript levels in naive and stressed mice (n=12 for both groups). Data are presented as mean ± SE. Exs 6-7 and exs 8-9 refer to the amplicons studied, corresponding to exons 6 and 7 (exs 6-7), and to exons 8 and 9 (exs 8-9) of the Nr3c2 coding sequence. doi: 10.1371/journal.pone.0073385.g005
The involvement of miRNAs in neuronal pathologies such as schizophrenia and depression has recently been shown [54,55]. It will be interesting to study whether miR-124 and miR-135a dysregulation is implicated in stress-related or major depressive disorders, where a decreased expression of the amygdala MR was found [56]. In summary, we report that miR-135a and miR-124 are important components of the stress signaling response in the brain. They respond rapidly to stress and, by exerting control over MR expression, might contribute to the post-transcriptional regulation of one of the key effectors of the corticosteroid cascade.

Figure S1. Acute stress induces miR-135a and miR-124 downregulation in the amygdala. Levels of mature miRNAs are quantified in the amygdala RNA pool by qRT-PCR using sno202 as internal control. The statistical test used for comparison was one-way ANOVA (n=9). Values are means ± SE. *P < 0.001 versus naive control mice.
2017-07-27T03:16:52.040Z
2013-09-04T00:00:00.000
{ "year": 2013, "sha1": "4a82957ec3b99a2d809858bd07571be1047f253c", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0073385&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4a82957ec3b99a2d809858bd07571be1047f253c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
118882846
pes2o/s2orc
v3-fos-license
Moment Independent Expansion for Fourth-Order Corrections in Lattice Boltzmann Methods

An expansion to fourth order for lattice Boltzmann methods is presented. This expansion provides an easy model for finding fourth-order corrections to lattice Boltzmann methods for various physical systems. The fourth-order terms can give rise to improved results over traditional second-order lattice Boltzmann implementations. Although this manuscript deals solely with fourth-order expansions, the expansion is easily extended to arbitrary order. We present examples of how this expansion is utilized and provide basic analysis to show how the fourth-order methods differ from lower order models for both diffusive systems and phase separating systems.

Introduction

Since its initial development in the late 1980s, lattice Boltzmann methods have been growing as a powerful tool in computational physics. Originally introduced for simulating hydrodynamic systems [1][2][3], lattice Boltzmann methods have been an active area of research that is consistently being improved and finding new applications in other fields of physics, namely phase separation [4,5], electrostatics [6], quantum mechanics [7], diffusion [8][9][10], and moisture transport through barrier coatings [11,12]. As a consequence of the many developments in lattice Boltzmann methods, deeper analysis of the fundamental methods has been required for a better understanding of how the method treats any given system [13]. In order to show that a specific lattice Boltzmann model is simulating the desired equations of motion, the hydrodynamic limit of the lattice Boltzmann equation is studied. In principle, this arises from an expansion of arbitrary order. In most implementations, an expansion up to second order is sufficient. In certain computational situations, additional higher order terms can be utilized to better match the physical behavior of a system [14]. Cases such as this require higher-order expansions and analysis to develop these corrections to the method and to better match the physical system [14][15][16]. This manuscript presents a moment independent expansion for examining the lattice Boltzmann equation to fourth order, with the intention of easily finding the equations of motion which govern a specific system. In section 2, we introduce the lattice Boltzmann method. Section 3 shows the moment independent expansion of the lattice Boltzmann equation up to fourth order and examines the hydrodynamic limit from which the higher order terms for the method arise. Section 4 then shows applications of the proposed expansion to various simple one-dimensional systems. We re-derive results previously derived by Strand et al. [11] using this new moment independent expansion and reanalyze the way the hydrodynamic limit treats temporal derivatives in the diffusive case. We then present two separate derivations for fourth-order phase separating systems. The first utilizes a chemical potential model and the second employs the traditional diffusive lattice Boltzmann moments with the addition of external forcing. We present comparisons between the new fourth-order methods and the second-order method. Although we only present results for fourth-order methods, this expansion can be easily generalized to arbitrary order, which we present in an appendix.

The lattice Boltzmann method

The lattice Boltzmann equation is a discrete form of the Boltzmann equation which is discretized in both space and time.
The equation takes the form

f_i(x + v_i Δt, t + Δt) = f_i(x, t) + Ω_i(x, t) + F_i(x, t), (1)

where Ω_i is a specified collision operator, F_i is a forcing term which allows for the inclusion of external conservative forces [17,18], v_i is an element of a set of prescribed lattice velocities {v_i}, and i indicates a specific element of the velocity set. In principle, lattice Boltzmann methods are sets of the discrete-velocity particle distribution functions f(x, v_i, t), commonly abbreviated f_i(x, t). The distribution functions can be used to find the macroscopic quantities of a system through weighted sums known as velocity moments of f_i(x, t). For example, hydrodynamic systems have the macroscopic quantities density, ρ(x, t), and momentum, ρ(x, t)u(x, t), where u is the macroscopic flow velocity, related to the distribution functions by

ρ(x, t) = Σ_i f_i(x, t), (2)
ρ(x, t)u(x, t) = Σ_i v_i f_i(x, t). (3)

It is noted that the density in Eqn. (2) is a scalar and the momentum in Eqn. (3) is a vector. In the current manuscript, we are concerned with the most general representations. For this reason, we rename these moments to moment independent scalars and vectors in Eqns. (4) and (5). Later, when we introduce higher order moments, we will extend these methods to higher ranked tensors. The collision operator Ω_i will not modify the conserved quantities of a system. This is represented by discrete-velocity moments of the collision operator that vanish,

Σ_i Ω_i = 0, Σ_i v_i Ω_i = 0,

for a system which conserves both mass and momentum. The collision operator can take many different forms. It is common that a multi-relaxation time (MRT) collision operator is employed. The MRT collision operator uses particle collisions to relax the distribution functions f_i to a local equilibrium distribution f_i^0 with a characteristic relaxation time τ_i. The index i refers to the relaxation time of a specific mode. The MRT operator takes the form

Ω_i = -Σ_j Λ_ij (f_j - f_j^0),

where Λ_ij is a collision matrix with eigenvalues given by the relaxation times [19,20]. There is a special case of the collision operator in which all of the relaxation times are equal [21]. This is equivalent to writing a diagonal collision matrix such that

Λ_ij = δ_ij / τ.

Using this special diagonal collision matrix with equivalent relaxation times gives a simplified form of the MRT collision operator, which is written

Ω_i = -(f_i - f_i^0) / τ.

This form of the collision operator is called the Bhatnagar, Gross, and Krook (BGK) collision operator. The equilibrium distribution is inherently a function of the macroscopic properties of the system. A discretized second-order expansion of the Maxwell-Boltzmann distribution is commonly employed, but for the sake of the required generality, we will not require any specific definitions for the equilibrium distribution.

General fourth-order expansion of the lattice Boltzmann method

The equations of motion for any given system can be derived to arbitrary order by expanding the lattice Boltzmann equation; here we have used the BGK collision operator with an additional forcing term (Eqn. (12)). The inclusion of Greek indices indicates the Einstein summation convention. However, we desire a partial differential equation for f_i^0 describing the equilibrium behavior. In order to do this, we can rewrite Eqn. (12) to isolate f_i (Eqn. (13)). With this form for f_i, we can iteratively substitute Eqn. (13) into Eqn. (12). This process then allows us to find a PDE for the equilibrium behavior. After repeating this iterative process and rearranging terms [22], we arrive at a fourth-order partial differential equation for the equilibrium distribution and forcing terms (Eqn. (14)). We will now employ moments of the equilibrium distributions to derive general equations of motion in the hydrodynamic limit.
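To make the velocity moments and the BGK collision concrete, the following minimal Python sketch evaluates Eqns. (2)-(3) and a single BGK collision on a D1Q3 velocity set. The second-order Maxwell-Boltzmann equilibrium, the weights and the lattice temperature θ = 1/3 are standard D1Q3 choices assumed here purely for illustration; the text above deliberately leaves f_i^0 unspecified.

```python
import numpy as np

v = np.array([0.0, 1.0, -1.0])   # D1Q3 velocity set: rest, right, left
w = np.array([2/3, 1/6, 1/6])    # standard D1Q3 weights (assumed)
theta = 1/3                      # D1Q3 lattice temperature

def f_eq(rho, u):
    """Second-order expansion of the Maxwell-Boltzmann distribution."""
    return w * rho * (1 + v*u/theta + (v*u)**2/(2*theta**2) - u**2/(2*theta))

rho, u, tau = 1.0, 0.1, 0.8
f = f_eq(rho, u)

# velocity moments recover the macroscopic fields, Eqns. (2)-(3)
print(np.sum(f))         # zeroth moment -> rho
print(np.sum(v * f))     # first moment  -> rho * u

# one BGK collision step: Omega_i = -(f_i - f_i^0) / tau
f = f - (f - f_eq(np.sum(f), np.sum(v * f) / np.sum(f))) / tau
```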
In Eqns. (4) and (5), we presented the moments of the discrete-velocity particle distribution functions which reproduce the macroscopic properties of the system in terms of general scalars and vectors. Since we have derived a partial differential equation of motion for the equilibrium distribution, we can now extend these macroscopic moments from the f_i distribution functions to the f_i^0 distribution functions. The number of moments required must be equal to the order of our PDE for f_i^0 plus one. This is due to the fact that, for a PDE of order n, there will be one term which carries n powers of v_i. For the case of Eqn. (14) we need five moments, since this equation is written up to fourth order. Since we can derive this PDE to arbitrary order, this requirement will hold to any order. In the fourth-order expansion we derived, we define the moments in Eqns. (15)-(19). The zeroth order and first order moments produce a scalar and a vector, respectively, as we had seen previously in the moments of the distribution functions. The additional second through fourth moments each give a tensor of rank equal to the number of velocities in each sum. These general tensors are then inserted into Eqn. (14) when we sum over all i in the hydrodynamic limit. In Eqn. (14), we also have a dependence on the forcing terms F_i. These forcing terms require their own distinct moments, which have the same tensor ranking as the moments of the equilibrium distribution; we define them in Eqns. (20)-(24). With the moments from both the equilibrium distribution and the forcing terms, we can then take the hydrodynamic limit of Eqn. (14) by summing over all i. Summing over all i in {v_i} in Eqn. (14) then gives a moment independent fourth-order equation of motion in the defined tensor notation. After much rearranging, we are left with Eqn. (25), where λ_m(τ) is a set of polynomials in τ. This equation is a moment independent fourth-order equation of motion with the inclusion of a general forcing term. To tailor it to a specific system, all that is needed is to define each of the tensors in a form which satisfies the system.

Employing the moment independent expansion to diffusive and phase separating systems

In order to show the usage and validity of the expansion provided in Eqn. (25), we utilize moment definitions designed to model the diffusion equation and the Cahn-Hilliard equation. For simplicity, we will present a D1Q3 lattice Boltzmann model for each system. The D1Q3 representation simulates motion of particles in one spatial dimension with a set of three velocities,

{v_i} = {0, +1, -1}, (30)

meaning that the particles are restricted to a rest state and motion into the neighboring lattice site to the left or right. For brevity, both methods being presented lack the external forcing terms. To add forcing terms, one must simply define the tensors in Eqns. (20)-(24).

Diffusion equation

To model a diffusion equation using lattice Boltzmann methods in the absence of external conservative forces, we define the tensors from Eqns. (15)-(19) following Strand et al. [11], and the forcing moments of Eqns. (20)-(24) are all set to zero. Using these definitions, we can re-derive the results from that previous work. These moments can then be inserted into Eqn. (25). There are many terms which inherently vanish.
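As an illustration of the diffusive moment choices just described, here is a minimal D1Q3 diffusion lattice Boltzmann sketch in Python. The equilibrium f_i^0 = w_i ρ, with weights chosen so that the second moment equals θρ, is an assumption consistent with the description above; only the standard second-order diffusivity D = θ(τ - 1/2) is realized here, i.e., the fourth-order correction of Eqn. (37) is not applied, and all parameters are illustrative.

```python
import numpy as np

# Sketch of a D1Q3 diffusion lattice Boltzmann solver: zeroth moment rho,
# vanishing first moment, second moment theta*rho. Parameters illustrative.

N, tau, theta = 200, 1.0, 1/3
w = np.array([1 - theta, theta/2, theta/2])   # gives sum(v^2 f0) = theta*rho

rho = np.zeros(N); rho[:N//2] = 1.0           # step profile (diffusion front)
f = w[:, None] * rho[None, :]                 # initialize at equilibrium

for step in range(500):
    rho = f.sum(axis=0)                       # zeroth moment: density
    feq = w[:, None] * rho[None, :]
    f += -(f - feq) / tau                     # BGK collision
    f[1] = np.roll(f[1], +1)                  # stream v = +1
    f[2] = np.roll(f[2], -1)                  # stream v = -1

print("D =", theta * (tau - 0.5))             # effective second-order diffusivity
```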
With some rearranging, we arrive at a fourth-order diffusion equation (Eqn. (36)). Since we are utilizing a single spatial dimension, we simplify the partial differential notation in terms of ∇. This gives a valid form for fourth-order corrections in the diffusive lattice Boltzmann method, with a correction term given in Eqn. (37). To reach these results, the previous work made the assumption that the diffusion equation itself could be used to relate first and second order temporal derivatives to second and fourth order spatial derivatives, respectively (Eqns. (38)-(39)). With the moments of Eqns. (31)-(35), we then have Eqn. (40).

Phase separating systems

Another example which we have employed is the analysis of phase separation.

Chemical potential method

Lattice Boltzmann methods can also be used to model systems governed by the Cahn-Hilliard equation to study phase separation [23]. For such systems, we define the tensors of Eqns. (15)-(19) in terms of the chemical potential; once again, we set the forcing moments equal to zero. Inserting these moments into Eqn. (25) allows us to obtain a fourth-order Cahn-Hilliard equation. It is important to note that we have used substitutions similar to Eqns. (38)-(39) to relate the higher order temporal derivatives to spatial derivatives. The correction term which arises for these moments is a function of τ only, whereas the correction term in Eqn. (37) is a function of both τ and θ. This follows from the simple fact that θ is absent from the moments defined for Cahn-Hilliard systems, while the moments for the diffusive system have the lattice temperature θ inherently included. To justify the chemical potential method for a Cahn-Hilliard system, we introduce a simple system with two domains that each contain distinct temperatures, θ_1 and θ_2. With these temperature domains, we introduce a simple free energy (Eqn. (49)), where θ_{1,2} refers to the temperature of a specific domain. The chemical potential follows from the thermodynamic relation μ = ∂f/∂ρ (Eqn. (50)). To relate θ_1 and θ_2, we make the ansatz of Eqn. (51), where ρ_1 and ρ_2 are the initial densities of the corresponding domains. With this, we can solve for θ_2 in terms of θ_1 and the initial densities (Eqn. (52)). With this system, we can employ the lattice Boltzmann model presented earlier in the section.

Forcing method

Here we present a system identical to that of the previous section, with the free energy and chemical potential represented by Eqns. (49)-(50), respectively. However, in this case we employ a forcing method as opposed to the chemical potential method previously presented. We choose the same equilibrium moments defined in Eqns. (31)-(35), but also define forcing moments from Eqns. (20)-(24). Substituting these definitions into Eqn. (25) and simplifying gives a fourth-order forced diffusion equation, where we have dropped the Greek indices on Ψ_αβ (defined via the chosen forcing moments) since we are using a one-dimensional model. It is to be noted that we have once again used the substitution for the temporal derivatives presented in Eqns. (38)-(39).

Figure: chemical potential across the lattice for the forcing method. For the full scale of the lattice, we see overall agreement between the second and fourth order methods. However, we see an asymptotic line at the location of the lattice interface. Inset (a) zooms in on the left domain, where the second order method takes a value of μ_1 ≈ 0.1976 and the fourth order gives μ_1 ≈ 0.19755. Inset (b) zooms in on the right domain, where the second order method gives μ_2 ≈ 0.1968 and the fourth order takes a value μ_2 ≈ 0.1971. In equilibrium, we see a very small difference between the chemical potentials, where we would expect them to be a constant value at equilibrium. However, this difference is small enough that it can be neglected. The discrepancy at the interface could possibly be resolved by adding additional orders to the method.
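For orientation, the following Python sketch shows a chemical-potential-style D1Q3 model of the kind described above, but with a generic double-well chemical potential instead of the paper's two-temperature free energy (Eqn. (49)); all parameters are illustrative assumptions. The moments are Σ_i f_i^0 = ρ, Σ_i v_i f_i^0 = 0 and Σ_i v_i^2 f_i^0 = μ, which at second order yields ∂_t ρ = (τ - 1/2) ∇²μ, a Cahn-Hilliard-type equation.

```python
import numpy as np

# Minimal sketch (not the paper's model): chemical-potential D1Q3 lattice
# Boltzmann dynamics with a generic double-well mu, giving Cahn-Hilliard-type
# phase separation at second order. All parameters are illustrative assumptions.

N, tau = 256, 1.0
a, kappa = 0.05, 0.5                     # double-well strength, interface coefficient

rng = np.random.default_rng(0)
rho = 1.0 + 0.01 * rng.standard_normal(N)   # near-critical density plus noise

def chemical_potential(rho):
    # mu = a*(psi**3 - psi) - kappa*lap(psi), with psi = rho - 1, periodic BCs
    psi = rho - 1.0
    lap = np.roll(psi, 1) - 2.0 * psi + np.roll(psi, -1)
    return a * (psi**3 - psi) - kappa * lap

def equilibrium(rho):
    # moments: sum f0 = rho, sum v f0 = 0, sum v^2 f0 = mu  (v in {0, +1, -1})
    mu = chemical_potential(rho)
    return np.array([rho - mu, mu / 2.0, mu / 2.0])

f = equilibrium(rho)                     # start at equilibrium
for step in range(20000):
    rho = f.sum(axis=0)                  # zeroth moment: conserved density
    f += -(f - equilibrium(rho)) / tau   # BGK collision
    f[1] = np.roll(f[1], +1)             # stream v = +1
    f[2] = np.roll(f[2], -1)             # stream v = -1

print(rho.min(), rho.max())              # domains coarsen toward rho ~ 0 and rho ~ 2
```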
We note that the chemical potentials are not exactly constant across both domains, as we would expect in equilibrium. However, the difference between the domains is very small and can be neglected. Analysis aimed at further improving this method is planned.

Conclusions

In this manuscript, we have proposed a moment independent expansion for examining the lattice Boltzmann method to fourth order. This has been done by taking an expansion of the lattice Boltzmann equation up to fourth order and taking its hydrodynamic limit. In the hydrodynamic limit, we have been able to show how to apply specific velocity moments to the expansion to find the fourth-order equations of motion and also to define correction terms to the method. We then applied this moment independent method to various systems and verified the fourth-order method against a traditional second-order method. First, we re-derived results previously obtained for a diffusive system using this new moment independent expansion. This method utilizes a substitution, discussed in Eqns. (38)-(39), for replacing temporal derivatives with spatial derivatives. It was shown that the actual lattice Boltzmann expansion gives rise to higher order temporal derivatives which may be of some importance. The diffusive lattice Boltzmann method was compared to a finite difference solution of Eqn. (40) for a diffusion front. We observed that the lattice Boltzmann algorithm matches the numerical solution almost identically. We then presented two methods for modeling phase separating systems. The first method is a chemical potential based model, which includes no external forces and gives rise to a Cahn-Hilliard equation. We compared the fourth-order method to the second-order one, where the fourth order gave a slight improvement over the second order. We also observed a constant chemical potential across the entire lattice in equilibrium for both second and fourth order methods. Secondly, we used external forces in addition to the traditional diffusive lattice Boltzmann moments. Although we observed phase separation behavior similar to that seen with the chemical potential method, the equation of motion reached is not a Cahn-Hilliard equation; it is instead a type of forced diffusion equation with forcing terms related to mixed gradients in ρ and μ. While the behavior seen between the chemical potential and forcing methods is similar, an asymptotic line was observed in the chemical potential at the locations of the density interfaces in the results of the forcing method. Once again, the fourth-order method provided slight improvements over the second-order method. The chemical potentials were not exactly constant across the lattice, but the difference is small enough to be considered negligible. These preliminary results are of interest and will be investigated with a more thorough analysis.

Appendix A. Extension to arbitrary order

Here we introduce the method of generalizing the fourth-order expansion to arbitrary order. Beginning with Eqn. (14), we first recognize that this equation can be written as a series, where λ_m(τ) is the polynomial prefactor for each specific order.
We can generalize this series to arbitrary order by extending the limits on the sum to ∑_{m=1}^{n} λ_m(τ), where n is the desired order of the expansion. For simplicity's sake, we define the sum on the left hand side compactly. This simple and concise form is valid for systems with a single conserved quantity, but it can be generalized further to account for systems which require more than one conserved quantity. In general, to acquire the equations of motion for additional conserved quantities, we multiply Eqn. (14) by powers of v_iα which correspond to the moments in Eqns. (15)-(19), and define a product over these velocities.
2018-01-17T20:52:36.000Z
2017-10-30T00:00:00.000
{ "year": 2017, "sha1": "16b6d1e44fa7cf8513fd93c0536d04c1168fa9e2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "16b6d1e44fa7cf8513fd93c0536d04c1168fa9e2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
257062723
pes2o/s2orc
v3-fos-license
Residence Time vs. Adjustment Time of Carbon Dioxide in the Atmosphere

We study the concepts of residence time vs. adjustment time for carbon dioxide in the atmosphere. The system is analyzed with a two-box first-order model. Using this model, we reach three important conclusions: (1) The adjustment time is never larger than the residence time and can, thus, not be longer than about 5 years. (2) The idea of the atmosphere being stable at 280 ppm in pre-industrial times is untenable. (3) Nearly 90% of all anthropogenic carbon dioxide has already been removed from the atmosphere.

Introduction

One of the major points in discussion of the anthropogenic global warming (AGW) scenario is the time the added carbon dioxide (CO2) stays in the atmosphere. In an extensive study, Solomon concluded that the residence time of carbon atoms in the atmosphere is of the order of 10 years [1], see Table 1. Such a short time would undermine the prime tenet of AGW, since a molecule of CO2 will not have time to contribute to any greenhouse effect before it disappears to sinks where it cannot do any thermal harm. Water behaves similarly: although it is a molecule with orders of magnitude larger greenhouse potency, it is irrelevant in the AGW discussion, because any water produced by (non-carbon-only) fossil fuels will rapidly equilibrate and the effect is zero; at best, it will raise ocean levels by some micrometers. As such, if the residence time is below 30 years (the climate window), injections of CO2 into the atmosphere would, just like water, not affect the climate. Or, as the IPCC writes in their upcoming report about another atmospheric constituent, "[Water], because of its residence time in the atmosphere averages just 8-10 day, its atmospheric concentration is largely governed by temperature", the value of 8-10 days coming from Ent [2]. However, some claim that the residence time (the amount of time a molecule on average spends in the atmosphere before it disappears from it) is not relevant for this discussion; what matters is the adjustment time (or relaxation time or (re)-equilibration time), the time it takes for a new equilibrium to establish itself, the time constant seen in the observed transient, and, allegedly, these two are different. In a recent work, Cawley explains it as [3] ". . . natural fluxes into and out of the atmosphere are closely balanced and, hence, comparatively small anthropogenic fluxes can have a substantial effect on atmospheric concentrations." Before we continue and address these points, we first need to provide the definitions. According to the IPCC (p. 1457 of Ref. [4]): "Turnover time . . . is the ratio of the mass of a reservoir (e.g., a gaseous compound in the atmosphere) and the total rate of removal from the reservoir. Adjustment time or response time is the time scale characterising the decay of an instantaneous pulse input into the reservoir. The term adjustment time is also used to characterise the adjustment of the mass of a reservoir following a step change in the source strength. Half-life or decay constant is used to quantify a first-order exponential decay process." In the current work, we use these exact two concepts, with turnover time called residence time. We also focus on the first-order systems mentioned here by the IPCC. We discuss the difference between residence time on the one hand and adjustment time on the other hand, and test by mathematical methods the hypothesis that the adjustment time can be longer than the residence time.
After having addressed this core point, we perform a calculation based on the available data to see how they fit. In what follows, we will use a simple two-box first-order model, see Figure 1. The atmosphere has a mass of carbon dioxide equal to A. CO2 molecules can be captured into a sink, and this occurs at a certain rate, a fraction of the molecules being trapped per time unit. Each individual molecule has a certain probability to be captured over time. In other words, a molecule has a residence time τ_a in the atmosphere (also sometimes called the 'turnover time'), which is the reciprocal of the rate, k_a. Likewise, in the sink there is a carbon dioxide mass equal to S, where molecules have a residence time τ_s; an individual molecule has a certain probability over time to be released by the sink into the atmosphere, at a rate k_s. This then defines natural fluxes going out of the atmosphere into the sink and vice versa, in a first-order model given by, respectively,

F_n- = A/τ_a = k_a A, F_n+ = S/τ_s = k_s S, (1)

or, in chemistry notation, A ⇌ S with forward and backward rate constants k_a and k_s (Eqn. (2)), where k_s = 1/τ_s and k_a = 1/τ_a. Note that these two time constants are considered constant, independent of time and temperature. At equilibrium, the two fluxes are equal, and this links the equilibrium masses to the residence times,

A/τ_a = S/τ_s. (3)

Or, to put it in thermodynamic terms, at equilibrium the change in the Gibbs free energy G is zero (Eqn. (4)) [5], with T the temperature and R the gas constant.

Figure 1. Two-box model of atmosphere and sink, with fluxes in Gt per year. Nature adds F_n+ to the atmosphere and takes away F_n- to a sink represented by the bottom box. That sink has a total CO2 mass equal to S. The residence time in the atmosphere, τ_a, is well known and estimated to be 5 years; the residence time in the sink, τ_s, is not well known. Humans add an extra flux into the atmosphere labeled F_h.

On the basis of this, we can determine the adjustment time τ of the atmosphere in terms of the residence times. This requires solving a simple mathematical differential equation; we do not have to worry at this moment about the thermodynamics and explain why the reaction constants are what they are. The questions we ask are, if we add an amount of carbon dioxide ΔA to the atmosphere: 1. What are the new equilibrium values of A and S? 2. How long does it take to establish this new equilibrium? The first question is readily answered. According to Equation (3), the new equilibrium mass in the atmosphere at t = ∞ is given by

A_∞ = A_0 + ΔA τ_a/(τ_a + τ_s), (5)

where the mass before the injection is indicated by the subscript '0', e.g., A_0. A similar equation can be found for the new amount in the sink (swapping τ_a and τ_s in the expression). For the adjustment time, or relaxation time τ, we use the differential equation

dA/dt = S/τ_s - A/τ_a. (6)

At equilibrium, this derivative is zero and the masses obey the ratio found in Equation (3). If we use the fact that the sum of masses after injection is the sum of masses before injection plus ΔA, and that after the injection at t = 0 this total mass stays constant, then, for t > 0,

S(t) = A_0 + S_0 + ΔA - A(t). (7)

Substituting this in the equation before results in

dA/dt = (A_0 + S_0 + ΔA)/τ_s - A (1/τ_a + 1/τ_s). (8)

The solution of this differential equation is an exponential decay with the new equilibrium value A_∞ given by Equation (5), and an adjustment time τ that is the parallel sum of residence times, rather than the residence time of only the atmosphere:

1/τ = 1/τ_a + 1/τ_s, i.e., τ = τ_a τ_s/(τ_a + τ_s). (9)

As we know from the analogue of parallel electronic resistors, the dominant time constant in this case is the smallest one, and the resulting time constant is shorter than the shortest residence time. In other words, the adjustment time of the atmosphere is shorter than the residence time of carbon in the atmosphere.
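A minimal Python sketch of this two-box iteration, following the symmetric example described next for Figure 2 (100 units in each box, an injection of 100, equal residence times of 1000 iterations), confirms the parallel-sum prediction of Equation (9):

```python
import math

# Minimal sketch of the two-box iteration behind Figure 2: apply the
# first-order fluxes of Equation (1) and compare the observed 1/e time
# with the parallel-sum adjustment time of Equation (9).

tau_a, tau_s = 1000.0, 1000.0
A, S = 100.0 + 100.0, 100.0                  # atmosphere after injection, sink

A_inf = (A + S) * tau_a / (tau_a + tau_s)    # new equilibrium, Equation (5)
tau_pred = tau_a * tau_s / (tau_a + tau_s)   # adjustment time, Equation (9)

t = 0
target = A_inf + (A - A_inf) / math.e        # 1/e of the disturbance remaining
while A > target:
    out_flux, in_flux = A / tau_a, S / tau_s # Equation (1)
    A += in_flux - out_flux
    S += out_flux - in_flux
    t += 1

print(t, tau_pred)                           # ~500 iterations observed vs 500 predicted
```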
To give an example, if the residence time in the atmosphere is 10 years and the residence time in the sink is 100 years, the adjustment time is 9.1 years. The statement is also true for non-first-order kinetics; no transient can be slowed down by adding a reflux back to the box under study, as this would only change the equilibrium value while decreasing the time needed to reach it. Note: sometimes the concept of 'half-life' is also used. It is clear that the time at which half of the perturbation has disappeared is given by t_1/2 = τ ln(2). The same reasoning used for τ obviously also applies to t_1/2, with the found time constants multiplied by about 0.69. Figure 2 shows a simulation of such a two-box system. For better visibility, it is a symmetric system of equal atmosphere and sink with equal residence times. Both atmosphere and sink initially have 100.0 units, and before the first iteration 100.0 units are added to the atmosphere. At each iteration, A/τ_a is moved from the atmosphere to the sink and S/τ_s is moved from the sink to the atmosphere. As can be seen, the observed adjustment time is half of the individual residence times, and follows Equation (9). The new equilibrium reached (150 in both atmosphere and sink) is governed by the total amount in the sink and atmosphere after injection (300) and the ratio of kinetic constants (or reciprocal residence times) k_a = 1/τ_a and k_s = 1/τ_s. Note that the old equilibrium, with equal amounts of 100 in each box, will never be reached, not even after an infinite amount of time. The characteristic time for the added amount to be 'processed' and the new equilibrium to be reached is what is defined as the adjustment time: according also to the definition of the IPCC given at the beginning of the text, it is the time it takes for a 1/e fraction of the surplus amount, the 'disturbance' relative to the new (not old) equilibrium, to disappear. This adjustment time is not defined as the time it takes for all added amounts to disappear from the atmosphere. Had we used this latter definition, the adjustment time would be infinite for any value of residence time in the atmosphere, and this definition would thus be rather meaningless.

Figure 2. (a) Simulation of the two-box system of Figure 1. Before injection of 100 into the atmosphere, the atmosphere-sink system was in equilibrium at 100 each, with the residence times in both atmosphere and sink equal to 1000 iterations. At each iteration, A/τ_a is moved from atmosphere to sink and S/τ_s is moved from sink to atmosphere. As can be seen, the observed adjustment time (relaxation time) of the system is 500 iterations, as predicted by Equation (9). After 500 iterations, the surplus quantity in the atmosphere relative to the new equilibrium has been reduced to 1/e, a level indicated by a horizontal dashed line. Further, a half-life can be defined, a time at which half of the transient amplitude has passed, t_1/2 = τ ln(2) = 347. This is indicated by a dotted line. (b) The adjustment time τ as a function of the sink residence time τ_s, normalized by the atmospheric residence time τ_a. The dot indicates the value of the plot in (a), τ_s = τ_a, resulting in τ = τ_a/2. As can be seen, the adjustment time is shorter than the atmospheric residence time for all values of the sink residence time, with, for large τ_s, the adjustment time τ approaching the atmospheric residence time τ_a.

We, thus, refute the claim of the climate-skeptics-skeptics [6], which runs as follows: individual carbon dioxide molecules have a short lifetime of around 5 years in the atmosphere.
However, when they leave the atmosphere, they are simply swapping places with carbon dioxide in the ocean; the final amount of extra CO2 that remains in the atmosphere stays there on a time scale of centuries. Their flawed reasoning is that the adjustment time (relaxation time) is the mass perturbation in the atmosphere divided by the flux balance, and, so goes the reasoning, while the fluxes can be great (and the residence time short), the balance is close to zero and the relaxation time can then approach infinity. Anthropogenic carbon would, thus, be able to stay a long time in the atmosphere. The work of Cawley mentioned before reasons along similar lines, albeit in a more obfuscated way. Equation (5) of that work is similar to the above equation, with F_n+ actually constant. It assumes a non-linear (non-first-order) function of A (oddly called a "linear function" in that work; their Equation (3)) for the outflux F_n-, a function that is not justified and, moreover, does not make sense; it would imply a non-zero outflux F_n- for a system with zero mass A. Moreover, the same equation uses the outflux rate (their k_e, a reciprocal residence time) later as the reciprocal adjustment time. They mixed everything up. However, it leads to a relaxation time that can take on any value and could conveniently support a century-scale adjustment time in the presence of a sub-decade residence time, something that physically does not make sense. In fact, as shown here, the reality is that, if molecules have a residence time in the atmosphere of 5 years, surplus CO2 remains in the atmosphere less than 5 years, albeit not much less if the residence time in the sink is much longer. Since that seems to be the case, for all purposes we can take the residence time as the adjustment time. In fact, we suspect that the residence times of Table 1 are actually adjustment times τ, since these are the time constants easily found in a transient, while determining the residence times τ_a from the transients requires more knowledge of the system. Before we continue, we finish this section by mentioning that the mass ratio of the sink and atmosphere in equilibrium can be estimated from the transient if the injection value ΔA is known as well as the end value A_∞. From that, the residence time in the sink τ_s can also be established. Looking at Figure 2, or Equation (5), we see that, if A_∞, A_0, ΔA and τ_a are known, τ_s can be estimated, and the equilibrium ratio S/A then follows.

Scenarios

We can now do a more detailed analysis based on the available data. (Note: for easy reading, the pre-industrial values are marked by an asterisk, as in F*_n+, etc.). We start off with some facts. The pressure at the bottom of the atmosphere is 1020 mbar or hPa (1.02 × 10^5 N/m^2 in S.I. units). This force per unit area, divided by the gravitational acceleration (9.81 m/s^2), results in a mass density of 1.04 × 10^4 kg/m^2. The total surface area of the planet is 510,072,000 km^2; this translates into a total mass of the atmosphere of 5.304 × 10^18 kg. Using a mixture of 20% oxygen (O2, 2 × 15.999 g/mol) and 80% nitrogen (N2, 2 × 14.007 g/mol), the average molar mass of air molecules is 28.81 g/mol. The atmosphere, thus, contains 1.8408 × 10^20 mol. At this moment, there is a concentration of about [CO2] = 420 ppm (parts-per-million mole fraction) of carbon dioxide in the atmosphere; that is, then, 7.73 × 10^16 mol of CO2. CO2 has a molecular mass of 44.0095 g/mol, so that is a total of A = 3.403 × 10^15 kg.
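This bookkeeping is plain arithmetic and can be checked in a few lines of Python, using only the numbers quoted in the text:

```python
# Reproducing the bookkeeping above: mass of the atmosphere, moles of air,
# and the ppm -> Gt conversion for CO2. No assumptions beyond the numbers
# quoted in the text.

g      = 9.81            # m/s^2, gravitational acceleration
p0     = 1.02e5          # N/m^2, surface pressure
area   = 510_072_000e6   # m^2, surface area of the planet
M_air  = 28.81e-3        # kg/mol, mean molar mass of air
M_co2  = 44.0095e-3      # kg/mol

m_atm  = p0 / g * area             # ~5.30e18 kg of air
n_air  = m_atm / M_air             # ~1.84e20 mol
ppm_kg = 1e-6 * n_air * M_co2      # ~8.1e12 kg, i.e. ~8.1 Gt per ppm

print(m_atm, n_air, ppm_kg / 1e12)   # Gt per ppm
print(420 * ppm_kg / 1e12)           # ~3403 Gt of CO2 at 420 ppm
```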
In a similar way, we can say that 1 ppm equals 8.1 × 10^12 kg. A tonne (t) being a thousand kilos, that means 1 ppm is equivalent to 8.1 Gt, and there is a total of 3403 Gt in the atmosphere (see Table 2 for factual data on atmospheric carbon dioxide). It also has to be noted that there is sometimes confusion caused by the difference between carbon and carbon dioxide when talking about tonnage. It is clear that a tonne of carbon dioxide contains only 273 kg (0.273 tC) of carbon atoms; the rest are oxygen atoms.

Table 2. Carbon dioxide facts (columns: Quantity, Parameter, Value), with the natural outflux F_n- derived from the mass in the atmosphere and the residence time. Other important parameters, influx F_n+, sink mass S, and sink residence time τ_s, are less well known and should be considered adjustable.

The pre-industrial 'equilibrium' (axiomatically assuming it indeed was in equilibrium before we started our industry) was 280 ppm. At this moment, every year we inject F_h = 38 Gt/a into the atmosphere (see Figure 3a). However, the year-on-year increase in A is only about 20 Gt/a [7]. Apparently, about half (47%) immediately disappears, so that there is a net natural flux balance of -18 Gt/a. In our two-box model, the flux goes into the sink without considering the details. The residence time in the atmosphere can be estimated quite well from the above-ground atomic bomb tests [1], which makes us happy that these at least served the purpose of advancing atmospheric science, if nothing else. The best estimate is about τ_a = 5 years [9]. Other references mention different times, with the IPCC mentioning the shortest (4 years) in their 5th Assessment Report (p. 1457 of Ref. [4]), showing that this value is not settled yet; we will use 5 years in this work. The equilibrium amount of carbon dioxide in the atmosphere is open for debate, but, for this purpose, we might use the consensus value of 280 ppm (A* = 2250 Gt). To estimate the amount of CO2 in the sink is very difficult. However, there seems to be a general view that it is fifty times more than in the atmosphere, S = 50A* = 113,400 Gt (relatively unchanged since pre-industrial times). Using the combination of these values does not allow for consistent bookkeeping, as the reader can easily verify. Something has to yield. In what follows, we will try out some scenarios based on specific assumptions.

Scenario: Pre-Industrial Atmosphere Was at Equilibrium

First we assume that the pre-industrial level of 280 ppm was indeed an equilibrium value, with influx equal to outflux in the absence of human flux, as we are wont to believe, but that the mass in the sink S and the residence time τ_s in the sink are unknown. Atmospheric carbon dioxide has increased 50% since these pre-industrial times (from 280 to 420 ppm). Since we are dealing with first-order kinetics (Equation (1)), the natural outflux F_n- has, thus, also increased 50% from pre-industrial times. The current natural outflux is very well determined at F_n- = A/τ_a = 681 Gt/a (both parameters are well-established); in pre-industrial times, it must, thus, have been 33% less, at F*_n- = 454 Gt/a. If we maintain the idea that in pre-industrial times the system was at equilibrium, then the natural influx F*_n+ must have been equal to this outflux F*_n- at 454 Gt/a in pre-industrial times, and the influx now is found from the flux balance, F_n+ = 681 Gt/a - 18 Gt/a = 663 Gt/a (a 46% gain).
The residence time of carbon in the sink cannot have changed, so the sink itself must have gained 46% in mass, a conclusion that is highly unlikely, since it would imply a rather small carbon buffer in the sink if such tiny flux imbalances can disturb the buffer to such a large extent. In this scenario, the amount of carbon in the sink must be about equal to the amount in the atmosphere. Yet, as mentioned before, we can obtain a good estimate of the sink mass in equilibrium from the transients. An especially good tool is the 14C released into the atmosphere by atomic-bomb tests, since this isotope of carbon has very low natural abundance, enabling an accurate estimation of ΔA. Such a partial analysis, with a subset of carbon atoms, is possible, as discussed later. We can actually take the fraction of 14C of all carbon as a measure of the total mass A of this subset. Moreover, note that the half-life of nuclear decay of 14C is 5730 years and, thus, no significant decay took place during the experiment; all carbon-14 disappeared from the atmosphere by transfer to the sink. Figure 4 shows an example of investigations carried out by Enting and Nydal [10] (data of Enting from a work by Perruchoud et al. [11]; extracted with WebPlotDigitizer [12]). Using A_0 equal to zero, from a fit (shown as a dashed line), we find an adjustment time of τ = 14.0 a, an amplitude of ΔA = 740, and A_∞ = 30. From this, we derive τ_s = 344 a and τ_a = 14.6 a. We find an equilibrium sink-to-atmosphere mass ratio of 24. This analysis assumes a Dirac-delta insertion of 14C in 1965. From the figure we can see that the 14C injection already took place earlier, and we thus underestimate ΔA and, therefore, underestimate the S/A mass ratio. However, even this lower estimate can be used to debunk the idea that the sink buffer S is small, and, thus, debunk the idea that the atmosphere was stable in pre-industrial times (in this model).

Figure 4. Transient of 14C in the atmosphere after the atomic-bomb tests. This enables stating that the sink must be at least 24 times larger than the atmosphere. Data from Enting (blue), found in a work of Perruchoud [11], and from Nydal et al. [10] (green), extracted with WebPlotDigitizer [12].

It seems that the idea of the pre-industrial level being stable at 280 ppm (with F_n+ = F_n- at 280 ppm) is untenable. It seems very likely that the sink was already off-balance and emitting amounts of carbon dioxide at the beginning of the industrial era, and that the increase in atmospheric CO2 at any time in human history is not solely due to human activity. This would also explain the large pre-Mauna-Loa values found with chemical methods summarized by Beck [13] and Slocum [14]. For instance, values of 500 ppm have been observed around 1940. Ignoring these facts, on the other hand, would be equivalent to throwing entire generations of scientists under the bus.
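The inversion from the fitted transient parameters to the residence times can be spelled out in a few lines; the following sketch reproduces the numbers quoted for the Figure 4 fit (small differences are rounding):

```python
# Sketch: recover residence times from the Figure 4 transient fit (A_0 = 0).
# Uses A_inf/dA = tau_a/(tau_a + tau_s), which follows from Equation (5),
# together with the parallel-sum relation of Equation (9).

tau, dA, A_inf = 14.0, 740.0, 30.0        # fitted values from the text

tau_a = tau / (1.0 - A_inf / dA)          # atmospheric residence time
tau_s = 1.0 / (1.0 / tau - 1.0 / tau_a)   # sink residence time
print(tau_a, tau_s, tau_s / tau_a)        # ~14.6 a, ~346 a, mass ratio ~24
```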
Scenario: The Sink Is Fifty Times Larger Than the Atmosphere

Next, we adopt the assumption that the sink at this moment really has 50 times more carbon than the atmosphere, in other words S = 50A = 170,000 Gt, and release the restriction that the atmosphere was stable at 280 ppm; in pre-industrial times there can have been a flux imbalance. We can first make an estimate of the residence time in the sink by noting that the natural outflux F_n- = A/τ_a = 681 Gt/a at this moment is not fully compensated by influx from the sink. An imbalance of 18 Gt/a exists, so F_n+ = 663 Gt/a. Given the sink mass, this results in a residence time of τ_s = S/F_n+ = 256 a. Most of the 1696.5 Gt that we have produced from burning fossil fuels (Figure 3) must have disappeared into the sink. However, that did not make a big dent. In pre-industrial times, the sink mass must have been only 168,000 Gt. The emissions from that sink at that time must have been F*_n+ = S*/τ_s = 657.4 Gt/a. The outflux then, at 280 ppm (A* = 2250 Gt), was F*_n- = A*/τ_a = 450 Gt/a. We indeed see a tremendous outgassing from the sink in pre-industrial times. The system was far from equilibrium, with the imbalance being a net influx of F*_n+ - F*_n- = 207 Gt/a. Where at the moment there is a net natural flux of 18 Gt/a out of the atmosphere, in pre-industrial times, in this two-box first-order model with a sink 50 times larger than the atmosphere, there was a net natural influx of 207 Gt/a. Somewhere we must have passed the equilibrium value and, considering the above numbers, this value must be rather close to today's concentration of 420 ppm.

Scenario: Residence Time in the Sink Is Much Larger Than in the Atmosphere

If we only assume that the residence time in the sink is much larger than in the atmosphere, τ_s ≫ τ_a, then we can get a good idea of what has happened to our anthropogenic contribution to the carbon in the atmosphere, F_h, based on the two-box model. Because the model is first-order, with all fluxes depending linearly on masses, the carbon dioxide can in our analysis be decomposed into anthropogenic and natural, and each treated separately. In a statistical physics/mathematics analogy, it is as if one were yellow balls and the other red, and we are constantly randomly taking a fixed fraction (not a fixed number) of balls from one of the boxes and putting them in the other; the chances of getting a yellow or red ball are proportional to the number of balls of that color in the box. A very important observation: adding yellow balls to the system does not change anything about the dynamics of the red balls. Some may think that adding yellow balls to one box (the atmosphere) influences the amount of red balls in it, but that is not the case in first-order kinetics; the yellow and red ball subsystems are fully independent and can be analyzed separately, even if the observer is colorblind (such that carbon dioxide molecules are indistinguishable). For instance, if the red balls (natural CO2) were at equilibrium before the yellow balls were added, no net flow of red balls will take place after adding them. In other words, we can analyze the anthropogenic and natural CO2 entirely separately and, at the end, simply add them together. The amount of anthropogenic CO2 in the atmosphere does not influence the amount of natural CO2 in the atmosphere, and vice versa. We can, thus, analyze how much of the anthropogenic CO2 still remains in the atmosphere by simply analyzing it with our two-box model. (In the case where the atoms can be distinguished, equivalent to being able to see the colors of the balls, we can determine the kinetics parameters by looking at only one type, only one color.) Figure 3 shows the yearly carbon dioxide emissions into the atmosphere (left panel; data source: Our World In Data [8]). The total amount emitted so far is 1696.5 Gt. The right panel shows the cumulative emissions, Σ_i F_h(i), summed over years i. If at every year we apply the fluxes according to Equation (1), then we can see at each year how much of the anthropogenic CO2 is still in the atmosphere; a sketch of this bookkeeping is given below. The right panel of Figure 3 shows this for τ_s = 50 τ_a.
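```python
import numpy as np

# Sketch of the "yellow balls" bookkeeping: anthropogenic CO2 is injected
# year by year and then exchanged between atmosphere and sink with the
# first-order fluxes of Equation (1), independently of the natural carbon.
# The emissions series F_h below is a flat placeholder; the paper uses the
# Our World In Data series [8] with a 1696.5 Gt total.

tau_a = 5.0
tau_s = 50.0 * tau_a

F_h = np.full(170, 10.0)       # hypothetical: 170 years of 10 Gt/a

A_h, S_h = 0.0, 0.0            # anthropogenic carbon in atmosphere and sink
for F in F_h:
    A_h += F                                     # this year's injection
    out_flux, in_flux = A_h / tau_a, S_h / tau_s
    A_h += in_flux - out_flux
    S_h += out_flux - in_flux

print(A_h, S_h, A_h / F_h.sum())   # fraction of emitted CO2 still airborne
```

With the actual historical emissions series in place of the placeholder, this loop is what produces the 202.3 Gt figure discussed next.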
We see that only 202.3 Gt of the total injected 1696.5 Gt is still in the atmosphere. Over these years, the amount of CO2 in the atmosphere has risen from 280 ppm (2268 Gt) to 420 ppm (3403 Gt), an increment of 1135 Gt. Of this, 202.3 Gt (17.8%) would be attributable to humans, and the rest, 932.7 Gt (82.2%), must be from natural sources. In view of this, curbing carbon emissions seems rather fruitless; even if we destroy the fossil-fuel-based economy (and human wealth with it), we would only delay the inevitable natural scenario by a couple of years.

Scenario: Abandoning Constant Residence Times

We have seen here how the first-order-kinetics two-box model results in conclusions contrary to data. We could, of course, change our model. We could abandon the idea of first-order kinetics (where flux is proportional to mass), but that would be problematic to justify with physics. Yet some authors do that, and, in that case, one can add parameters to the system until it has the desired property of a stable atmosphere at A* = 280 ppm. The chemical measurements described by scientists such as Beck and Slocum mentioned above still remain to be explained. How could we have had very large concentrations in recent history? We could also add more boxes to the system, distinguishing the sinks, or differentiating between deep ocean and shallow ocean, dissolved carbon dioxide gas, CO2(aq), and dissolved inorganic carbon (sea-shells), or between CO2 disappearing into the oceans and CO2 being sequestered in biological matter on land, etc. Each box can then have its own kinetics; as an example, plant growth is sublinear with CO2 concentration. We leave it for further work to formally analyze the adjustment time in higher-order kinetics systems with any number of boxes. However, we expect the most likely improvement to the model to come from abandoning the idea that the residence times τ_a and τ_s are constant. They are, in fact, very much dependent on temperature. As an example, the ratio between the two that sets the concentrations (and, thus, the masses) of carbon dioxide in the atmosphere and in the sink, if we assume this sink to be the oceans, is governed by Henry's Law, and this concentration ratio is then dependent on temperature. When including such effects, we might even conclude that the entire concentration of carbon dioxide in the atmosphere is fully governed by such environmental parameters and fully independent of human injections into the system. A is simply a function of many parameters, including the temperature T, but not of F_h. It is as if the relaxation time is extremely short and any disturbances introduced by humans, or by other means, rapidly disappear, rapidly reaching the equilibrium determined by nature. This fits very nicely with the recent finding that the stalling of the economy and the accompanying severe reduction in carbon emissions during the Covid pandemic had no visible impact on the dynamics of the atmosphere whatsoever [15]. The result of that research, the hypothesis that the carbon dioxide increments in the atmosphere were fully due to natural causes and not humans, fits the experimental data very well, while the hypothesis that humans are fully responsible for the increments can equally be rejected scientifically. This also agrees with the conclusions of Segalstad that "The rising atmospheric CO2 is the outcome of rising temperature rather than vice versa" [16].
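As a pointer to how the temperature dependence mentioned above could enter the model, the sketch below evaluates the van 't Hoff form of Henry's law for CO2 in water; the constants are textbook-order values and are assumptions for illustration, not numbers taken from the paper.

```python
import math

# Henry's law solubility of CO2 in water falls with temperature
# (van 't Hoff form), so the equilibrium atmosphere/sink ratio, and hence
# effective residence times, shift with T. Constants are approximate
# literature values, assumed here only to illustrate the mechanism.

H0   = 3.3e-4    # mol m^-3 Pa^-1, CO2 solubility at T0 (approximate)
dlnH = 2400.0    # K, temperature-dependence coefficient d(ln H)/d(1/T)
T0   = 298.15    # K

def henry(T):
    return H0 * math.exp(dlnH * (1.0 / T - 1.0 / T0))

# a 1 K warming lowers solubility by roughly 3%, shifting the A/S balance
print(henry(298.15), henry(299.15), henry(299.15) / henry(298.15))
```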
The pre-industrial atmosphere might indeed have been in equilibrium, and we are currently also in, or close to, equilibrium. That seems to us the most likely scenario. Once we admit the possibility of non-anthropogenic sources of carbon dioxide, we can start finding out what they might be; examples such as volcanic sources and planetary and solar cycles spring to mind. It might well be that the climate puzzle is solved in such areas as the link between solar activity, seismic activity and climate [17]. This is, however, not the focus of this work.

We conclude by summarizing the major findings of this analysis using a first-order-kinetics two-box model:

(1) The adjustment time is never larger than the residence time and is less than 5 years.
(2) The idea of the atmosphere being stable at 280 ppm in pre-industrial times is untenable.
(3) Nearly 90% of all anthropogenic carbon dioxide has already been removed from the atmosphere.

Institutional Review Board Statement: Not applicable.

Conflicts of Interest: The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:

AGW   Anthropogenic Global Warming
IPCC  Intergovernmental Panel on Climate Change
Prevalence, intensity and associated risk factors of soil-transmitted helminth and schistosome infections in Kenya: Impact assessment after five rounds of mass drug administration in Kenya

Background

In Kenya, over five million school age children (SAC) are estimated to be at risk of parasitic worms causing soil-transmitted helminthiasis (STH) and schistosomiasis. As such, the Government of Kenya launched a National School Based Deworming (NSBD) program in 2012 targeting the at-risk SAC living in endemic regions, with the aim of reducing infection prevalence to a level where it no longer constitutes a public health problem. The impact of the program has been consistently monitored from 2012 to 2017 through a robust and extensive monitoring and evaluation (M&E) program. The aim of the current study was to evaluate the parasitological outcomes and additionally investigate water, sanitation and hygiene (WASH) related factors associated with infection prevalence after five rounds of mass drug administration (MDA), to inform the program's next steps.

Materials and methods

We utilized a cross-sectional design in a representative, stratified, two-stage sample of school children across six regions in Kenya. A sample size of 100 schools with approximately 108 children per school was purposively selected based on the Year 5 STH infection endemicity prior to the survey. Stool samples were examined for the presence of STH and Schistosoma mansoni eggs using the double-slide Kato-Katz technique; urine samples were processed using the urine filtration technique for the presence of S. haematobium eggs. Survey questionnaires were administered to all the participating children to collect information on their demographic and individual, household and school level WASH characteristics.

Principal findings

Overall, STH prevalence was 12.9% (95%CI: 10.4–16.1), with species prevalence of 9.7% (95%CI: 7.5–12.6) for Ascaris lumbricoides, 3.6% (95%CI: 2.2–5.8) for Trichuris trichiura and 1.0% (95%CI: 0.6–1.5) for hookworm. S. mansoni prevalence was 2.2% (95%CI: 1.2–4.3) and S. haematobium prevalence was 0.3% (95%CI: 0.1–1.0). All the infections showed significant prevalence reductions when compared with the baseline prevalence, except S. mansoni. From multivariable analysis, increased odds of any STH infection were associated with not wearing shoes, adjusted odds ratio (aOR) = 1.36 (95%CI: 1.09–1.69), p = 0.007; a high number of household members, aOR = 1.21 (95%CI: 1.04–1.41), p = 0.015; and school absenteeism of more than two days, aOR = 1.33 (95%CI: 1.01–1.80), p = 0.045. Further, children below five years had up to four times higher odds of STH infection, aOR = 4.68 (95%CI: 1.49–14.73), p = 0.008. However, no significant factors were identified for schistosomiasis, probably because the low prevalence levels affected the performance of the statistical analysis.

Conclusions

After five rounds of MDA, the program shows low prevalence of STH and schistosomiasis, however not yet at a level where the infections are no longer a public health problem. With considerable inter-county heterogeneity in infection prevalence, the program should adopt future MDA frequencies based on each county's infection prevalence status. Further, the program should encourage interventions aimed at improving coverage among preschool age children and improving WASH practices as long-term infection control strategies.
Introduction

Soil-transmitted helminths (STH; the three common species being Ascaris lumbricoides, the hookworms Necator americanus and Ancylostoma duodenale, and Trichuris trichiura) and schistosomiasis (mainly caused by Schistosoma mansoni and S. haematobium) are among the most

Western, Rift Valley and Coast regions), and those sites that had not had routine parasitological monitoring since baseline but had been participating in deworming (these sites included the Eastern and North Eastern regions). Further, during this evaluation survey, the program collected individual, household and school level WASH and demographic characteristics in order to determine factors associated with the risk of STH and schistosomiasis in the national program in Kenya.

Study design and sampling

The study utilized a cross-sectional design in a representative, stratified, two-stage sample of school children across six regions in Kenya. A sampling frame of all primary schools participating in the NSBD program within a county was taken. A sample size of 100 schools (the five schools per county with the highest STH prevalence) with approximately 108 children per school was calculated to be adequate to detect a 5% change in prevalence of STH infections, assuming power of 80% and test size of 5%, and considering the anticipated variance in prevalence. The schools were purposively selected based on the Year 5 (the year 2017) STH infection endemicity prior to the study [15]. In each school, 18 children (nine girls and nine boys) were sampled randomly from each of six classes, one early childhood development (ECD) class and classes two to six, using random number tables, for a total of approximately 108 children per school.

Survey procedures

The selected schools were visited three days prior to the survey date to explain the purpose of the survey to the school head teacher and the school committee; permission to conduct the study was sought at the school level. On the day of the survey, each selected child was given a container (poly pot) labeled with a unique identifier and instructed to place a portion of his or her own stool sample in it. The stool samples were then processed in the laboratory within 24 hours and examined in duplicate for the presence of STH and S. mansoni eggs by two technicians using the Kato-Katz technique [16]. Additionally, urine samples were obtained only from children in the participating schools of the Coastal, Eastern and North Eastern regions, where S. haematobium is known to be focally prevalent. The urine samples were then processed in the laboratory within 24 hours using the urine filtration technique with polycarbonate membrane filters and examined in duplicate for the presence of S. haematobium eggs by two technicians [17]. Children not present on the day of the survey were not included in the survey. As part of the NSBD program during MDA, all participating children were treated with albendazole (400 mg) and praziquantel (40 mg/kg) for STH and schistosomiasis respectively, according to MoH and WHO guidelines [18].

Data collection and management

Data were collected in two phases, between 29th January and 17th February 2018 and between 8th and 24th May 2018, approximately 12 months after the Year 5 MDA in each region. Single stool specimens were collected to assess prevalence and intensity of STH (A. lumbricoides, T. trichiura, and hookworm) and S. mansoni infections.
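A minimal sketch of the kind of calculation behind that sample size, using a two-proportion power formula inflated by a design effect for school-level clustering; the baseline prevalence (30%), the intra-cluster correlation (0.05) and the cluster size used below are illustrative assumptions, not the program's actual inputs.

```python
# Sketch: children needed to detect a 5-percentage-point drop in prevalence
# (power 80%, two-sided alpha 5%), inflated by a design effect for clustering.
# p0, the ICC and the cluster size m are illustrative assumptions.

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p0, p1 = 0.30, 0.25    # assumed prevalences before/after
m, icc = 108, 0.05     # children per school, intra-cluster correlation

effect = proportion_effectsize(p0, p1)
n_srs = NormalIndPower().solve_power(effect_size=effect, power=0.80,
                                     alpha=0.05, alternative='two-sided')
deff = 1 + (m - 1) * icc            # design effect for cluster sampling
n_cluster = n_srs * deff

print(f"per-arm n (simple random sampling): {n_srs:.0f}")
print(f"design effect: {deff:.2f} -> clustered n: {n_cluster:.0f} "
      f"(~{n_cluster / m:.0f} schools)")
```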
Single urine specimens were additionally collected in the Coast, Eastern and North Eastern regions to assess prevalence and intensity of S. haematobium infections. In all areas, pilot-tested survey questionnaires were administered to all the participating children to collect information on the participants' demographic characteristics and their individual, household and school level WASH related behaviours, practices and characteristics. Both the survey questionnaires and the laboratory reporting forms were programmed onto android-based smartphones and used to capture data electronically using the Open Data Kit (ODK) system, which incorporated in-built data quality checks to reduce data entry errors [19].

Ethics statement

Ethical approval for the study protocol was obtained from the Kenya Medical Research Institute (KEMRI)'s Scientific and Ethics Review Unit (SSC Number 2206). At county level, approval was provided by the respective county health and education authorities. At school level, parental consent was obtained based on passive opt-out consent rather than written opt-in consent, due to the routine and low-risk nature of the study procedures. Additionally, individual assent was obtained from each child before participation in the study. All data used were anonymised.

Statistical analysis

Infection prevalence and average intensity of infection were calculated for STH and schistosomiasis, and the 95% confidence intervals (CIs) were determined using binomial and negative binomial regression models respectively, taking into account clustering by schools. Infection intensities were classified into light, moderate and heavy infections according to WHO guidelines (S1 Table) [20], and the prevalence of light, moderate and heavy infections, together with 95%CIs, was obtained using a binomial regression model taking into account clustering by schools. We calculated the prevalence of each intensity class using two approaches: 1) taking the denominator as the overall number of children examined, and 2) taking the denominator as the total number of children positive for each respective infection. The use of these two approaches enabled us to conveniently compare the morbidity due to these infections and allows easy comparison with other studies.

WASH and sociodemographic conditions of interest from the questionnaires included reported individual, household and school-level variables that are known factors affecting STH or schistosomiasis prevalence. Individual factors included age, gender, handwashing, defecation and urination, and soil-eating and shoe-wearing behaviours at school and home. Household-level factors included the availability of a toilet, anal cleansing material, and a handwashing facility equipped with water and soap, the type of water source, as well as the number of people living in an individual's household. Information regarding the type of household latrine (improved or unimproved) was not collected, as the surveys were conducted at school locations. School-level factors included interviewer-verified availability and type of school toilet facility, availability and type of handwashing facility equipped with water and soap, and availability of anal cleansing material at school. Additionally, latrine structural integrity and cleanliness were assessed by the interviewers.
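The intensity classification can be made concrete with a small helper; the egg-count thresholds below are the commonly cited WHO cut-offs and stand in for the program's S1 Table, so they should be checked against it before any reuse.

```python
# Sketch: classify helminth infection intensity from egg counts using the
# commonly cited WHO cut-offs (eggs per gram of stool, except S. haematobium,
# counted in eggs per 10 mL of urine). Thresholds stand in for S1 Table.

THRESHOLDS = {                    # (light upper bound, moderate upper bound)
    "A. lumbricoides": (4_999, 49_999),
    "T. trichiura":    (999, 9_999),
    "hookworm":        (1_999, 3_999),
    "S. mansoni":      (99, 399),
}

def intensity(species: str, count: int) -> str:
    """Return 'negative', 'light', 'moderate' or 'heavy' for an egg count."""
    if count == 0:
        return "negative"
    if species == "S. haematobium":          # only light/heavy classes
        return "light" if count < 50 else "heavy"
    light_max, moderate_max = THRESHOLDS[species]
    if count <= light_max:
        return "light"
    return "moderate" if count <= moderate_max else "heavy"

print(intensity("A. lumbricoides", 12_000))  # -> moderate
print(intensity("S. mansoni", 450))          # -> heavy
```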
Latrine structural integrity was assessed by the evidence of all of the following: a roof, walls with no holes, a functional lockable door, and a stable floor slab, while latrine cleanliness was assessed by the absence of strong smell, the absence of visible faeces on the latrine floor, and a clean floor. Index scores for latrine structural integrity and cleanliness were created using factor analysis [21], with a score range between zero and one, higher scores indicating greater cleanliness/structural integrity.

Overall, the WASH factors associated with STH or schistosomiasis prevalence were analyzed first using univariable analysis and described as odds ratios (OR) using a mixed effects logistic regression model at two levels: pupils nested within schools selected within counties. To select a minimum adequate set of variables for multivariable analysis, an inclusion criterion of p-value <0.1 was pre-specified in a sequential (block-wise) variable selection method which retained covariates meeting the set criterion; however, sex and age were retained as fixed terms in the final models regardless of statistical significance, due to their known importance. Adjusted ORs (aOR) of the most parsimonious model were obtained by mutually adjusting all selected variables using a multivariable mixed effects logistic regression model at 95%CI, taking into account the hierarchical nature of the data. All the statistical analyses were carried out using STATA version 14.1 (STATA Corporation, College Station, TX, USA). Graphs were developed using the ggplot2 package in R [22]. School locations were mapped using ArcGIS Desktop version 10.2.2 software (Environmental Systems Research Institute Inc., Redlands, CA, USA).

Results

Overall, 100 schools (9,801 children) with a median age of 10 years (range: 1–21 years) were surveyed across 20 counties in the Western, Nyanza, Rift Valley, Coast, Eastern and North Eastern regions prior to the Year 6 MDA. Five schools with a total of 108 children per school were surveyed in each county. Approximately half (50.2%) of the children were males. It is important to note that the wide age range here is due to the inclusion of a few younger children (<2 years) found in ECD classes, who probably accompanied their siblings or mothers to school on the day of the survey; on the other hand, it is normal to find older children (≥15 years) in most primary schools in Kenya for a variety of reasons. Table 1 provides the number of schools and children examined by county, as well as the range of school-level prevalence for both STH and schistosomiasis.

The overall prevalence of any STH infection was 12.9% (Table 2). Overall, the undifferentiated STH prevalence reduced by 61.7% (p<0.001) from the baseline of 32.3%; similarly, the specific species showed significant declines over the period. Hookworm reduced by 93.6% (p<0.001) from a baseline prevalence of 15.4%, A. lumbricoides reduced by 52.9% (p<0.001) from a baseline prevalence of 18.1%, and T. trichiura reduced by 42.7% (p<0.001) from an initial prevalence of 6.7%. Similar declines were observed for the mean intensity of infections (Table 2). Analysis of undifferentiated STH prevalence by demographics (sex and age group) showed no significant difference between male and female children (χ² = 0.31, p = 0.578) or by age group (χ² = 4.08, p = 0.130).
A non-significant, slightly higher hookworm infection prevalence was seen in older children (<5 years: 0%, 5–14 years: 1.0% and >14 years: 1.2%), although analyzing differences at this low level of prevalence is affected by the limits of diagnostic accuracy. For A. lumbricoides, a non-significantly higher infection prevalence was seen in younger children (<5 years: 14.7%, 5–14 years: 9.7% and >14 years: 4.9%), and for T. trichiura, a non-significantly higher infection prevalence was observed among those aged 5 to 14 years, at 3.6%, compared to those aged <5 years or >14 years (Table 3).

prevalence of any STH above 20%. All surveyed schools in the Eastern and North Eastern regions had STH prevalence below 1%. There was no infection among the surveyed schools in Garissa and Wajir counties. Three counties (Kitui, Makueni and Taita Taveta) had STH prevalence below 1%, six counties (Bungoma, Kilifi, Kisumu, Kwale, Migori, and Mombasa) had STH prevalence between 1% and 10%, one county (Kericho) had STH prevalence between 10% and 20%, and the remaining eight counties had STH prevalence between 20% and 50% (Table 4). County-specific mean intensities of infection are shown in Table 5.

At regional level, STH infections were generally more prevalent in the Rift Valley Region (21.8%), with species-specific prevalence of 17.4% for A. lumbricoides, 5.5% for T. trichiura and 0.1% for hookworm, followed by the Western Region (20.9%), with species-specific prevalence of 15.8% for A. lumbricoides, 7.7% for T. trichiura and 1.3% for hookworm. Low percentages of infection were observed in the Eastern Region, where only hookworm was present, at 0.5% (noting that the diagnostic accuracy of Kato-Katz at this level of prevalence is questionable). No STH infection was observed in any of the surveyed schools of the North Eastern Region.

Using both approaches, we found that STH infections were predominantly of light intensity, followed by moderate and then heavy intensity. In particular, using the first approach, the prevalence of light infections was 5.8% (n = 561) for A. lumbricoides, 5.2% (n = 322) for T. trichiura and 2.8% (n = 91) for hookworm, and a prevalence of heavy infections was observed for A. lumbricoides only, at 1.9% (n = 3). The prevalence of moderate to heavy infection for any STH was 6.0% (n = 399), with the majority of the moderate to heavy infections being of A. lumbricoides. The first approach showed that the prevalence of moderate to heavy intensity has significantly reduced since baseline for all the STH infections except T. trichiura (Table 6).

Schistosome infections

The overall prevalence of S. mansoni infection was 2.2% (95%CI: 1.2–4.3) and of S. haematobium, 0.3% (95%CI: 0.1–1.0), with respective mean intensities of infection of 12 epg (95%CI: 5–31) and 0 eggs/mL (95%CI: 0–1) (Table 2). The prevalence of schistosomiasis overall was very low, and at these levels only S. haematobium prevalence reduced significantly, by 98.5% (p<0.001) from a baseline prevalence of 18.0%. S. mansoni showed a non-significant reduction of 7.9% (p = 0.779) from an initial prevalence of 2.4%. However, it is difficult to assess significant reductions in both species of Schistosoma, due to the poor diagnostic performance of Kato-Katz at low levels of prevalence, and also because the baseline prevalence of S. mansoni was very low to begin with.
Similar decline patterns were observed for the mean intensities of infection (Table 2).

prevalence of S. mansoni above 10%. There was no S. mansoni infection in the surveyed schools in ten counties, below 1% prevalence in four counties, between 1% and 10% in five counties, and above 10% only in Busia County. Similarly, S. haematobium prevalence was zero in Makueni, Wajir and Taita Taveta counties, below 1% in Garissa and Kilifi counties, and between 1% and 10% in Kwale and Mombasa counties (Table 4). County-specific mean intensities of infection are shown in Table 5.

At regional level, S. mansoni infection was most prevalent in Western (6.3%), followed by Eastern (2.4%), Nyanza (1.9%) and Coast (0.3%) regions; no S. mansoni infection was observed in the North Eastern and Rift Valley regions. Additionally, S. haematobium infection was observed only in Coast (0.4%) and North Eastern (0.3%) regions, at very low levels, with no observed infection in the Eastern Region.

Similarly, using both approaches, we found that schistosome infections were predominantly of light intensity, except S. mansoni, which was predominantly of heavy intensity according to the first approach. In particular, using the first approach, the prevalence of light infections was 0.9% (n = 85) and 65.2% (n = 6), and the prevalence of heavy infections 2.5% (n = 55) and 0.1% (n = 3), respectively, for S. mansoni and S. haematobium. The prevalence of moderate to heavy infection for any schistosome was 3.2% (n = 26), with the majority of the moderate to heavy infections being of S. mansoni. The first approach showed that the prevalence of moderate to heavy intensity of S. haematobium significantly reduced since baseline (RR = 98.5%, p<0.001), but it instead increased more than twofold for S. mansoni (Table 6).

Individual, household and school WASH characteristics

All 9,801 children surveyed from the 100 schools were administered a questionnaire, in which they reported on their WASH practices and behaviours both at school and at home. We would, however, point out that some of the indicators measured, such as the number of household occupants and shoe-wearing behaviour, are proxy indicators of poverty rather than risk factors per se. Table 7 gives the WASH characteristics, overall and stratified by region.

The overall reported average number of household occupants was 6.8 people (standard deviation (SD) = 2.6 people). At the time of the interview, the majority of the pupils, 8,101 (84.5%), were wearing shoes. Geophagy was not uncommon, reported by 2,596 (27.1%) of the pupils. Nearly half of the pupils, 4,866 (49.7%), reported use of an improved water source for drinking at their household.
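A small worked example of the two denominator conventions used above, with hypothetical counts rather than the study's own data:

```python
# Sketch of the two denominator conventions for intensity-class prevalence,
# as described in the Statistical analysis section. Counts are hypothetical.

n_examined = 9801      # all children examined
n_positive = 950       # children positive for the species
n_moderate_heavy = 60  # positives with moderate-to-heavy egg counts

# Approach 1: denominator = all children examined (population burden).
prev_pop = 100 * n_moderate_heavy / n_examined
# Approach 2: denominator = positive children only (severity among infected).
prev_pos = 100 * n_moderate_heavy / n_positive

print(f"moderate-to-heavy: {prev_pop:.2f}% of all examined, "
      f"{prev_pos:.1f}% of the infected")
```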
Reported latrine coverage (any type of latrine) at household level was high, at 9,329 (97.3%); however, fewer pupils reported always having a handwashing facility equipped with water and soap, 1,422 (14.5%), or tissue/water for anal cleansing, 5,174 (54.0%), available in their households. School WASH conditions varied considerably by region as well as by county (Table 7). The average number of pupils per school was 527 (SD = 377). Improved water sources were interviewer-observed in 40

Univariable and multivariable analysis of factors associated with STH infections

A large number of individual, household and school level WASH and socioeconomic factors were assessed in a univariable model and revealed significant associations with STH infections, as shown in Table 8. They were then fitted in a multivariable model; after adjusting for other factors, however, many of them did not remain significant (Table 9). Not wearing shoes on the day of interview (adjusted odds ratio (aOR) = 1.36, p = 0.007) and household membership of more than five members per household (aOR = 1.21, p = 0.015) showed significant associations with increased odds of any STH infection. Additionally, a gradient effect relative to never being absent from school was evident: although one day's absence was non-significant (aOR = 1.05, p = 0.612), two days' absence (aOR = 1.32, p = 0.029) and more than two days' absence (aOR = 1.33, p = 0.045) were both significantly associated with increased odds of STH infection. Children aged 5–14 years (aOR = 3.25, p = 0.015), relative to those aged over 14 years, were shown to have three times greater odds of STH infection; children aged less than 5 years had over four times greater odds of STH infection than those aged over 14 years (aOR = 4.68, p = 0.008). Availability of tissue/newspaper/water for anal cleansing at home (aOR = 0.77, p = 0.005) and household possession of electricity (aOR = 0.75, p = 0.001) were associated with lower odds of any STH infection. Sex was not significantly associated with STH infection.

Nearly all of the above factors, namely not wearing shoes (aOR = 1.46, p = 0.003), high household membership (aOR = 1.24, p = 0.001) and school absenteeism of two days (aOR = 1.40, p = 0.016) and of more than two days (aOR = 1.35, p = 0.072), similarly showed a significantly higher risk of A. lumbricoides infection. The availability of tissue/newspaper/water for anal cleansing at home (aOR = 0.78, p = 0.015) and the availability of a television at home (aOR = 0.68, p = 0.001) showed significant associations with lower odds of A. lumbricoides infection (Table 10). Children aged less than 5 years had significantly greater odds of infection than children aged over 14 years (aOR = 3.56, p = 0.045), but children aged 5–14 years did not (aOR = 1.72, p = 0.308). Sex was not significantly associated with A. lumbricoides infection.

For T. trichiura infection, age 5–14 years (aOR = 10.01, p = 0.027) and two days' school absenteeism (aOR = 1.72, p = 0.020) were the only risk factors identified, though more than two days' absenteeism was mildly, non-significantly associated with increased odds of infection (aOR = 1.59, p = 0.072). Always using the school latrine (aOR = 0.26, p = 0.010) was shown to be a protective factor (Table 11). A multivariable model for hookworm infection was not run due to insufficient observations.
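As a sketch of the kind of model behind these adjusted odds ratios: the program fitted two-level mixed effects logistic regressions in Stata; the Python version below approximates the school-level clustering with cluster-robust standard errors on an ordinary logit rather than a random intercept, and the data frame and column names are hypothetical.

```python
# Sketch: logistic regression for any-STH infection with school-clustered
# standard errors, approximating the paper's two-level mixed effects model
# (fitted in Stata). The data frame and column names are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_children.csv")  # hypothetical per-child records

model = smf.logit(
    "sth_positive ~ no_shoes + big_household + C(absent_days)"
    " + anal_cleansing_home + electricity + C(age_group) + sex",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})

# Exponentiate coefficients and CI bounds to get aORs with 95% CIs.
or_table = np.exp(model.conf_int())
or_table.columns = ["2.5%", "97.5%"]
or_table["aOR"] = np.exp(model.params)
print(or_table)
```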
Univariable and multivariable analysis of factors associated with schistosomiasis

Univariable analysis of individual, household and school level WASH and socioeconomic factors showed no significant associations between the prevalence of schistosomiasis and any of the variables of interest, as shown in Table 12. Factors such as not wearing shoes (odds ratio (OR) = 1.20, p = 0.501), not receiving treatment during the last MDA (OR = 1.08, p = 0.779), sharing a toilet/latrine with other households (OR = 1.12, p = 0.515), and school absenteeism (OR = 1.04, p = 0.873) showed increased, though non-significant, odds of S. mansoni infection. Additionally, only school absenteeism (OR = 1.13, p = 0.894) showed non-significantly increased odds of S. haematobium infection. As such, and also due to the low number of observations, multivariable analysis of factors associated with schistosomiasis prevalence was not conducted.

Discussion

Since Kenya has implemented the NSBD program for five years, it is important to conduct an evaluation of the program with a view to refining MDA requirements in line with the WHO guidelines [5]. Additionally, this evaluation enables us to understand the variation of the program's impact between and within counties, as well as the risk factors associated with infection.

STH and schistosome infections

This evaluation shows that STH infections have declined tremendously after five rounds of treatment, to a prevalence of 12.9%, a highly significant reduction of 61.7% since baseline. Less marked, but also important, are the reductions in schistosomiasis prevalence, to 2.2% (S. mansoni) and 0.3% (S. haematobium). Mean intensities for these parasites reduced similarly over this period. These results indicate that Kenya, via its NSBD, is making good headway towards achieving elimination of these diseases as a public health problem. WHO recommendations indicate that where the STH five-year evaluation prevalence is assessed at 10–20%, preventive chemotherapy should continue once per year, and where the schistosomiasis five-year evaluation prevalence is assessed at 1–10%, it should continue once every two years [5].

Whilst the drops in each STH species' prevalence were significant, the species showed different rates of decline, perhaps reflecting their different transmission dynamics and responses to albendazole [23,24]. A. lumbricoides has previously been shown to have high levels of re-infection [14], while single-dose oral albendazole, as given in STH control programs, has been shown to be less efficacious against T. trichiura [25]. For schistosomiasis, whilst only the decline in S. haematobium was significant, numbers overall for both schistosome species were very low. The non-significant decline in S. mansoni might be attributed to several factors, such as the poor sensitivity of the Kato-Katz diagnostic technique at low prevalence and the already low baseline prevalence [26], along with a known gap in treatment coverage, praziquantel having been unavailable in one year of the NSBD program. These factors might have resulted in the observed high prevalence of heavy-intensity schistosome infection, especially of S. mansoni, and as such this prevalence is still above the newly defined WHO target of <1% [27], indicating that schistosomiasis in the country is still a public health problem; there is no question that praziquantel treatment should continue.
WHO guidelines further provide for the conducting of serological surveys where the five-year follow-up prevalence of schistosomiasis is <1%, before any decision to stop MDA is made [5]. Further, it is concerning that a few pockets of schools, mainly in the western part of Kenya, had high S. mansoni prevalence, above 10%. This could be influenced by their proximity to Lake Victoria and is hence an indication of high schistosomiasis transmission around that lake and its associated islands, a finding supported by various studies [28–30], and it underlines the critical need for targeted control of schistosomiasis.

Examples of large-scale helminth programs achieving levels of reduction warranting reduced or ceased MDA are still relatively unusual, and whilst the WHO decision trees provide guidance on reduction and cessation thresholds, it is also generally acknowledged that MDA needs to continue for some time after achieving the definition of elimination as a public health problem, to prevent resurgence of infection. Whilst, as noted above, the national program has not achieved this level yet, given the infection heterogeneity within counties, some but not all counties potentially meet the definitions of elimination of STH or schistosomiasis as a public health problem. As the survey was not powered to county level, this is an emerging picture that warrants careful investigation going forward.

This evaluation did not find any STH infection among the schools surveyed in Garissa and Wajir counties, either a feature of the non-random selection of schools or possibly an indication of no ongoing biological transmission of STH in those counties, and probably in the greater North Eastern Region of Kenya. This is in agreement with past empirical and research studies in the region, which have indicated that there could be little or no active biological transmission of these infections, mainly due to the harsh arid climate [31]. Future surveys should encompass random (for STH) and a mixture of both random and purposive (for schistosomiasis) selection of sites to further verify these results. Considering the results, however, Kitui and Makueni counties in the Eastern Region and Taita Taveta County in the upper part of the Coast Region, which all had STH prevalence below 1%, could benefit from the development and implementation of surveillance strategies, including careful monitoring of coverage and compliance, until there is further evidence that a plan to stop MDA would not be followed by resurgence of infection. Six counties, Bungoma, Kisumu, Migori, Kilifi, Kwale and Mombasa, with STH prevalence between 1% and <10%, and Kericho County, with prevalence between 10% and <20%, would need to continue MDA once every year. Eight counties with prevalence between 20% and <50% would be required to maintain the previous MDA pattern (which is also once every year) under the current treatment guidelines (Fig 4, [11]).

Heterogeneity within counties for schistosomiasis indicated that prevalence was zero in ten counties, below 1% in four, between 1% and 10% in five, and above 10% only in Busia County. Again, the survey being powered for the overall NSBD geographic area rather than county level, plus the above-mentioned recommendation for some new sites to be included in future schistosomiasis surveys, means that care must be exercised before making a full recommendation that MDA treatment frequencies for schistosomiasis could change.
However, this picture again suggests that such decisions may become possible in future. According to the WHO guidelines for schistosomiasis, these decisions could in future be as follows: counties with zero prevalence could potentially consider stopping MDA, while counties with prevalence below 1% would require serological surveys to determine cases of infection [5]; if positive cases are found in these serological surveys, then MDA would be recommended once every two years. Counties with prevalence between 1% and <10% for S. mansoni and S. haematobium, respectively, could potentially conduct MDA once every two years. Any county with prevalence between 10% and <50% would be warranted to maintain the previous MDA schedule of once every year (Fig 5, [5]). The other important observation from these low levels of schistosomiasis prevalence is that future large-scale surveys to assess schistosomiasis will not be very effective, as it will not be possible to power them to detect disease. For a highly focal infection, it is instead likely that recent evidence of where schistosomiasis occurs in the country will be brought to bear, possibly with a change of surveying strategy towards more focal mapping (precision mapping) in the known endemic areas.

WASH factors and their association with STH and schistosomiasis

Examination of the school and household WASH infrastructure, children's behaviour and infection prevalence was done across two domains: (i) helminth species, and (ii) WASH exposure levels, i.e., individual-level, household-level and school-level exposures. Generally, coverage of, and access to, key WASH indicators at home and school was relatively good, but this was not always associated with lower infection prevalence, and the associations were not always consistent between helminth species, likely reflecting both the different pathways of exposure between the species [32] and some of the complexities in measuring WASH associations with helminth outcomes. Other studies have investigated the association between WASH and STH or schistosome species in Kenya [32–36], but few of them, if any, have assessed these associations for individual, household and school level exposures together at a national level. Although no significant associations between the WASH factors and schistosomiasis prevalence were found here, driven primarily by the lack of sufficient schistosomiasis-positive individuals to measure such associations, past studies have investigated and reported possible significant risk factors associated with schistosomiasis [34,35,37–41].

The association of A. lumbricoides with soil-eating behaviour is not surprising, since it is typically acquired through ingestion of helminth-egg-contaminated food or untreated water, and so eating of soil (presumably contaminated soil) exposes individuals to a high risk of A. lumbricoides as well as T. trichiura infection [42]. Walking barefoot has previously been reported mostly to increase the risk of hookworm infection, since hookworm, unlike A. lumbricoides, is primarily acquired through skin contact, especially walking barefoot on contaminated soil [43–45]. However, a recent study on a sub-cohort of school pupils participating in the Kenyan NSBD program similarly found that children not wearing shoes were equally predisposed to A. lumbricoides infection [32].
Additionally, we found that younger children (<5 years) were four-fold more likely to be infected with STH, while those in the SAC age group (5–14 years) were three-fold more likely to be infected. Whilst traditionally children below five years have been excluded from treatment, due to their not attending school and a lack of suitable drug formulations, this finding reiterates the need to widen efforts within the deworming program to cover the PSAC cohort in Kenya. Even though helminth control efforts have traditionally been directed towards SAC, this finding echoes other evidence from within and outside Kenya over the past few years of the high risk of STH infection among PSAC, hence the need to include and strengthen deworming activities in this age group [46–48]. Importantly, a higher infection prevalence in this particular age group indicates the underlying risk of infection in a given area, which becomes visible when a particularly vulnerable age group is not treated.

The study findings at household level, that a high number of household occupants, as well as children whose parents/guardians have no or a low level of education, carry a significant risk of STH infection, point to a possible increased risk from household overcrowding, which, especially in rural areas, could feasibly result in poor household hygiene practices and inadequate latrine facilities and safe drinking water. When parents/guardians have no or a low level of education, the level of hygiene education they can offer their children might also be limited. Both of these features are markers of general poverty and thus may reflect this well-established influencer of STH and schistosomiasis infection. From this result, it is important for control programs to emphasize continual strengthening of community-wide health education delivered through schools, health centres, places of worship, village/community meetings and similar channels, to improve community understanding of STH control and prevention measures [49–53].

At school level, self-reported school absenteeism was the only notable significant risk factor, showing a gradient association with STH infections, specifically A. lumbricoides: whilst one day's absence was not significantly associated with increased odds of STH infection, two days' and more than two days' absence were increasingly significantly associated with these infections. This result is consistent with previous studies that found strong associations between school absenteeism and A. lumbricoides and hookworm infections, but not T. trichiura infection [54–56]. Whereas this study reports a significant association between STH infections and school absenteeism, most studies have acknowledged difficulty in exclusively quantifying the effect of STH infections on school absenteeism, mainly attenuated by the low levels of infection intensity in the sample population [57–59]. However, a post-trial non-randomized study previously showed that pupils with mostly moderate-to-heavy STH infections, especially of A. lumbricoides, missed up to 2.4% of school days during follow-up periods compared to their uninfected counterparts [60]. Our study provides associational evidence of an effect of STH infections on school absenteeism among SAC. Evidence from robust randomized controlled trials may be needed to conclusively assess the effect of STH infections on key educational outcomes [59].
However, such intervention trials may be impossible to conduct over sufficient time periods to assess deworming impacts in the manner in which such programs are delivered in real-world settings (i.e., repeated rounds administered throughout childhood) [61].

Additionally, the study found some factors to be protective against STH infections. Availability of tissue/newspaper/water for anal cleansing at home significantly lowered the odds of STH infection, especially for A. lumbricoides. Proper use of tissue or water for anal cleansing has previously been reported as an important deterrent to STH infections [62]; if the cleansing material is used improperly by children, or is inconsistently available or completely unavailable, then anal cleansing after defecation may be done using one's hand, which can lead to faecal contamination of the hands and thus exposure to helminth infection [32,63]. Children who reported always using school drinking water were found to have lower odds of T. trichiura infection. Availability of drinking water to children, especially when in school, is key to controlling STH infections, especially A. lumbricoides and T. trichiura, since these are transmitted mainly through ingestion of contaminated food items such as vegetables and fruits, or of drinking water [64]; this finding is consistent with previous studies conducted among primary school children [8,32,65]. Children from households possessing assets such as television sets and electricity had lower odds of any STH infection. Household assets such as television sets can be viewed as sources of information and can be used to increase the spread of STH advocacy, awareness and control messages through mass media advertisements and talk shows [66]. More broadly, these are variables associated with increased household wealth relative to households without such items, providing more information on the likely underlying associations of these diseases with poverty. Interventions aimed at improving hygiene practices, such as proper anal cleansing after defecation and access to safe drinking water, as well as continuing socioeconomic improvements, should be highlighted as recommendations for the long-term control of STH infections.

Study limitations

This study was not without limitations. First, individual and household WASH indicators were self-reported by the sampled children at school and not directly collected or observed at home; this could potentially lead to reporting bias, especially where children are too young to recall some of the answers or give the answer they believe the interviewer wants to hear. Secondly, since infection prevalence has substantially reduced following the several rounds of MDA delivered in past years, the building of robust multivariable models was limited by the low prevalence of infection at Year 6 of the program, which left few positive observations with which to run the models. Thirdly, whilst the purposive sampling of schools within counties was deemed adequate, since only the schools with the highest STH prevalence were targeted, the representativeness and generalizability of our results might have been reduced by the use of this sampling technique. Lastly, the program used the Kato-Katz technique for examination of STH and S. mansoni eggs, in line with the WHO guidelines for examination of these infections in highly endemic settings [16]; however, this technique has been shown to be less sensitive in low-endemicity areas [26].
Therefore, our prevalence estimates could have underestimated the true population prevalence.

Conclusions

After five rounds of treatment, this impact assessment shows low levels of both STH and schistosomiasis in the National School-Based Deworming Program in Kenya. Ascaris lumbricoides is still the leading STH infection, followed by T. trichiura and hookworm, among SAC populations. The northern counties, Wajir and Garissa, showed zero prevalence of STH and schistosomiasis, an indication that there may be no ongoing biological transmission of the infections in the region. Infection heterogeneity was observed within counties, warranting careful investigation going forward to determine whether reductions in treatment frequency can be proposed for some counties in future. Our assessment of individual, household and school WASH practices and behaviour in relation to STH and schistosomiasis prevalence suggested mixed associations that differed across individual helminth species; this is expected given the different mechanisms of infection. Based on the survey findings, the Kenyan NSBD program may wish to adopt county-level treatment frequencies based on the WHO guidelines, prioritize strategies to increase treatment coverage amongst the PSAC population, and incorporate integrated control approaches emphasizing health education and WASH interventions in communities and schools.

Supporting information

S1
Vitamin D deficiency and vitamin D receptor FokI polymorphism as risk factors for COVID-19

Background

Given the sparse data on vitamin D status in pediatric COVID-19, we investigated whether vitamin D deficiency could be a risk factor for susceptibility to COVID-19 in Egyptian children and adolescents. We also investigated whether the vitamin D receptor (VDR) FokI polymorphism could be a genetic marker for COVID-19 susceptibility.

Methods

One hundred and eighty patients diagnosed to have COVID-19 and 200 matched control children and adolescents were recruited. Patients were laboratory confirmed as SARS-CoV-2 positive by real-time RT-PCR. All participants were genotyped for the VDR FokI polymorphism by RT-PCR. Vitamin D status was defined as sufficient for serum 25(OH)D of at least 30 ng/mL, insufficient at 21–29 ng/mL, and deficient at <20 ng/mL.

Results

Ninety-four patients (52%) had low vitamin D levels, with 74 (41%) being deficient and 20 (11%) having vitamin D insufficiency. Vitamin D deficiency was associated with a 2.6-fold increased risk for COVID-19 (OR = 2.6 [95% CI 1.96–4.9]; P = 0.002). The FokI FF genotype was significantly more represented in patients compared to the control group (OR = 4.05 [95% CI: 1.95–8.55]; P < 0.001).

Conclusions

Vitamin D deficiency and the VDR FokI polymorphism may constitute independent risk factors for susceptibility to COVID-19 in Egyptian children and adolescents.

Impact

• Vitamin D deficiency could be a modifiable risk factor for COVID-19 in children and adolescents because of its immune-modulatory action.
• To our knowledge, ours is the first such study to investigate the VDR FokI polymorphism in Caucasian children and adolescents with COVID-19.
• Vitamin D deficiency and the VDR FokI polymorphism may constitute independent risk factors for susceptibility to COVID-19 in Egyptian children and adolescents.
• Clinical trials should be urgently conducted to test for causality and to evaluate the efficacy of vitamin D supplementation for prophylaxis and treatment of COVID-19, taking the VDR polymorphisms into account.

INTRODUCTION

In late December 2019, an outbreak of pneumonia was initially reported in Wuhan, China, later named the novel coronavirus disease 2019, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). 1 The World Health Organization (WHO) announced it as a global healthcare pandemic in March 2020. Since then, more than 519 million cases and about 6 million deaths have been reported worldwide. 2

SARS-CoV-2 enters the host cells via binding to the angiotensin-converting enzyme-2 (ACE-2) receptors, mainly expressed on alveolar type-II pneumocytes, vascular endothelial cells, epithelial cells in the kidney, and enterocytes of the small intestine. 3 Children with COVID-19 may be asymptomatic or have a mild clinical course as compared to SARS-CoV-2-infected adults, and reports of death are scarce. 4
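The vitamin D status cut-offs quoted in the abstract (and restated in the Methods below) translate directly into a small helper; this is a minimal sketch, with the function name and the severe-deficiency boundary (<10 ng/mL, from the Methods) as the only assumptions beyond those cut-offs. Values between 20 and 21 ng/mL, which the published cut-offs leave ambiguous, are grouped with the deficient class here.

```python
# Sketch: vitamin D status from serum 25(OH)D [ng/mL], using the study's
# cut-offs: sufficient >= 30, insufficient 21-29, deficient < 20,
# severe deficiency < 10. Function name and the 20-21 boundary are ours.

def vitamin_d_status(level_ng_ml: float) -> str:
    if level_ng_ml >= 30:
        return "sufficient"
    if level_ng_ml >= 21:
        return "insufficient"
    if level_ng_ml >= 10:
        return "deficient"
    return "severely deficient"

for level in (35, 25, 15, 8):
    print(level, "->", vitamin_d_status(level))
```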
However, pneumonia followed by acute respiratory distress syndrome (ARDS), sepsis, and multiple organ failure are serious complications of COVID-19. 5

Vitamin D deficiency represents a major public health issue, since over one billion people worldwide are estimated to be vitamin D deficient. Vitamin D is a fat-soluble secosteroid prohormone essential for bone metabolism and mineral homeostasis. Calcitriol [1,25-dihydroxyvitamin D3], an activated analog of vitamin D3, exerts its biological functions through vitamin D receptors (VDRs), mainly expressed in the gut, bone, lungs, and the majority of immune cells. 6 Although the VDR is highly expressed in lung tissue, the potential role of vitamin D–VDR signaling in pulmonary immunopathology remains to be defined. A meta-analysis of 25 randomized controlled trials confirmed that frequent vitamin D supplementation generally protects against acute lower respiratory infections, and vitamin D deficiency is a risk factor for pneumonia and ARDS. 7,8 Vitamin D not only enhances the innate immune response but also modulates adaptive immunity and regulates the inflammatory cascade. 7,9 Vitamin D enhances the expression of VDRs and the recognition of viral dsRNA by Toll-like receptor 3. It also induces the secretion of antiviral peptides such as cathelicidin and beta-defensin-2, which can block viral entry into cells and suppress the viral replication rate. 10 Moreover, it suppresses the expression of pro-inflammatory cytokines by CD4+ T cells that contribute to the cytokine storm, a major driver of illness severity in COVID-19. 11

A recent study in the mainland United States reported that sunlight exposure and adequate vitamin D status, with latitude as an indicator, were associated with reduced risk for COVID-19 and related complications. 12 The highest age-specific case fatality rates of COVID-19 were estimated in Italy, Spain, and France, where severe vitamin D deficiency is more prevalent than in other European countries. 13

The human VDR gene maps to chromosome 12q13. More than 470 single-nucleotide polymorphisms (SNPs) have been identified; however, only a few of them modulate vitamin D uptake. Four major SNPs of the VDR gene have been shown to influence VDR mRNA stability, expression, and activity (BsmI, TaqI, ApaI, and FokI). 14 Among the VDR polymorphisms, the FokI SNP is located at the translation start codon, resulting in translation of two different VDR proteins, i.e., a short variant (F-VDR) or a longer form (f-VDR). 15 A unique role of the FokI polymorphism has been reported, as the short F-VDR was found to influence the behavior of immune cells and always correlated with a more active immune system. 16

To date, only a few studies in the English medical literature have reported that vitamin D deficiency may be associated with increased susceptibility to COVID-19 disease in children and adolescents. 17–19 Given the sparse data on vitamin D status in pediatric COVID-19 cases, we investigated whether vitamin D deficiency could be a risk factor for susceptibility to COVID-19, and for the severity of illness, in Egyptian children and adolescents. We also investigated whether the VDR FokI (rs2228570, C/T) polymorphism could be a genetic marker for COVID-19 susceptibility.

METHODS

This prospective multicenter study was performed at Cairo, Ain-Shams, and Assuit University hospitals from October 2020 through March 2021. The study protocol was approved by the medical ethics committees at Cairo, Ain-Shams, and Assuit Universities, Egypt.
The study was conducted in accordance with the Declaration of Helsinki, and written informed parental consent was obtained for all participants.

Case definition

One hundred and eighty patients aged <19 years who were diagnosed to have COVID-19 at the study hospitals were recruited. All patients were laboratory confirmed as SARS-CoV-2 positive by real-time reverse transcriptase polymerase chain reaction (RT-PCR) assay of nasopharyngeal swab specimens. Patients' COVID-19 illness severity was categorized into moderate, severe, and critical subgroups according to the recently published classification by Chen et al. 20 No asymptomatic or mild cases were seen in our cohort. Patients were admitted within 72 h from the onset of fever and cough. Pulmonary high-resolution computed tomographic images were routinely obtained for all patients and evaluated by two experienced radiologists (S.F.O. and A.S.M.) who were blinded to the patients' clinical data.

Control group

Two hundred healthy children and adolescents matched for age, sex, and season at enrollment, who underwent pre-operative assessment for elective surgery at the study hospitals, were enrolled as a control group (all tested negative for SARS-CoV-2 by RT-PCR and had a negative anti-N antibody test for SARS-CoV-2). All patients and control subjects belong to the same ethnic group: African Caucasian.

Exclusion criteria

Patients with obesity, malnutrition, immunodeficiency, congenital heart disease, malignancy, metabolic diseases, autoimmune disorders, or any chronic debilitating disease were excluded. Those who had received vitamin D, calcium, multi-vitamin, or mineral supplementation during the previous 6 months were also excluded.

Laboratory investigations

Upon enrollment, a 5 mL venous blood sample was drawn for molecular and serological evaluation. Laboratory biomarkers including complete blood count, C-reactive protein (CRP), lactate dehydrogenase, serum ferritin, and D-dimer were evaluated. Serum parathyroid hormone was measured using an electrochemiluminescence immunoassay (Roche Diagnostics GmbH, Germany). Vitamin D status was defined as sufficient for those having a serum 25(OH)D level of at least 30 ng/mL (75 nmol/L), insufficient at 21–29 ng/mL (52.5–72.5 nmol/L), deficient at <20 ng/mL (<50 nmol/L), and severely deficient at <10 ng/mL (<25 nmol/L). 21 Patients diagnosed with COVID-19 were further subdivided into two groups: those with a serum 25(OH)D level <30 ng/mL (75 nmol/L) were designated Group 1 (vitamin D insufficient and deficient), and those with 25(OH)D levels ≥30 ng/mL, Group 2 (normal vitamin D status).

RNA extraction and SARS-CoV-2 diagnosis

Nasopharyngeal swabs from all subjects were collected in 0.5 mL TE pH 8 buffer. Viral RNA was automatically extracted using magnetic beads on a Chemagic 360 (Perkin Elmer, Germany). Detection of SARS-CoV-2 was done by real-time PCR NAT using the Viasure SARS-CoV-2 RT-qPCR kit (CerTest Biotec, Spain) on a Bio-Rad CFX96, as described by Freire-Paspuel et al. 22 The Viasure SARS-CoV-2 detection kit had 97.5% sensitivity and 100% specificity compared to the CDC FDA EUA kit as gold standard.

Statistical analysis

Statistical analysis was performed with the SPSS software for Windows, version 20 (IBM, Chicago). Continuous parameters were compared using Student's t test, analysis of variance, or the Mann-Whitney U-test, as appropriate. Associations of categorical variables, genotype distributions, and allele frequencies were compared using the chi-square (χ²) test after calculation of the Hardy-Weinberg equilibrium (HWE).
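A minimal sketch of the Hardy-Weinberg check mentioned above: it compares observed genotype counts with the expectation derived from allele frequencies using a one-degree-of-freedom chi-square test. The genotype counts in the example are hypothetical, not the study's data.

```python
# Sketch: Hardy-Weinberg equilibrium chi-square test for a biallelic SNP.
# Genotype counts (FF, Ff, ff) are hypothetical, not the study's data.

from scipy.stats import chi2

def hwe_chi2(n_AA: int, n_Aa: int, n_aa: int):
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)        # frequency of allele A (here F)
    q = 1 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    stat = sum((o - e) ** 2 / e
               for o, e in zip((n_AA, n_Aa, n_aa), expected))
    p_value = chi2.sf(stat, df=1)          # 3 classes - 1 - 1 estimated allele
    return stat, p_value

stat, p_value = hwe_chi2(90, 80, 30)       # hypothetical FokI counts
print(f"chi2 = {stat:.2f}, p = {p_value:.3f}")
```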
Logistic regression analyses were applied to calculate odds ratios with 95% confidence intervals [OR; 95% CI] for the different VDR genotypes and to quantify the independent effects of serum 25(OH)D levels and VDR FokI genotypes on disease susceptibility.

RESULTS

We next compared clinical data and laboratory parameters between COVID-19-diagnosed patients who had deficient or insufficient vitamin D levels (Group 1; n = 94) and patients who had normal vitamin D levels (Group 2; n = 86), as presented in Table 2. There were significantly lower levels of 25(OH)D and serum phosphorus in Group 1 than in Group 2 (P < 0.001 and P = 0.026, respectively; Table 2). No significant difference was found between the two groups as regards COVID-19 clinical presentation or disease severity, or the inflammatory biomarkers and other measured laboratory variables (all P > 0.05; Table 2).

The VDR FokI genotype distributions and allele frequencies for the COVID-19 patient and control groups are presented in Table 3 and were compatible with the HWE. The FokI homozygous FF genotype was significantly more represented in COVID-19 patients compared to the control group (Table 3). However, no significant association was found between the VDR FokI genotype distribution and disease severity or clinical outcome (all P > 0.05; Table 4).

DISCUSSION

A high prevalence of vitamin D deficiency has been reported in the pediatric and adolescent population across the globe. It has been postulated that vitamin D deficiency is a risk factor for the epidemic of lower respiratory tract infections (LRTIs) in Chinese, Canadian, and Egyptian cohorts. 24–26 In our study, patients diagnosed with COVID-19 had significantly lower serum vitamin D levels compared to the control group. Moreover, vitamin D deficiency or insufficiency was detected in more than half (52%) of the COVID-19 patients, although our study population resides in the Delta and Upper Egypt, regions with abundant sunlight exposure throughout the year. Of note, the distribution of COVID-19 severity according to 25(OH)D levels was not found to differ significantly between the studied groups. Our findings confirm and extend recently published data in pediatric and adult age groups. 17–19,27–29

Recently, two studies investigated vitamin D deficiency as a risk factor for COVID-19 in Turkish children and adolescents. Yılmaz and Şen 17 reported that 72.5% of their cases were vitamin D deficient and that patients admitted to the ICU had vitamin D levels of <10 ng/mL. Consistent with our findings, the authors concluded that vitamin D deficiency may be associated with increased susceptibility to COVID-19, but not disease severity, in the pediatric population. A similar study by Alpcan et al. 18 reported that vitamin D deficiency was a risk factor for the development of ARD and may be correlated with COVID-19 severity among Turkish children. Akoğlu et al. 19 reported that patients with moderate COVID-19 severity had lower 25(OH)D compared to the mild disease group; the authors added that vitamin D deficiency may worsen the aggravation of pulmonary involvement by SARS-CoV-2. Beyazgül et al. 30 reported that school-aged children and adolescents had low 25(OH)D levels during the COVID-19 pandemic period due to the restriction rules applied to prevent the spread of COVID-19. Darren et al. 31 were the first to study vitamin D status in pediatric multi-system inflammatory syndrome associated with SARS-CoV-2 (PIMS-TS).
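The genotype odds ratios reported here (e.g., OR = 4.05 for the FF genotype) come from logistic regression; an equivalent hand calculation for a single 2×2 exposure table uses the Woolf (log) method sketched below, with hypothetical counts rather than the study's own.

```python
# Sketch: odds ratio with Woolf 95% CI from a 2x2 table
# (exposed/unexposed x case/control). Counts are hypothetical.

import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a, b = exposed cases/controls; c, d = unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical: FF genotype among 180 patients vs 200 controls.
or_, lo, hi = odds_ratio_ci(60, 25, 120, 175)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```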
They reported that 72% of their cohort was vitamin D deficient and, specifically, that all PICU patients had suboptimal vitamin D levels. The authors suggested that vitamin D deficiency could be a modifiable risk factor for PIMS-TS because of its immune-modulatory action on inflammatory cytokine signaling. The largest meta-analysis, involving more than one million adult individuals, suggested a potential link between vitamin D status and susceptibility to SARS-CoV-2 infection. The authors added that sufficient vitamin D levels may decrease the risk of becoming infected by SARS-CoV-2. 27 An Israeli epidemiological study reported that 25(OH) D levels <20 ng/mL almost doubled the risk of SARS-CoV-2 infection and hospitalization. 28 Pinzon et al. reported a 90% prevalence of vitamin D deficiency among COVID-19 patients in Indonesia, although it is a tropical country with plenty of sunny weather. 29 A similar study in Italy reported vitamin D deficiency in 81% of patients admitted to the ICU with ARF due to COVID-19. The authors added that severe vitamin D deficiency may be a marker of poor prognosis in these patients. 32 Whether any link exists between vitamin D deficiency and the severity of COVID-19 requires further evidence. In keeping with a meta-analysis performed by de Souza et al., 33 the most prevalent symptoms among the studied cohort were fever, dry cough, and shortness of breath, followed by diarrhea, vomiting, and fatigue. Moreover, dry cough and development of pneumonia were more frequent in patients who had deficient levels of vitamin D (Group 1) than in those with normal vitamin D status (Group 2), although the difference did not reach statistical significance. Among the most common abnormal laboratory findings were lymphopenia, elevated CRP, and elevated D-dimer levels. However, no significant difference was evident between the two groups in terms of measured laboratory parameters and inflammatory biomarkers. Fortunately, all patients survived, so we could not evaluate the association between vitamin D levels and COVID-19 mortality. In animal models, severe acute lung injury was accompanied by an increase in pulmonary renin and angiotensin II levels and excessive induction of angiopoietin (Ang)-2 and myosin light chain (MLC). Vitamin D-VDR signaling protects the pulmonary vascular barrier and prevents acute lung injury by targeting the renin-angiotensin cascade and blocking the Ang-2-Tie-2-MLC kinase pathway. 34 It has been shown that 1,25-OH2-vit D exhibits an antiviral inhibitory effect on human nasal epithelial cells infected with SARS-CoV-2. 35 SARS-CoV-2 enters host cells after its protein S (Spike) binds to ACE2 receptors. The primary targets of SARS-CoV-2 are type-II alveolar cells, on which ACE2 receptors are highly expressed. 3 The vitamin D agonist calcitriol enhances the expression of ACE2 and increases soluble ACE2 (sACE2), which may be responsible for trapping and inactivating the virus. Calcitriol also suppresses renin expression and serves as a negative regulator of the renin-angiotensin-aldosterone system (RAS), which is exacerbated in SARS-CoV-2 infection, making more angiotensin II (Ang-II) available to cause tissue damage, inflammation, and multi-organ failure. 36 Vitamin D modulates the adaptive immune response, as it was found to shift inflammatory cytokine expression, in a VDR-dependent manner, from a Th1 to a Th2 profile. It also inhibits the development of T helper 17 cells and enhances T regulatory (Treg) lymphocytes, thus mitigating tissue damage and inflammation. 37
Moreover, it suppresses the expression of pro-inflammatory cytokines, specifically interleukin-6 and tumor necrosis factor-α (TNF-α), as both are predictors of severe COVID-19 and worse clinical outcome. A sufficient vitamin D status may help to blunt or dampen the cytokine storm by simultaneously boosting the innate immune response and reducing the overactivation of the adaptive immunity. 11 A recent multicenter study by Shafiek et al. investigated serum cytokine profiles in pediatric patients diagnosed with COVID-19 pneumonia. They reported markedly elevated serum pro-inflammatory cytokines and chemokines, indicating a cytokine storm following SARS-CoV-2 infection. 38 However, preliminary data on vitamin D status and concomitant cytokine profiles in pediatric COVID-19 patients are still lacking. VDR polymorphisms have been reported to be associated with susceptibility to lower respiratory infections including respiratory syncytial virus bronchiolitis, 39 symptomatic pertussis, 40 and community-acquired pneumonia (CAP) 41 in South African, Dutch, and Chinese Han children, respectively. To our knowledge, ours is the first study to investigate the VDR FokI polymorphism in Caucasian children and adolescents with COVID-19. In our study, the VDR FokI homozygous FF genotype and the F allele were significantly more represented in COVID-19 patients compared to the control group. Patients carrying the FF genotype had a 4.05-fold increased susceptibility to COVID-19. By contrast, the FokI Ff genotype showed a significant negative association with COVID-19 risk, suggesting that the VDR FokI f allele may confer protection against SARS-CoV-2 infection. In an attempt to explain our findings, we investigated the serum 25(OH) D level in relation to the studied VDR polymorphism and found it to be significantly lower in patients homozygous for the FokI FF genotype than in those carrying the ff and Ff genotypes. These results are concordant with a recent multicenter study by Abouzeid et al. 42 who investigated the VDR FokI polymorphism on genomic DNA of 300 children diagnosed with CAP. The authors reported that the VDR FokI FF genotype was associated with lower serum 25(OH) D levels and may confer susceptibility to CAP and related hospital mortality in under-five Egyptian children. In the current study, no significant association was evident between the VDR FokI genotype distribution and COVID-19 severity or clinical outcome. Of note, our logistic regression model revealed that vitamin D deficiency and the VDR FokI FF genotype were independent risk factors for COVID-19 among the studied patients, controlling for age, sex, season at enrollment, and household crowding as potential confounders. On the contrary, Apaydin et al. reported that the FokI genotypes were similarly distributed in the COVID-19 and control groups. They also found that the Ff genotype for FokI was associated with disease severity (OR: 3.17) in the Turkish population. 43 A similar study reported that FokI genotypic distributions differed markedly in adult patients with severe COVID-19 compared to asymptomatic cases. The authors concluded that the FokI polymorphism may be specifically associated with the severity and outcome of COVID-19 in the Iranian population. 44 1,25-OH2-vit D regulates its own serum level, and that of its precursor 25(OH) D, through a VDR-dependent negative feedback loop.
Unlike other VDR SNPs, the FokI polymorphism at the start codon results in two VDR isoforms with different structures. The f allele encodes a longer and less active VDR protein than that translated from the F allele. Consequently, the f VDR isoform allows more synthesis of 25(OH) D, which may partially explain the higher vitamin D serum levels observed with the FokI homozygous ff genotype. 16,45 Together with our findings, it is plausible to speculate that the VDR FokI FF genotype, being associated with lower serum 25(OH) D levels, may constitute an independent risk factor for susceptibility to COVID-19. As evidence on the link between vitamin D status and COVID-19 in the pediatric population continues to emerge, clinical trials should be urgently conducted to test for causality and to evaluate the efficacy of vitamin D supplementation for prophylaxis and treatment of COVID-19. However, several limitations should be considered in this study. First, the small sample size may necessitate a genome-wide association study across various ethnic populations. Second, we studied only the FokI SNP in the VDR gene, which may be in linkage disequilibrium with other VDR genomic loci; this is yet to be defined. Third, we measured 25(OH) D levels at SARS-CoV-2 diagnosis (at the initial phase of illness), so the possibility of reverse causality cannot be completely ruled out. Finally, there is a lack of sufficient data about many environmental risk factors that may predispose a genetically susceptible child to acute respiratory infections, including COVID-19.

IN CONCLUSION
Vitamin D deficiency and the VDR FokI polymorphism may constitute independent risk factors for susceptibility to COVID-19 in Egyptian children and adolescents. Finally, the potential role of vitamin D in the pathophysiology of COVID-19 should be further addressed in large-scale studies taking into account the VDR polymorphisms.

DATA AVAILABILITY
All data generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
A multi-wavelength look at the GJ 9827 system -- No evidence of extended atmospheres in GJ 9827 b and d from HST and CARMENES data

GJ 9827 is a bright star hosting a planetary system with three transiting planets. As a multi-planet system with planets that sprawl within the boundaries of the radius gap between terrestrial and gaseous planets, GJ 9827 is an optimal target for studying the evolution of the atmospheres of close-in planets with a common evolutionary history and their dependence on stellar irradiation. Here, we report on Hubble Space Telescope (HST) and CARMENES transit observations of GJ 9827 planets b and d. We performed a stellar and interstellar medium characterization from the ultraviolet HST spectra, obtaining fluxes for Lyα and Mg II of F(Lyα) = (5.42 +0.96 −0.75) × 10 −13 erg cm −2 s −1 and F(Mg II) = (5.64 ± 0.24) × 10 −14 erg cm −2 s −1. We also investigated a possible absorption signature in Lyα in the atmosphere of GJ 9827 b during a transit event from the HST spectra, as well as Hα and He I signatures for the atmospheres of GJ 9827 b and d from the CARMENES spectra. We found no evidence of an extended atmosphere in either of the planets. This result is also supported by our analytical estimations of mass loss based on the measured radiation fields for all three planets of this system, which led to mass-loss rates of 0.4, 0.3, and 0.1 planetary masses per Gyr for GJ 9827 b, c, and d, respectively. These values indicate that the planets could have lost their volatiles quickly in their evolution and probably do not retain an atmosphere at the current stage.

INTRODUCTION
The Kepler mission (Borucki et al. 2010) discovered that planets between the size of Earth and Neptune are the most common type of exoplanets in our Galaxy (e.g., Borucki et al. 2011; Batalha et al. 2013; Rowe et al. 2015). Defined as planets with radii between 1 and 4 R ⊕, they do not have any analogue in our Solar System. This makes them very captivating targets for the study of their formation and evolution history, as well as for understanding their compositions, interior structures, and atmospheres. Moreover, terrestrial planets may be the most attractive targets for the search for biosignatures. One of the most interesting and still unexplained characteristics of the sub-Neptune-sized planet population is the gap in the radius distribution around 1.6 R ⊕ found by Fulton et al. (2017). Planets below this radius may be naked rocky cores, while those above this value have retained their atmospheres. A possible explanation for this gap suggests that gas-rich super-Earths (mainly solid, rocky planets with a radius up to 1.5 R ⊕) will retain or lose their envelope depending on the level of irradiation from their host stars (Lopez et al. 2012; Owen & Wu 2017; Loyd et al. 2020). It is also possible that the mass loss is caused by the luminosity of the cooling planetary cores (core-powered mass-loss mechanism; Gupta & Schlichting 2019). In this context, observing small planets is essential to better understand the role of photo-evaporation in their evolution. Moreover, observing multi-planet systems offers an extra benefit, since such systems presumably formed under the same initial conditions (i.e., same age, same flux evolution) and provide a unique opportunity to compare the compositions of planets with different sizes, as well as atmospheric characteristics at different incident fluxes.
In this paper, we present the case of GJ 9827, a K6V star discovered by Kepler/K2 to host three transiting planets in 1:3:5 commensurability (Niraula et al. 2017; Rodriguez et al. 2018). Teske et al. (2018) showed with archival radial velocity (RV) observations from the Magellan II Planet Finder Spectrograph (PFS) that planet b has a mass of ∼ 8 ± 2 M ⊕, classifying it as one of the densest planets, with an iron mass fraction of 50%; the masses of planets c and d were not strongly constrained. Prieto-Arranz et al. (2018) refined the planetary parameters through FIES, HARPS, and HARPS-N RVs, finding that planet b is less dense than suggested by Teske et al. (2018), with a mass of 3.74 ± 0.49 M ⊕. Prieto-Arranz et al. (2018) also calculated the incident fluxes for the three planets, pointing to a rocky composition for planets b and c, and a gaseous composition for planet d, which could possibly retain an extended atmosphere. Rice et al. (2019) combined data from Niraula et al. (2017), Teske et al. (2018), and Prieto-Arranz et al. (2018) with a new HARPS-N RV dataset, more precisely constraining the planetary masses of GJ 9827 b and d. A more recent analysis by Kosiarek et al. (2020) refined the ephemerides from Spitzer observations and, adding all the RVs from the previous work to their multi-year HIRES RV follow-up, more precisely constrained the parameters of this multi-planet system. The orbital periods, masses, and radii of the three planets from Kosiarek et al. (2020) are reported in Table 1 and are the ones used in our analyses. It is worth noting that the radii of the three planets span the above-mentioned radius gap at ∼1.6 R ⊕, making this system even more appealing for uncovering how super-Earths form and evolve. Straddling this rocky/gaseous divide, GJ 9827 is ideal for studying the simultaneous evolution of planets at different orbital distances, with the stellar properties, including age, controlled. A key aspect for the assessment of long-term habitability in a planetary system is the atmospheric mass loss of planets due to the high-energy environment and stellar wind of the host star. Several spectral lines sensitive to extended atmospheres have provided unique measurements of mass loss, including Lyman-α (e.g., Vidal-Madjar et al. 2003; Kulow et al. 2014; Ehrenreich et al. 2015), Hα (e.g., Jensen et al. 2012; Cauley et al. 2015; Casasayas-Barris et al. 2018), and He I 10830 Å (e.g., Spake et al. 2018; Salz et al. 2018; Nortmann et al. 2018; Allart et al. 2018). The far- and extreme-ultraviolet radiation from the star is capable of heating the upper portions of hot planet atmospheres, causing atoms to escape and possibly driving atmospheric mass loss (e.g., Murray-Clay et al. 2009; France et al. 2016; Linsky et al. 2020). This can have serious consequences for the evolution of the planet; if the mass-loss rate is high enough, the entire atmosphere can escape, leaving behind a bare rocky core (e.g., Baraffe et al. 2005). This is especially relevant for planets with masses < 0.1 M J, i.e., the Neptune and super-Earth regime (Owen & Wu 2013). Even in a less catastrophic case, the atmospheric composition of the planet can be highly altered (Lopez et al. 2012). The photochemistry of planetary atmospheres hosted by cool stars is controlled by the two strongest UV emission lines in the stellar spectrum, Lyman-α (Lyα) and Mg II (Madhusudhan 2019). They can drive significant water loss (e.g., Luger et al. 2015; Miguel et al.
2015) and cause abiotic sources of biosignature gases (e.g., Tian et al. 2014; Harman et al. 2015). However, even for the nearest stars, absorption from the local interstellar medium (LISM) dramatically removes Lyα photons from the line of sight (Wood et al. 2005; Youngblood et al. 2016). The LISM is a rich and complex collection of clouds that leads to a unique absorption profile, often with more than one absorbing cloud (Redfield & Linsky 2004, 2008). To aid in the reconstruction of the intrinsic stellar Lyα flux, we observed and characterized the stellar Mg II emission, which samples a similar level of the stellar chromosphere as Lyα and yet is significantly less altered by LISM absorption. Given the high column density of H I (log N ∼ 18 cm −2), >70% of the intrinsic stellar chromospheric H I emission line can be absorbed by the LISM, and in some cases it can be >90% (Wood et al. 2005), whereas the lower abundances and column densities of Mg II (log N ∼ 12 cm −2) mean that <20% of the intrinsic stellar chromospheric Mg II emission line is absorbed (Redfield & Linsky 2002). Fitting the Mg II lines provides not only a characterization of the intrinsic chromospheric line shape for Lyα, but also a fit to the LISM absorption profile. Although Wood et al. (2005) used the line shape of Mg II to inform the line shape of Lyα, the simultaneous fitting of the Lyα, Mg II, and ISM absorption that we perform in our analysis is novel. We present here HST and CARMENES transit observations aimed at characterizing the atmosphere of GJ 9827 b, in particular to evaluate its transit signature in different wavelength domains. The CARMENES analysis also includes data for GJ 9827 d, the only planet of this system with previous atmospheric characterization: Kasper et al. (2020) investigated the 10,830 Å He I triplet spectra, finding no absorption feature. In Section 2 we describe the observations and data reduction for the HST and CARMENES data. We also used the HST data to characterize the star, with Lyα, Mg II, and XUV flux estimations in Section 3. We investigate the possibility of an atmospheric planetary signal in Section 4, and finally present our discussion in Section 5 and conclusions in Section 6.

HST data
A four-orbit HST pointing on GJ 9827 was carried out with the Space Telescope Imaging Spectrograph (STIS) during the 28 August 2018 transit of planet b as part of the Cycle 25 program GO-15434 (PI: S. Redfield). The first-order far-ultraviolet (FUV) grating G140M was used, with its 1222 Å setting, covering the wavelength interval 1194-1250 Å at a resolution of R ≡ λ/∆λ ∼ 10 4. The chromospheric H I 1215 Å Lyα line is formed in the range 1-3×10 4 K. Other important emission lines in this region are the Si III 1206 Å resonance transition (T ∼ 5×10 4 K) and the N V 1240 Å doublet (T ∼ 2 × 10 5 K), although both of these were expected to be too faint to be significantly detected in the relatively brief FUV exposures of this faint, 10th-magnitude mid-K-type star. Nevertheless, the peak signal-to-noise (S/N) per 2-pixel resolution element (resel) in the combined spectrum at Lyα was about 35. The 52″×0.1″ narrow long slit was chosen to minimize geocoronal Lyα contamination from the upper atmosphere of the Earth. After the guide stars were acquired, a peak-up was performed to center GJ 9827, using the 31″×0.5″ NDC long slit in the visible at low resolution with the STIS CCD. The rest of the initial orbit was occupied by the first G140M exposure, of 1.8 ks.
The remaining three orbits had single G140M exposures, of 2.9 ks each. A summary of the four FUV observations is provided in Table 2. In relation to the planetary transit, the first two exposures were pre-ingress, the third was in-transit, and the final one was post-egress. Three months later, a single-orbit out-of-transit near-ultraviolet (NUV) spectrum of the Mg II 2800 Å region of GJ 9827 was taken, again with STIS, but now using the high-resolution echelle E230H with its 2713 Å setting (2574-2851 Å) and the "spectroscopic" slit 0.2″×0.09″ to achieve optimum resolution (R ∼ 110,000) for the narrow Mg II absorptions. The 2713 Å setting also captures an important Fe II multiplet near 2600 Å, although unfortunately the NUV continuum of the mid-K star was too weak for the Fe II interstellar absorption to be detected (the peak S/N per resel at Mg II 2796 Å in the 1.8 ks exposure is 8). A visible-light peak-up was performed with the same slit prior to the E230H echelle exposure. The NUV observation is also described in Table 2. We performed reductions of the FUV G140M exposures utilizing the pipeline-processed (CALSTIS) X2D files, which are wavelength- and flux-calibrated spatial/spectral images derived from rectified versions of the original long-slit stigmatic spectrograms. The image x-direction is along the dispersion, with 0.053 Å pixel −1. The image y-direction is the spatial (cross-dispersion) dimension, with 0.03″ pixel −1. The image pixel flux densities tabulated in the X2D files, and the associated photometric errors, are provided per Å and per 0.0293″ (the latter is the cross-dispersion angular pixel size), so the extracted spectrum (and photometric error) must be multiplied by that angular factor to yield flux densities (erg cm −2 s −1 Å −1). The upper panel of Fig. 1 illustrates a co-added version of the 2D spatial/spectral image of the four G140M exposures. The vertical extent of the image represents a ±40 pixel slice in the detector y-direction (∼ ±1.2″) centered on the apparent stellar Lyα feature. The horizontal extent is 1200 pixels along the dispersion. The narrow geocoronal stripe is conspicuous in the y-direction, bisecting the broader stellar Lyα feature. The red band outlines a 9-pixel flux extraction region (∼0.3″) for the stellar spectrum, while the blue dashed bands highlight flanking regions where the background was sampled. The two background bands are 25 pixels wide, beginning at ±15 pixels from the center. The wide bands increase the signal-to-noise of the background subtraction. In practice, we eliminated the three highest background values at each wavelength, in an effort to mitigate hot pixels. The middle panel of Fig. 1 depicts the extracted 1D spectrum from the co-added image, zoomed into the Lyα feature. The green tracing is the gross spectrum; blue with grey shadow is the background, including the geocoronal H I emission feature; and black is the net flux (gross minus background). The wavelength scale was set to place the geocoronal Lyα feature at its laboratory wavelength (1215.670 Å). The thin red dashed curve represents the 10σ photometric error level (per resel), derived from the spatial/spectral values provided in the original X2D files, smoothed, for display, by a double pass of a rectangular filter 15 pixels wide. The bottom panel shows the extracted Lyα features for the four exposures separately, where the geocoronal feature has been subtracted from the stellar profile.
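As a minimal illustration of the extraction scheme just described (not the actual reduction code), the following numpy sketch sums a 9-pixel stellar trace, estimates the background from two flanking 25-pixel bands, and rejects the three highest background values per wavelength to suppress hot pixels; the array and variable names are hypothetical.

import numpy as np

def extract_spectrum(image2d, y0):
    # image2d: flux-calibrated 2D spectrogram (rows = cross-dispersion y).
    # y0: row index of the stellar trace.
    star = image2d[y0 - 4:y0 + 5, :]                  # 9-pixel extraction region
    bands = np.vstack([image2d[y0 + 15:y0 + 40, :],   # two 25-pixel background bands,
                       image2d[y0 - 40:y0 - 15, :]])  # starting +/-15 px from center
    bands = np.sort(bands, axis=0)[:-3, :]            # drop 3 highest values per column
    background = bands.mean(axis=0) * star.shape[0]   # scale to the 9-pixel aperture
    gross = star.sum(axis=0)
    return gross, background, gross - background      # net = gross minus background

rng = np.random.default_rng(0)
fake_x2d = rng.normal(0.0, 1.0, size=(101, 1200))     # stand-in for an X2D image
gross, bkg, net = extract_spectrum(fake_x2d, y0=50)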
There are small differences between the Lyα peaks of the four profiles, and the difference in the wing of the red profile (which corresponds to ODRL01010 in Table 2, the first observation of the sequence) around 1215 Å is close to 3σ in significance. We reduced the single NUV exposure directly from the CALSTIS pipeline X1D file, which is a tabulation of extracted flux densities and associated photometric errors for 27 of the echelle orders contained in the original E230H-2713 spectral image. We merged the individual orders together, tapering the overlapping zones to preserve the optimum S/N. GJ 9827 is a relatively faint star for STIS high-resolution echelle spectroscopy, so the main features visible are the Mg II 2803 Å ("h") and 2796 Å ("k") resonance doublet, the most important spectral signatures of the stellar chromosphere below the temperatures where the higher-excitation Lyα emission forms. The peak S/N at the k line is about 8, and interstellar absorption is apparent in both emission cores. The observed Lyα feature is the combination of the intrinsic stellar emission profile and the interstellar medium (ISM) attenuation profile (Wood et al. 2005). The core of the stellar emission line originates in the lower transition region and upper chromosphere (T ∼ 2-3×10 4 K), while the outer wings form deeper in the stellar chromosphere. The Lyα emission core is strongly attenuated by neutral hydrogen (H I) and deuterium (D I) gas over the 29.7 pc sightline to GJ 9827. The star's +31.9 km s −1 radial velocity (Prieto-Arranz et al. 2018) shifts the stellar emission lines away from much of the ISM attenuation centered near 0 km s −1 (Redfield & Linsky 2008), giving the observed Lyα feature its asymmetric appearance. The Mg II cores are less affected than Lyα owing to the much smaller cosmic abundance of magnesium.

Figure 1. Top: Co-added 2D spectral image (in pixels) of the GJ 9827 G140M exposures. The red band represents the flux extraction region for the stellar spectrum; the blue bands represent the regions where the background, including geocoronal Lyα, was sampled. Middle: Co-added 1D spectrum zoomed in on the Lyα feature. The green curve is the total extracted spectrum; the gray shaded region outlined by the blue curve is the background, including the geocoronal emission; and the black curve is the net flux (the total flux minus the background). The red dashed line is the photometric error. Bottom: Separate 1D spectra zoomed in on the Lyα feature for the four exposures (ODRL01010 red, ODRL01020 green, ODRL01030 blue, ODRL01040 black). The thinner dotted-dashed curves are (smoothed) 3σ noise levels per resel for each of the spectra. The one for exposure 10 (in red) is higher than the others because the exposure was shorter.

CARMENES data
We observed one transit of GJ 9827 b and one transit of GJ 9827 d with the CARMENES spectrograph (Calar Alto high-Resolution search for M dwarfs with Exo-earths with Near-infrared and optical Echelle Spectrographs; Quirrenbach et al. 2014, 2018), located at Calar Alto Observatory, on 13 August 2018 and 6 November 2018, respectively. CARMENES simultaneously covers the optical (VIS; 520 to 960 nm) and near-infrared range (NIR; 960 to 1710 nm), giving access to two important tracers of planetary evaporation processes: the near-infrared He I triplet at 10830 Å and the visible Hα line at 6562.81 Å. GJ 9827 is sufficiently bright (V = 10.1 mag, J = 7.98 mag) to be observed with both arms simultaneously.
Both observations were performed in stare mode, taking continuous exposures before, during, and after the transit. The exposure times were set to be nearly the same for both arms, so that, accounting for the different readout overheads of the VIS and NIR arms, the central time of each exposure was coincident in both. Resetting of the Atmospheric Dispersion Corrector took place between two exposures (i.e., during readout) and approximately every 30 minutes. For the planet b observations, the exposure time at the beginning was 190 s but was then increased to 198 s, and the average S/N achieved is 44 in the He I order and 30 in the Hα order. On the other hand, for the planet d observations, the exposure times were 200 s and 197 s, with an S/N of 47 and 35 in the He I and Hα orders, respectively. Due to a cloud crossing, the exposures taken at 19:22, 19:26, and 19:30 UT presented S/N below 30 in the He I order and below 20 in the Hα order. These exposures were discarded from the atmospheric analysis. In addition, for technical reasons, the observations were stopped from 19:59 to 20:16 UT. We note strong telluric contamination of the He I region on both nights. A log table of the CARMENES observations is given in Table 3. We reduced the CARMENES observations with the CARMENES pipeline CARACAL (CARMENES Reduction And CALibration; Caballero et al. 2016), which performs bias correction, flat-relative optimal extraction (Zechmeister et al. 2014), cosmic-ray correction, and wavelength calibration (Bauer et al. 2015). The wavelengths are given in vacuum and the reduced spectra are referenced to the terrestrial rest frame.

STELLAR AND INTERSTELLAR MEDIUM CHARACTERIZATION
UV stellar emission dramatically affects the fate of an exoplanet's atmosphere, both in the physical and chemical composition and through possible mass loss, especially when the planet closely orbits its host star. EUV and X-ray photons heat an exoplanet's upper atmospheric layers and drive mass loss, whereas UV photons are primarily responsible for photochemistry (Madhusudhan 2019). In particular, the Lyα emission line at 1215.67 Å dominates the far-ultraviolet (FUV) spectrum of late-type stars and is the main source for the photodissociation of molecules such as water and methane. However, the Lyα emission line is heavily absorbed by neutral hydrogen in the interstellar medium (ISM) between the star and the Earth and is also contaminated by geocoronal emission. This necessitates a reconstruction to recover the intrinsic stellar flux as seen by the planet's atmosphere. In the next sections we describe the Lyα and Mg II emission line reconstructions for GJ 9827, the derived properties of the line-of-sight ISM, and how GJ 9827's UV emission compares to other K dwarfs.

Intrinsic Lyman-α and Mg II reconstruction
To correct for the ISM attenuation of the Lyα and Mg II lines and recover the intrinsic stellar emission lines, we use the approach described in Youngblood et al. (2016) and García Muñoz et al. (2020), modified to jointly fit Lyα and Mg II. This allows for stronger constraints on the physical parameters. For Lyα, we parameterized the broad, intrinsic stellar emission line as a single Voigt profile in emission with a Gaussian in absorption for the line's self-reversal. For the narrow Mg II emission lines, we used a single Gaussian profile in emission for each line and a single Gaussian for each line's self-reversal.
We required the radial velocities of the two Mg II lines to be identical and also required their FWHMs to be coupled such that FWHM k = 1.05×FWHM h, because high-resolution stellar spectra indicate that the k line is ∼5% broader than the h line (Brian E. Wood, private communication). We did not require the radial velocity of Lyα to match the Mg II lines in case of small wavelength offsets between the two gratings. The ISM absorbers (H I, D I, Mg II) are modeled as single Voigt profiles in absorption (see Sec. 3.2 for the justification for one component), each parameterized by a radial velocity (offset from the corresponding stellar emission line's radial velocity), Doppler width b, and species column density N. We assume that the ISM absorbers arise from the same interstellar gas with the same kinematics, so all three ISM species were required to have the same radial velocity offset from their stellar emission line's radial velocity, and the Doppler widths were all connected assuming pure thermal broadening by b MgII = b HI /√24.3 and b DI = b HI /√2. H I and D I are dominated by thermal broadening, so this assumption should be fine for them, but heavier species like Mg II will also have a significant contribution from turbulent broadening. Therefore, our value of b MgII will underestimate the true Doppler width of Mg II. The D I and H I column densities were fixed such that N(D I)/N(H I) = 1.5×10 −5 (Wood et al. 2004), and N(Mg II) was not linked to N(D I) or N(H I). The corresponding intrinsic emission lines and ISM absorption lines were multiplied together and convolved to the corresponding instrument resolution using the STIS G140M and E230H line spread functions. Given the lack of an in-transit detection (see Section 4.1), we co-added all four orbits' spectra to maximize the precision of the reconstruction. Figure 2 shows the best-fit model and intrinsic profiles with 68% and 95% confidence intervals. We find an intrinsic Lyα flux of (5.42 +0.96 −0.75) × 10 −13 erg cm −2 s −1 (68% confidence interval). For Mg II, we find a best-fit integrated flux of the h line of (2.30±0.16) × 10 −14 erg cm −2 s −1, and the integrated flux of the k line is (3.34±0.17)×10 −14 erg cm −2 s −1 (Figure 4). The total flux is F(Mg II h + k) = (5.64±0.24) × 10 −14 erg cm −2 s −1. These fluxes and the fitted ISM parameters (described in Section 3.2) are given in Table 4. In Figure 3, we compare the line core shapes of the reconstructed Lyα and Mg II lines, with ISM attenuation and instrument resolution effects removed. The Lyα line is >100 km s −1 broader than Mg II, as expected. Their self-reversal shapes are roughly similar, but there are significant discrepancies between the depth, width, and asymmetry of the two species. For Lyα, the self-reversal was only allowed to deviate ±10 km s −1 from line center due to significant degeneracy between the self-reversal depth and the ISM absorption, but it is centered almost exactly at line center. The self-reversal in the Mg II lines was allowed to vary more widely, and it is readily apparent from the observed spectra that the self-reversal is asymmetric (i.e., not centered exactly at the stellar velocity). The Lyα self-reversal depth also appears larger than Mg II's; however, the uncertainties on Lyα's self-reversal parameters are extremely large, as the line core is not observed due to severe ISM attenuation.
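The following Python sketch illustrates, under simplifying assumptions, the structure of the model described above: a Voigt emission profile with a Gaussian self-reversal, attenuated by exp(-tau) of an ISM opacity profile, then convolved with the instrument line spread function. It is not the authors' code; the ISM opacity is drawn here with a schematic Gaussian core rather than a full Voigt profile with damping wings, the LSF is approximated as a Gaussian, and all parameter values are hypothetical.

import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    # Area-normalized Voigt profile (Gaussian sigma, Lorentzian HWHM gamma).
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

def model_lya(v, amp, sig_em, gam_em, rev_depth, rev_sig, tau0, v_ism, b_ism, lsf_sig):
    # v: velocity grid (km/s) about line center; returns the attenuated,
    # LSF-convolved model spectrum on the same grid.
    emission = amp * voigt(v, sig_em, gam_em)
    emission *= 1.0 - rev_depth * np.exp(-0.5 * (v / rev_sig)**2)   # self-reversal
    tau = tau0 * np.exp(-((v - v_ism) / b_ism)**2)                  # schematic ISM opacity
    attenuated = emission * np.exp(-tau)
    dv = v[1] - v[0]
    kern_v = np.arange(-5 * lsf_sig, 5 * lsf_sig + dv, dv)
    kern = np.exp(-0.5 * (kern_v / lsf_sig)**2)                     # Gaussian stand-in LSF
    return np.convolve(attenuated, kern / kern.sum(), mode="same")

v = np.linspace(-600.0, 600.0, 2401)
obs = model_lya(v, amp=1.0, sig_em=80.0, gam_em=20.0, rev_depth=0.3,
                rev_sig=15.0, tau0=8.0, v_ism=-3.0, b_ism=12.0, lsf_sig=25.0)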
Interstellar Medium Characterization
We used the LISM Kinematic Calculator (lism.wesleyan.edu/LISMdynamics.html; Redfield & Linsky 2008), which calculates whether or not a LISM cloud traverses any given sight line, in addition to the radial and transverse velocities of the clouds in a given direction. In the case of GJ 9827, the LISM Kinematic Calculator yields no traversing clouds along the sight line. We note that this is likely due to the boundaries of clouds not being well sampled by limited LISM datasets, and this sight line probably does traverse at least one LISM cloud. There are 5 clouds passing within 20° of GJ 9827's sight line, including the Local Interstellar Cloud (LIC). The radial velocities of these clouds range from -7 to +10 km s −1 with a weighted average of -3.1±5.8 km s −1. Given the limitations of our data (low S/N in Mg II and low spectral resolution for H I), we fit a single ISM cloud component to our spectra. This results in fitted parameters that are likely akin to an average of the true parameters of the multiple clouds along the sight line. From our reconstructions described in the previous section, we find for interstellar H I a radial velocity offset of v HI = -3.35 +2.81 km s −1. Recall that b MgII = b HI /√24.3 and that the offset between the ISM radial velocity and the intrinsic stellar radial velocity was defined to be the same for the two species. This gives v MgII = 2.45 +2.07 −2.31 km s −1 and b MgII = 2.6 +0.4 −0.9 km s −1. While we expect b MgII to be underestimated, this is within the reasonable range for the Doppler width of LISM Mg II absorption (Redfield & Linsky 2004). Note that the absolute uncertainty in the STIS MAMA wavelength calibration is 0.5-1 pixels, or 6-12 km s −1 for G140M at Lyα and 1.5-3.0 km s −1 for E230H at Mg II. The constraint on the Mg II interstellar absorbers is weak because of the narrowness of the stellar emission line serving as a backlight for the interstellar Mg II ions to absorb against. The interstellar Mg II ions are Doppler shifted almost -30 km s −1 from an emission line with FWHM = 24 km s −1, and no stellar continuum is detected around the stellar emission lines. From our simultaneously fitted H I and Mg II column densities, we calculate the ratio N(MgII)/N(HI) = (8.0 +18.3 −6.0) × 10 −7 for GJ 9827's sight line. This value overlaps with the N(MgII)/N(HI) ratio from Linsky (2019) ((3.6 +2.8 −1.6) × 10 −6) at the 68% confidence interval (2.0×10 −7 to 2.6×10 −6). To verify our measurement of the Mg II ISM absorption, we also applied the ISM fitting technique described in Redfield & Linsky (2002). We assume the interstellar absorbers have a radial velocity equal to that of the LIC for this line of sight (+1.24 km s −1; Redfield & Linsky 2008), and we assume the wings of the Mg II emission line to be symmetric. Masking the blueward, ISM-affected half of the line, we mirrored the redward half of the Mg II k profile to create the assumed intrinsic stellar profile, which clearly indicated the presence of ISM absorption in the blue wing. The low S/N ratio in the wings of the line prevented a free fit to the column density. However, visual inspection using the mirrored profile led to a minimum column density log 10 N(Mg II) ≈ 12.5. This value is 0.14 dex above the 1σ uncertainty range of the previous fitting results shown in Figure 4. Using the log 10 N(Mg II) = 12.5 value from our mirrored-profile fitting of the Mg II line, we find N(MgII)/N(HI) = 1.9 × 10 −6, which is in agreement with the ratio value from Linsky (2019).
The conclusion from using two different methods is that LISM absorption is present in the blueward wing, although it does not significantly alter the Mg II emission profile. Our assumption of a single LISM absorption component and our choice of the stellar emission profile (i.e., automated Gaussian versus a mirrored profile) do not formally enter into our error analysis, and the comparison to the LISM average N(MgII)/N(HI) ratio indicates that our Mg II LISM column density may be slightly underestimated. We also searched for other ISM-affected spectral lines in the spectral range of STIS/E230H (2576-2823 Å), such as the iron lines at 2586 and 2600 Å, but the S/N is too low and the spectrum does not present any other spectral features.

GJ 9827 and other K dwarfs
We compare GJ 9827's Lyα and Mg II fluxes to other K dwarfs with measured fluxes for both lines (Fig. 5). To compare to data from Wood et al. (2005) and Youngblood et al. (2016), we convert our fluxes into surface fluxes using the 0.579±0.018 R ⊙ radius from Kosiarek et al. (2020) and the 29.69 pc distance from Gaia DR2 to obtain F S (Lyα) = (2.81 +0.50 −0.39) × 10 6 erg cm −2 s −1 and F S (MgII) = (2.92±0.13) × 10 5 erg cm −2 s −1. GJ 9827's rotation period is poorly constrained, but appears to be between 15 and 30 days with a most likely value of 28.72 days (Rice et al. 2019). Compared to other K dwarfs of similar rotation period (those with rotation period >15 days include HD 40307, HD 85512, HD 97658, α Cen B, 61 Cyg A, ε Ind, 40 Eri A, 36 Oph A, and σ Gem; more information on all the K dwarfs in Figure 5 can be found in Wood et al. 2005 and Youngblood et al. 2016), GJ 9827 has approximately 3.0 times less Mg II surface flux and 2.8 times more Lyα surface flux. Mg II is commonly used as an estimator for the difficult-to-observe Lyα line (Wood et al. 2005), and if we relied on the Mg II observation to estimate GJ 9827's Lyα emission, we would have underpredicted it by almost a factor of 5. Figure 2 shows how this Mg II-derived Lyα profile (both intrinsic and observed) would appear and how the Lyα spectrum strongly rules out this flux level at the >3σ level. This underestimation could have significant consequences for the atmospheres of GJ 9827's planets, because Lyα has a strong effect on photochemistry and, as a proxy for the EUV, could have implications for the atmospheric escape from the planets. We estimate that there is a 0.001% probability that GJ 9827's Lyα flux is consistent with the other K dwarfs in Figure 5, and a 16% probability that its Mg II flux is consistent with other K dwarfs. To calculate this, we drew 10 6 random samples from GJ 9827's Lyα and Mg II flux posterior distributions as well as 10 6 random samples from a normal distribution describing the red best-fit lines in the middle and right panels of Figure 5, and determined what percentage of the posterior samples overlapped with samples from the best-fit lines. The normal distributions describing the best-fit lines had means equal to zero, and standard deviations equal to the standard deviations of all data points about the best-fit line, normalized by the best-fit line. For Lyα, the standard deviation is 0.29 and for Mg II it is 0.46. The samples from the Lyα and Mg II posterior distributions were also cast as differences from the best-fit line and then normalized by the best-fit line. What is causing the apparently significant discrepancy in the Mg II-to-Lyα flux ratio for this star?
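The surface-flux conversion used above is simply F_S = F_obs (d/R_star)^2; a minimal astropy sketch with the values quoted in the text reproduces the Lyα surface flux:

from astropy import units as u
from astropy.constants import R_sun

d = (29.69 * u.pc).to(u.cm)                       # Gaia DR2 distance
R_star = 0.579 * R_sun.to(u.cm)                   # radius from Kosiarek et al. (2020)
F_lya_obs = 5.42e-13 * u.erg / u.cm**2 / u.s      # reconstructed Lya flux at Earth
F_lya_surf = F_lya_obs * (d / R_star)**2
print(F_lya_surf)                                 # ~2.8e6 erg cm^-2 s^-1, as in the text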
It is possible that this ratio is within the expected scatter of K dwarf UV flux-flux relations, and more UV spectroscopic observations of K dwarfs are needed to quantify that typical scatter. Mg II and Lyα form in slightly different regions of the stellar atmosphere, and therefore their emission mechanisms are not exactly coupled, but in practice the scatter is likely dominated by the non-simultaneity of the Mg II and Lyα observations. For example, the GJ 9827 Mg II and Lyα observations were taken 100 days apart, or approximately 3.5 stellar rotation periods apart. Stellar surface inhomogeneities (e.g., active regions, faculae, plage), as well as the evolution of these features responsible for much of the Mg II and Lyα emission, could cause deviations in the expected Mg II-to-Lyα flux ratio. Here we consider some additional effects, including metallicity and rotation evolution. GJ 9827 has slightly sub-solar metallicity ([M/H] = −0.26 to −0.5; Rice et al. 2019), which could potentially explain a low Mg II flux relative to Lyα as well as GJ 9827's potentially anomalously high Lyα flux. A detailed investigation into the effect of metallicity on the relative Mg II and Lyα line strengths in K dwarfs would be needed to determine that. Another possible explanation is the observed rotation evolution of Lyα luminosity compared to less optically thick chromospheric lines like C II. Pineda et al. (under review) showed with a sample of young and field-age M dwarfs that Lyα luminosity declines much more slowly with increasing stellar rotation period (a proxy for stellar age) than other far-UV lines like C II. Assuming that Mg II behaves more similarly to lines like C II rather than Lyα, GJ 9827 could be at a point in its rotational evolution when its Mg II luminosity has decreased significantly but Lyα has not been impacted as much by stellar spin-down. More UV observations of low-activity K dwarfs are needed to determine if this increase of the Lyα/Mg II flux ratio with increasing rotation period is a real effect.

Figure 2. The reconstruction of the Lyα profile is shown in the top two panels (the middle panel is a zoomed version of the top panel with no changes). The STIS spectrum is represented in black with error bars (the error bars are generally smaller than the black line width). The best-fit model and its 68% and 95% confidence intervals are shown as the pink line, dark gray shading, and light gray shading, respectively (the confidence intervals are also generally thinner than the width of the pink line). The intrinsic stellar emission line corresponding to the best-fit model is shown as the dashed blue line, with the 68% and 95% confidence intervals shown as dark-blue and light-blue shading, respectively. The bottom panel shows the residuals (data-model)/(data uncertainty). The dashed gold and green lines show how the intrinsic and observed (ISM-attenuated) spectra would respectively appear if the intrinsic fluxes were consistent with the Mg II-Lyα fluxes from the literature (Section 3.3).

Figure 5. GJ 9827's Lyα and Mg II fluxes compared with other K dwarfs (Wood et al. 2005; Youngblood et al. 2016). The example error bars in each panel apply to colored symbols without error bars. In the left panel, the 68%, 95%, and 99.7% confidence intervals from our simultaneous Lyα and Mg II reconstruction are shown as black contours.
In the right two panels, the black box-and-whisker symbol shows the median Lyα and Mg II surface flux, with the 68% and 95% confidence intervals as the box and whiskers, respectively, at the assumed stellar rotation period (28.72 days; Rice et al. 2019).

Fig. 6 shows the out-of-transit (black solid line) and in-transit (red solid line) spectra for GJ 9827 b, where the out-of-transit spectrum (F OUT) is obtained by averaging the three out-of-transit spectra and the single in-transit spectrum represents F IN. We find that the observed Lyα spectra are very similar in and out of the transit, and there is no evident planetary absorption during the transit. The largest apparent absorption depths occur in the spectral region most strongly contaminated by ISM absorption and geocoronal emission. Furthermore, we compare pre-ingress and post-egress spectra, as in Kulow et al. (2014), to search for a possible atmospheric comet-like tail. McCann et al. (2019) showed that the stellar wind can significantly shape the planetary outflow, creating strong absorption signatures many hours before and after the optical transit. Fig. 7 shows the pre-ingress and post-egress spectra in the top panel, and the difference between these spectra in the bottom panel. No evident difference is found. We integrated the flux in the Lyα blue wing from -250 to -75 km s −1 in the stellar rest frame (1214.787-1215.496 Å) to obtain the average fluxes of each of the four orbits (units of 10 −15 erg cm −2 s −1): 2.20±0.19, 2.28±0.15, 2.16±0.15, and 2.41±0.16. To obtain an upper limit on the size of the planet's H I atmosphere, we fit a transit model of an opaque sphere using an MCMC routine. We use the batman package (Kreidberg 2015) with transit ephemerides from Rice et al. (2019) and uniform limb darkening parameters. The size of the planet at Lyα relative to the star (R Lyα /R ★) was the only free parameter, and we find 1σ, 2σ, and 3σ upper limits of 0.36, 0.48, and 0.57 for R Lyα /R ★ in the blue wing. We repeated this upper-limit calculation for the Lyα red wing (+10 to +250 km s −1 in the stellar rest frame; 1215.841-1216.813 Å) and find average fluxes for each of the four orbits (units of 10 −13 erg cm −2 s −1) of 1.10±0.04, 1.10±0.03, 1.09±0.03, and 1.13±0.03. The 1σ, 2σ, and 3σ upper limits on R Lyα /R ★ in the red wing are 0.21, 0.27, and 0.32.

Investigating He I and Hα
Prieto-Arranz et al. (2018) suggested that the GJ 9827 planetary system is an excellent laboratory to test atmospheric evolution and planetary mass-loss rates. We investigate the presence of evaporation traces in the atmospheres of GJ 9827 b and GJ 9827 d using the CARMENES observations. In particular, we use the data obtained with the near-infrared channel to study the He I triplet lines (at 10829.09 Å, 10830.25 Å, and 10830.34 Å), and the visible channel data to study the Hα line (at 6562.81 Å). As a first step, we correct the observed spectra for telluric absorption contamination using molecfit (Kausch et al. 2015), assuming the parameters presented in Nagel et al. (submitted) for the CARMENES instrumental line spread function model. In particular, the He I region is contaminated by telluric absorption of water vapor and telluric emission of OH (Nortmann et al. 2018; Salz et al. 2018). The water vapor absorption is corrected with molecfit and the OH emission lines are masked, following the methods described in Palle et al. (2020). The telluric line removal and masked regions are illustrated for each planet/night in the top panels of Figures 8 and 9.
After removing the telluric contamination, we can extract the transmission spectrum in both the He I and Hα regions using the same approach, as presented in different studies such as Wyttenbach et al. (2015), Casasayas-Barris et al. (2019), and Chen et al. (2020). The CARMENES observations are referenced to the Earth's rest frame. Thus, we shift the spectra to the stellar rest frame considering the barycentric radial velocity information and the system velocity (31.95 km s −1; Prieto-Arranz et al. 2018). After computing the ratio of each stellar spectrum to the master out-of-transit spectrum (the combination of all out-of-transit spectra using a simple average), we move the residuals to the planet rest frame (see the middle panels of Figures 8 and 9). To do this, we calculate the planet's radial velocity using the radial velocity semi-amplitudes K p b = 166.3 km s −1 and K p d = 98.2 km s −1, for GJ 9827 b and GJ 9827 d, respectively. These values are calculated assuming the stellar radial velocity semi-amplitude (K ★) and the planet and star masses reported by Kosiarek et al. (2020), and using K p = K ★ M ★ /M p (Birkby 2018). Finally, we combine all in-transit residuals in the planet rest frame to obtain the transmission spectrum. In the He I region, the masked intervals due to OH contamination are the same for all spectra in the stellar rest frame, but change when moving the spectra to the planet rest frame. When combining the in-transit residuals to extract the transmission spectrum, we only include the non-masked pixels in the calculation. The final transmission spectra are presented in the bottom panels of Figures 8 and 9. We note that different K ★ and mass values are reported in the literature, which result in different K p values (see Rice et al. 2019 or Prieto-Arranz et al. 2018). However, these different K p values do not have a significant impact on the derived transmission spectra. It is important to notice, however, that using the parameters from Kosiarek et al. (2020), the mid-transit time of GJ 9827 d's observations differs from that obtained using the parameters from Prieto-Arranz et al. (2018), Rodriguez et al. (2018), and Rice et al. (2019). This difference is produced mainly by the differences in the derived orbital period. The orbital period derived by Kosiarek et al. (2020) differs by more than 30 s from the ones presented in the previous studies, and this difference propagates across the different epochs. To be on the safe side, we have repeated the analysis using the parameters from the different references, and the resulting transmission spectra do not show any significant feature in any of the cases. In the two-dimensional residual maps presented in Figures 8 and 9, we are not able to visually distinguish absorption features during the transit that could have a planetary origin, for either He I or Hα. The overall transmission spectrum does not show significant absorption features either. The excess absorption measured in the transmission spectra of GJ 9827 b using a 0.7 Å passband centred on the He I and Hα line cores is −0.3 ± 0.2 % and −0.2 ± 0.2 %, respectively. On the other hand, for GJ 9827 d, we measure +0.2 ± 0.2 % and −0.1 ± 0.2 % excess, respectively. The expected absorption signal (≈ 2nHR p /R ★ 2) of the annular area of one (n = 1) atmospheric scale height (H) during the transit is around 3 × 10 −5 for both planets, assuming an atmosphere dominated by a H/He mixture with near-solar composition (µ = 2.3).
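For reference, the two quantities used above can be sketched in a few lines with astropy: the planetary RV semi-amplitude K p = K ★ M ★ /M p and the expected one-scale-height signal 2nHR p /R ★ 2. The stellar K amplitude, equilibrium temperature, and planet mass/radius below are illustrative placeholders (they are not quoted in this text), so the printed numbers should only be read as order-of-magnitude checks against the values given above.

import astropy.units as u
from astropy.constants import G, k_B, m_p, R_sun, M_sun, R_earth, M_earth

def planet_kp(K_star, M_star, M_planet):
    # Planet RV semi-amplitude from the stellar one: K_p = K_star * M_star / M_p.
    return (K_star * M_star / M_planet).to(u.km / u.s)

def scale_height_signal(T_eq, mu, M_planet, R_planet, R_star, n=1):
    # Expected absorption of n annular scale heights during transit.
    g = G * M_planet / R_planet**2
    H = (k_B * T_eq / (mu * m_p * g)).to(u.km)     # atmospheric scale height
    return (2 * n * H * R_planet / R_star**2).decompose()

R_star = 0.579 * R_sun
M_b, R_b = 4.9 * M_earth, 1.6 * R_earth            # placeholder planet b parameters
print(planet_kp(4.1 * u.m / u.s, 0.606 * M_sun, M_b))          # ~170 km/s, cf. 166.3
print(scale_height_signal(1100 * u.K, 2.3, M_b, R_b, R_star))  # ~3e-5, as in the text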
However, it has been observed in several exoplanet observations that the detected He I signals are comparable to those created by an annular area of n = 100 times the scale height (see dos Santos et al. 2020). Here, with only a single transit per planet, and the relatively small S/N ratio of the observations, especially in the line cores, we find no evidence for an extended H/He upper atmosphere around GJ 9827 b or GJ 9827 d. This is consistent with the non-detection of He I presented for GJ 9827 d by Kasper et al. (2020).

DISCUSSION
The non-detection of planetary Lyα absorption during the transit of GJ 9827 b can be the result of several factors. The particular radial velocity of the host star is such that the interstellar medium absorbs the line core and most of the blue wing of the Lyα line, while the red wing is almost intact. This is relevant, because past Lyα transit observations obtained for both hot Jupiters and warm Neptunes have shown that the planetary atmospheric absorption is strongest in the blue wing and is typically caused by energetic neutral atoms, which are fast stellar wind protons that received electrons from slow planetary hydrogen atoms via charge exchange and are moving towards us at velocities of the order of 100 km s −1 (e.g., Vidal-Madjar et al. 2003; Kislyakova et al. 2014; Ehrenreich et al. 2015; Khodachenko et al. 2017; Shaikhislamov et al. 2020). Weak planetary atmospheric absorption in the red wing of the Lyα line, attributed to natural and thermal line broadening, has been observed before, for example in the case of HD 209458 b (Vidal-Madjar et al. 2003), but this absorption extends at maximum to a few tens of km s −1 into both line wings (Kislyakova et al. 2014; Khodachenko et al. 2017). In the particular case of GJ 9827, the interstellar medium absorption is too close to the blue wing to enable detecting planetary absorption at low velocities, and in the red wing the observed stellar emission flux may be too weak to allow detecting the planetary absorption, if present, above the noise level. It has been suggested that for low-mass planets Lyα absorption may probe the presence of large amounts of water in planetary atmospheres; in this case the hydrogen is the result of water dissociation and further dragging of the lighter hydrogen into the upper layers as a result of the stellar high-energy irradiation (e.g., Bourrier et al. 2017). In this case, the hydrogen originating from the atmospheric water vapor may be detectable, but, as described above, the specific configuration of the stellar emission and interstellar medium absorption hampers detection of the planetary absorption feature. Furthermore, a too large amount of water would also hamper the detection of hydrogen at Lyα because of the reduced atmospheric scale height due to the high mean molecular weight (García Muñoz et al. 2020). It is also possible that absorption is not observed because the planetary atmosphere does not contain enough hydrogen to be detectable. In fact, GJ 9827 b has a bulk density of 7.47 +1.1 −0.95 g cm −3, thus consistent with a primarily rocky composition (Kosiarek et al. 2020). Such a high average density may exclude the possibility that the planet hosts a primary, hydrogen-dominated atmosphere or an atmosphere holding large quantities of water. This is indeed the most likely explanation for the lack of planetary Lyα absorption.
Given the high average density of the planet, the rather old age of the system, and the short orbital separation, we can expect that the planet has lost its primary, hydrogen-dominated envelope through escape in the first few hundreds of Myr (e.g., Kubyshkina et al. 2018a,b), then developing a secondary (e.g., CO 2 -dominated) atmosphere as a result of magma ocean solidification. If this process happened while the star was still active, it is even possible that the planet has also lost this secondary atmosphere through hydrodynamic escape (Kulikov et al. 2006; Tian 2009; García Muñoz et al. 2020), leaving behind the bare surface exposed to the action of the stellar wind, which may have then led to the formation of a mineral exosphere (e.g., Miguel et al. 2011; Vidotto et al. 2018), not dissimilar from that of Mercury (e.g., Pfleger et al. 2015). The conclusion that GJ 9827 b has most likely lost its primary hydrogen-dominated envelope is supported also by calculations of the expected planetary mass-loss rate. We employed the stellar distance and the relations of Linsky et al. (2014) to estimate the stellar high-energy emission (X-ray and EUV; hereafter XUV) from the reconstructed Lyα flux, obtaining an XUV flux at 1 AU of 13.01 erg cm −2 s −1. We further inserted the XUV flux, scaled to the distance of planet b, and the system parameters given by Kosiarek et al. (2020) into the "hydro-based approximation" presented by Kubyshkina et al. (2018c), which enables one to analytically derive hydrogen atmospheric mass-loss rates for planets below 40 M ⊕ accounting for all effects included in the hydrodynamic modelling (both core-powered mass loss and photoevaporation). For GJ 9827 b, we obtained a mass-loss rate of 3.6×10 11 g s −1, that is, about 1.9 M ⊕ Gyr −1 (or about 0.4 planetary masses per Gyr). Considering that the star is ≈10 Gyr old (certainly older than 5 Gyr; Rice et al. 2019) and that the star was more active in the past, it is safe to conclude that the planet has lost its primary hydrogen-dominated atmosphere. This is further supported by the rather small restricted Jeans escape parameter Λ (Fossati et al. 2017) of about 20.6, which alone indicates that the planet is subject to intense mass loss, partially driven by the high atmospheric temperature and low planetary gravity (i.e., core-powered mass loss). We followed the same procedure to estimate the atmospheric hydrogen mass-loss rates of GJ 9827 c and GJ 9827 d, obtaining 1.0×10 11 g s −1 and 5.1×10 10 g s −1, respectively. We remark that for all three planets, the mass-loss rates obtained from the hydro-based approximation are within a factor of five of those obtained by directly interpolating the grid of hydrodynamic upper-atmosphere models presented by Kubyshkina et al. (2018b). These values correspond to about 0.5 M ⊕ Gyr −1 (or about 0.3 planetary masses per Gyr) for GJ 9827 c and about 0.3 M ⊕ Gyr −1 (or 0.1 planetary masses per Gyr) for GJ 9827 d. These values and the bulk density of GJ 9827 c (6.1 g cm −3) indicate that it is very unlikely that the planet still holds part of its primary hydrogen-dominated atmosphere. It is therefore possible that GJ 9827 c has followed an evolutionary path similar to that of GJ 9827 b. For GJ 9827 d, the lower bulk density of 2.51 g cm −3 suggests that the planet may still host part of its primary atmosphere.
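As a cross-check of the numbers quoted above, the restricted Jeans escape parameter Λ = G M p m H /(k B T eq R p) (Fossati et al. 2017) and the conversion of a mass-loss rate into planetary masses per Gyr can be sketched as follows. Only the 3.6×10 11 g s −1 rate and Λ ≈ 20.6 come from the text; the planet mass, radius, and equilibrium temperature are placeholder values.

import astropy.units as u
from astropy.constants import G, k_B, m_p, M_earth, R_earth

def jeans_lambda(M_planet, R_planet, T_eq):
    # Restricted Jeans escape parameter for atomic hydrogen (m_p ~ m_H).
    return (G * M_planet * m_p / (k_B * T_eq * R_planet)).decompose()

def mass_loss_per_gyr(rate, M_planet):
    # Convert a mass-loss rate to planetary masses per Gyr.
    return (rate * u.Gyr / M_planet).decompose()

M_b, R_b, T_b = 4.9 * M_earth, 1.6 * R_earth, 1150 * u.K   # placeholders for GJ 9827 b
print(jeans_lambda(M_b, R_b, T_b))                         # ~20, cf. 20.6 in the text
print(mass_loss_per_gyr(3.6e11 * u.g / u.s, M_b))          # ~0.4 planetary masses per Gyr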
For GJ 9827 d, retaining part of its primary atmosphere may be possible given that the planet appears to have a more sustainable mass-loss rate, suggesting that for this planet mass loss is primarily driven by atmospheric heating due to absorption of the stellar XUV emission, which is in general weaker than core-powered mass loss. However, the large difference between the mass-loss rates we obtained assuming a pure hydrogen atmosphere and those obtained from the constraint given by the non-detection of neutral He in the planetary transmission spectrum (< 4.2 × 10 8 g s −1 ; Kasper et al. 2020) suggests that the planetary atmosphere may not be hydrogen-dominated. For GJ 9827 d, mass loss is driven by the stellar high-energy emission and therefore its estimate depends directly on it. However, a ten times lower stellar XUV emission compared to what is derived from the Lyα flux would bring the planetary mass-loss rate closer to the upper limit given by Kasper et al. (2020). Such a lower stellar XUV emission would be close to what is indicated by the Mg II h&k resonance lines. In fact, following Linsky et al. (2013, 2014), the measured Mg II h&k emission flux would imply an XUV flux at 1 AU of 5.97 erg cm −2 s −1 , which is almost 3 times lower than what is predicted from the Lyα emission flux. Furthermore, one has to consider the rather large uncertainties on the employed conversions, which make this value consistent with a ten times lower XUV flux. Nevertheless, even a ten times lower XUV flux would still be about ten times higher than what is suggested by the non-detection of the He lines. Interestingly, GJ 9827 d has a bulk density similar to that of π Men c, which also has a predicted hydrogen mass-loss rate of the order of 10 10 g s −1 (García Muñoz et al. 2020; Shaikhislamov et al. 2020) and for which Lyα transit observations led to a non-detection of the planetary atmosphere (García Muñoz et al. 2020). Transit observations of GJ 9827 d, and of other similar planets such as π Men c, aiming at characterising their atmospheres would be very valuable for understanding the nature of puffy super-Earths (García Muñoz et al. 2020). CONCLUSIONS In this paper we presented a search for exospheres around two planets orbiting GJ 9827, a bright K6 star discovered by the Kepler/K2 mission to host three super-Earths in a 1:3:5 commensurability. We observed GJ 9827 b with HST and GJ 9827 b and d with CARMENES during transit in order to characterize their atmospheres via the Lyα, Hα, and He I transitions, and we found no evidence of an extended atmosphere around either of the planets. Theoretical calculations of the mass-loss rate supported our results, predicting escape rates of 4.3×10 11 g s −1 , 7.2×10 12 g s −1 and 3.3×10 10 g s −1 for GJ 9827 b, c and d, respectively, making them unlikely to still retain hydrogen-dominated atmospheres. We also made use of the HST spectra in order to characterize GJ 9827's high-energy emission, which was used for the above escape rate calculations, and the ISM absorption along its sightline. We reconstructed the intrinsic Lyα and Mg II stellar fluxes, necessary because of the attenuating H I and Mg II interstellar gas between us and the star, finding F(Lyα) = (5.42 +0.96 −0.75 ) × 10 −13 erg cm −2 s −1 and F(Mg II) = (5.64 ± 0.24) × 10 −14 erg cm −2 s −1 . We report that GJ 9827 is Doppler-shifted +30 km s −1 from the velocity frame of the absorbing ISM gas, which results in almost negligible attenuation of the narrow Mg II lines, but dramatic absorption of the broad Lyα line.
However, the reconstructed intrinsic Lyα flux is inconsistent with the literature predictions based on its Mg II emission (Wood et al. 2005; Youngblood et al. 2016). Comparing GJ 9827 to other K dwarfs as well as M dwarfs, we found it to have a significantly higher Lyα surface flux and a significantly lower Mg II surface flux. This could have important implications for the planetary atmospheres in the system, as Lyα and Mg II, the two brightest emission lines in GJ 9827's UV spectrum, have a large effect on atmospheric photochemistry, potentially controlling which species dominate in the atmosphere. GJ 9827's Lyα and Mg II flux discrepancy also highlights the importance of caution when using UV scaling relations for atmospheric escape or photochemistry calculations. Not acknowledging the natural variability between individual stars could be detrimental to our assumptions about the composition, or even the presence, of exoplanet atmospheres. As a nearby system of planets transiting a bright star, GJ 9827 is being intensely studied for a variety of reasons. More HST STIS UV transit observations are planned for GJ 9827 b, which will allow us to confirm the results presented here and investigate possible variations in the stellar flux. We have also observed transits of all three super-Earths orbiting GJ 9827 with Spitzer (Livingston et al., in prep.). These, together with our approved Cycle 1 GO CHEOPS transit observations, will further enhance dynamical constraints via transit timing variations, provide invaluable measurements in the infrared to complement our Hubble observations, and facilitate efficient future observations (e.g., with JWST) of a system that will be intensely characterized in the years to come. ACKNOWLEDGMENTS Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555.
Prevalence of intestinal parasites in Lorestan Province, West of Iran Ebrahim Badparva, Farnaz Kheirandish*, Farzad Ebrahimzade Department of Parasitology and Mycology, School of Medicine, Lorestan University of Medical Sciences, Khorramabad, Iran Department of Biostatistics, School of Health and Nutrition, Lorestan University of Medical Sciences, Khorramabad, Iran Asian Pac J Trop Dis 2014; 4(Suppl 2): S728-S732 Introduction Intestinal parasites are among the most common infections in the world, since about one-third of people have these infections, which cause numerous symptoms, especially in young children [1]. Despite the efforts and extensive programming of the World Health Organization, the prevalence of intestinal parasites has caused economic, social, and health losses [2]. The prevalence of these parasites differs between regions, as environmental, economic, regional, political, cultural, and social factors have a significant role in their distribution [3,4]. For example, lack of access to clean water, poor sanitation, economic poverty, high population density, living in the tropics, and unexpected events such as earthquakes and floods can increase the risk of parasitic infection [5]. Parasitic infections in patients, especially children, are associated with poor growth, physical weakness, and loss of academic progress [4,6]. It is believed that improving living conditions could be effective in reducing the prevalence of parasites [4]. Zoonotic parasites are maintained in animals, which act as reservoir hosts; special attention should be paid to these hosts to control parasites [7]. Collecting epidemiological data, such as the prevalence of intestinal parasitic infections in different regions, is a useful prerequisite for planning and controlling parasitic infections [8]. Iran is located to the south of the Caspian Sea, west of Afghanistan, east of Iraq, and north of the Persian Gulf. The weather in the north and south of Iran is temperate and subtropical, respectively. The prevalence of parasites in this country is diverse because of the various weather conditions and the diversity of cultures and traditions in different regions of Iran [9]. The purpose of this study was to investigate the prevalence of intestinal parasites in Lorestan Province, Iran. Study area This study was carried out in Khorramabad City, Lorestan Province, West of Iran, located between latitudes 32°30´ and 48°1´ N and longitudes 55°17´ and 61°15´ E. The long-term annual mean temperature and precipitation are 17.07 °C and 580 mm, respectively. The weather and climate of Lorestan Province are variable; the province is classified as a region with a semi-arid climatic condition. The total area of the province is 28 064 km 2 and the total cultivated area of barley is about 138 978 ha, consisting of 9 029 ha of irrigated and 129 949 ha of dry-land barley [10]. Feces collection and analysis This cross-sectional study was conducted in 2013. Samples were collected using multi-stage random cluster sampling. Trained people were chosen for the completion of the questionnaires and the collection of the samples. Stool samples were collected in stool containers and transported to the laboratory of parasitology. Samples were examined by techniques including direct wet-mount, Lugol's iodine staining, formaldehyde-ether sedimentation, agar culture, and Trichrome staining. Statistical analysis Results were statistically analyzed using SPSS software, version 17.
The chi-square test at the 5% level was used to assess the relation between the prevalence of intestinal parasites and qualitative variables. Differences were considered significant when the P value was less than 0.05. Results The frequency of intestinal parasites in 2 838 stool samples was 465 (16.4%), of which 188 (13.5%) samples were from urban areas and 277 (19.2%) from rural areas. Infection in rural areas was significantly higher than in urban areas (P<0.001). The prevalence of intestinal parasites in the studied cities of Lorestan Province, Iran is reported in detail in Table 1. Out of 465 infected samples, 456 (98%) were contaminated with protozoan parasites and 9 (2%) with helminths (Table 2). Also, 432 (15.2%) samples were infected with one parasite and 33 (1.2%) with more than one parasite. Infection in people who only sometimes used soap to wash their hands was significantly higher than in those who always used soap (P<0.001) (Table 2). Infection in people with poor economic conditions was significantly higher than in the two groups with moderate and good economic conditions (P<0.001) (Table 2). The relationship between intestinal parasites and how vegetables were washed was not significant (P>0.05) (Table 2). Discussion Intestinal parasites are among the most common infections in the world. About 3.5 billion people are affected worldwide, of whom 450 million infected people, mostly children, suffer the effects of these contaminants. Infection with these parasites may be associated with symptoms such as intestinal disorders, anemia, malabsorption, growth failure in children, and physical and psychological problems; they are therefore seen as a major health problem [12]. In this study, the prevalence of intestinal parasites was 16.4%, which is lower than the results of studies conducted in recent decades in other provinces of Iran, and even in some countries [2,5,…]. The result of the present study was similar to the results of previous studies in Lorestan Province, in which the prevalence of intestinal parasites was reported to be low [39-41]. The results of a number of similar studies in other countries show that the frequency of intestinal parasites has been significantly reduced compared with previous decades, and reasons for this reduction have been proposed [16,31,35,38]. In Iran, due to the significant improvement in the level of public health resulting from modern agricultural development, an improved household economy, and better public health sanitation, the transmission of parasitic diseases and their prevalence are declining compared with the past [6,28]. A total of 98% of the parasites reported in this study were protozoa, significantly more than helminths (P<0.05). This is consistent with a number of studies that have reported the frequency of protozoa to be higher than that of helminths [38]. The higher rate of intestinal protozoan infections is due to factors such as direct transmission, simple reproduction, a simple life cycle, the resistance of protozoan cysts, and healthy carriers [28]. The most common intestinal parasite in this study was G. lamblia, which, despite its decline in the past, is still at the head of intestinal parasitic infections [39,40]. Almost all children up to age 3 in tropical areas with poor hygiene levels have been infected at least once [41]. In this study, the second most common protozoan was B. hominis. Although this frequency is lower than the results of some studies [42], it is similar to the prevalence reported in previous studies conducted in Lorestan Province [43].
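The urban/rural comparison reported above can be checked with a standard chi-square test of independence. A minimal sketch in Python; the group totals are back-calculated from the reported percentages (188/0.135 ≈ 1393 urban and 277/0.192 ≈ 1443 rural subjects), so they are approximations rather than the exact study counts:

```python
from scipy.stats import chi2_contingency

# Rows: urban, rural; columns: infected, not infected (approximate counts
# reconstructed from the reported prevalences of 13.5% and 19.2%).
table = [
    [188, 1393 - 188],
    [277, 1443 - 277],
]
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, P = {p_value:.1e}")  # P < 0.001
```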
Some researchers believe that B. hominis is pathogenic, but in most cases it is not treated, so we should be more cautious in dealing with this parasite [44]. In the present study, the prevalence of intestinal parasites in the group with poor economic conditions was significantly higher. This result is consistent with similar studies, which have suggested that the prevalence of parasites is higher in impoverished areas with high population density [5]. In this study, out of 465 infected samples, 262 (56.4%) were infected with nonpathogenic intestinal parasites. Although these parasites play no role in causing infectious disease, their presence reflects fecal-oral transmission, which is an indicator of the health status of an area. The results of this study are consistent with the results of the study that showed that the prevalence of intestinal parasites is higher in rural areas [28]. Possible reasons for the reduced incidence of intestinal parasites in Lorestan Province include the development of universities, whose growing student population has increased awareness; improvement of the environment; easier access to health care centers; increased health-education messaging in the provincial mass media; an improved health culture; and proper disposal of sanitary waste. These reasons are similar to the results obtained in a number of studies conducted in Iran [16,31,37,38]. According to the results, it is recommended that epidemiological studies be carried out in other Iranian provinces, each of which has its own unique weather conditions, to learn more about the distribution of parasites. Finally, under the supervision of the Ministry of Health and in collaboration with the medical universities, the necessary measures to prevent, control, and minimize parasitic infections should be taken. Conflict of interest statement The authors declare that there is no conflict of interests.
Interactions between Reduced Graphene Oxide with Monomers of (Calcium) Silicate Hydrates: A First-Principles Study Graphene is a two-dimensional material with exceptional mechanical, electrical, and thermal properties. Graphene-based materials are, therefore, excellent candidates for use in nanocomposites. We investigated reduced graphene oxide (rGO), which is produced easily by oxidizing and exfoliating graphite, in calcium silicate hydrate (CSH) composites for use in cementitious materials. Density functional theory was used to study the binding of moieties on the rGO surface (e.g., hydroxyl OH/rGO and epoxide/rGO groups) to CSH units, such as silicate tetrahedra, calcium ions, and OH groups. The simulations indicate complex interactions between OH/rGO and silicate tetrahedra, involving condensation reactions and selective repairing of the rGO lattice to re-form pristine graphene. The condensation reactions occurred even in the presence of calcium ions and hydroxyl groups. In contrast, rGO/CSH interactions remained close to the initial structural models of the epoxy rGO surface. The simulations indicate that specific CSHs containing rGO with different interfacial topologies can be manufactured using coatings of either epoxide or hydroxyl groups. The results fill a knowledge gap by establishing a connection between the chemical compositions of CSH units and rGO, and confirm that a wet chemical method can be used to produce pristine graphene by removing hydroxyl defects from rGO. Introduction Graphene [1] is a two-dimensional honeycomb plane of sp 2 carbon atoms, and has received considerable attention for its use in applications such as electronic devices [2], energy storage devices [3], and composite materials [4,5], because of its unique mechanical [6,7], electronic [8], thermal [9-12], and chemical properties [13-17]. The development of a cheap method of fabrication of high-quality graphene remains a considerable challenge [15]. Graphene can be produced by chemical vapor deposition (CVD), using a catalytic metal substrate made of materials such as Cu or Ni [18,19], or by mechanical and chemical exfoliation of graphite, with deposition of the exfoliated pieces on various substrates [1,15,20]. The chemical vapor deposition method involves the use of high-temperature procedures and specialist equipment, and is a relatively expensive method of producing graphene with very few defects. A great deal of attention has been paid to the development of methods involving the chemical oxidation and exfoliation of graphite, to yield graphene oxide (GO), with the subsequent reduction of the oxygen-containing functional groups, using thermal, chemical, or electrochemical reduction methods, to yield reduced graphene oxide (rGO). Many recent studies have focused on manufacturing very pure rGO sheets (i.e., graphene with low coverage densities of oxygen-containing functional groups) from GO [22]. Feng et al.
produced rGO structures with low oxygen contents (down to 5.6% by weight), using Na and NH 3 treatments with active solvated electrons as a strong reducing agent [30]. Liao et al. [31] produced graphene from exfoliated graphite oxide in deionized water at ~pH 3, for between 12 and 48 h at 120 or 95 °C, at an oxygen-to-carbon reduction ratio (O:C) of about 1:6, and analyzed the product by means of C 1s X-ray photoelectron spectra. By considering Fourier transform infrared spectra, they found a marked decrease in the number of hydroxyl and epoxide groups. Pei et al. [32] fabricated very conductive and flexible graphene films by reducing GO films, by immersion in a solution of HI, an 85% N 2 H 4 ·H 2 O solution, and a 50 mM NaBH 4 aqueous solution at room temperature. They found that after reducing the GO films with HI, most of the oxygen-containing groups from the GO film had been removed, causing the C-C bonds to become dominant, giving a C/O atom ratio ≥ 12 and an electrical conductivity of up to 298 S/cm. Much lower C/O atom ratios and electrical conductivities were found for GO films that were reduced using N 2 H 4 ·H 2 O and NaBH 4 . Concrete- and cement-based materials are second only to water in terms of their use around the world [33-36]. The total annual concrete production around the world is ≥20 × 10 9 t; it is currently increasing by 5% per year, and contributes 5-10% to global anthropogenic carbon dioxide emissions [37,38]. The chemically active ingredients of cements in clinker particles are hydrated to produce cement paste. Unhydrated clinker and calcium silicate hydrates form a multi-scale porous composite, the primary binding phase of which is known as CSH gel [39,40]. This acts as a glue that adheres to fine and coarse aggregates to create concrete [37,41]. CSH gel has a complex structure [33,42,43], including water in the interlayers, layered material structures at the nanoscale [44,45], a globular texture at the mesoscale [46-48], and a multi-scale porous structure [49-51]. The engineering properties of cement-based materials are largely controlled by the properties of the CSH gel [33,39,52], which has an intrinsically brittle nature and is weak in tension. This weakness is usually overcome by reinforcing cement-based materials with metal fibers and, more recently, the possibilities offered by inorganic or carbon nanotubes are being explored [53-56]. In the latter case, it is relatively difficult to combine the aqueous solution involved in the cementitious matrix with hydrophobic carbon nanostructures, such as pristine carbon nanotubes. An understanding of the defects seems to be key in controlling the final properties of the cementitious nanocomposite [56]. The current main challenge in the field lies in improving our understanding of the mechanisms involved in the interactions between the chemical components of cementitious CSH gel moieties [57] and pristine or defect-containing graphene sheets in water, at the nanoscale. This is particularly relevant in the context of Dimov et al. [58], who demonstrated, experimentally, that graphene-enabled nanoengineered concrete composites can have ultra-high strengths and interesting additional functionalities. Yao et al. recently synthesized very tough, highly ordered CSH-GO composites, assuming that they contain COOH groups [59]. Hou et al.
recently used reactive force field molecular dynamics to investigate the mechanical properties and reactivities of GO sheets that are functionalized with hydroxyl (C-OH), epoxy (C-O-C), carboxyl (COOH), and sulphonic (SO 3 H) groups, with a 10% ultra-confined coverage with the calcium silicate hydrate gel (CSH), as shown in Figure 1b [60]. Calculations using potentials have mainly been performed to study the GO-COOH groups on graphene, but such groups are attached to the GO edges, so constitute a minority of the groups attached to GO. We therefore focus on hydroxyl and epoxy groups. To the best of our knowledge, the mechanism involved in the interactions between rGO and monomers in CSH gels during the fabrication of cementitious composite materials (Figure 1c) has not been studied previously. The aim of this study was to improve our fundamental understanding of the interactions of CSHs (e.g., CSH gel) with rGO, using density functional theory (DFT) calculations. Interactions at the interface were taken into account by calculating the adsorption energies of optimized CSH gel units with OH/rGO and epoxide/rGO sheets in various initial configurations. The results fill a key knowledge gap, by establishing connections between the chemical components of CSH gel in cementitious materials and rGO. Simulation Parameters The interactions between rGO and CSHs are studied by performing DFT electronic structure calculations [61]. The Vienna ab initio simulation package [62-64] and the projector augmented-wave method were used to describe electron-ion interactions. A well-converged plane-wave cutoff energy of 400 eV was employed. Electron exchange and correlation were treated in the generalized gradient approximation with the Perdew-Burke-Ernzerhof parametrization [65]. A force tolerance of 0.01 eV/Å was used for the structural optimizations. The Brillouin zone was sampled using a well-converged k-sampling, given by 2 × 2 × 1 Monkhorst-Pack k-points for the whole system [66]. The density of states (DOS) was calculated using a refined mesh of 34 × 34 × 1 Monkhorst-Pack k-points. The charge transfers were calculated using Bader analysis, with the code developed by Henkelman et al. [67]. Model Building Models of rGO, focusing on hydroxyl or epoxide groups, were developed with periodic boundary conditions in the x- and y-directions, to remove finite length effects, and a well-converged vacuum layer 10 Å thick to avoid interactions with adjacent cells in the z-direction. Larger boxes of 15 Å were also tested for some cases involving rGO, Ca, and silicate units, and the calculated energy differences were found to converge at the sub-meV level. The optimized primitive rGO unit cell therefore has the parameters a = 12.30 Å, b = 12.30 Å, c = 10 Å, α = 90°, β = 90°, and γ = 60°. We treated silicate hydrate moieties as Si(OH) 4 monomers, because these moieties occur in solution when the calcium silicates in cementitious clinkers are hydrated. SiO(OH) 3 − units, which are produced in small amounts during the hydration process and contribute to the basic pH of cement, were also studied. The contributions of van der Waals interactions were not considered in any of the configurations, because the differences in van der Waals potential energies were found to be small in test calculations. ∆E was calculated as the difference between the energies of the relaxed configuration and the ground-state structure.
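For readers who want to reproduce this setup, the parameters above map directly onto a standard DFT driver. A minimal sketch using ASE's VASP interface; the structure file name is hypothetical, and only the functional, cutoff, k-mesh, and force tolerance are taken from the text (relaxation settings such as IBRION and NSW are reasonable assumptions, not values stated by the authors):

```python
from ase.io import read
from ase.calculators.vasp import Vasp

# Hypothetical POSCAR-format file holding the periodic rGO slab
# (a = b = 12.30 A, gamma = 60 deg, ~10 A vacuum along z).
atoms = read("rgo_slab.vasp")

calc = Vasp(
    xc="pbe",         # PBE generalized gradient approximation
    encut=400,        # plane-wave cutoff energy (eV)
    kpts=(2, 2, 1),   # Monkhorst-Pack sampling for the slab
    ibrion=2,         # conjugate-gradient ionic relaxation (assumed)
    nsw=200,          # maximum number of ionic steps (assumed)
    ediffg=-0.01,     # stop when all forces are below 0.01 eV/A
)
atoms.calc = calc
print(f"relaxed total energy: {atoms.get_potential_energy():.3f} eV")
```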
Further, rGO structures with oxygen-to-carbon reduction ratios (O:C) of 1:50 were used in the simulation models. (The vacancy-adatom pair is one of the most common defects in graphene, but the defect mobility is high [68]; thus, single vacancies would be saturated in solution before entering the composites.) Adsorption Energy The adsorption energy (E ads ) relates to the interactions between the sorbent and the substrate, and was calculated using Equation (1), as follows: E ads = E Total − (E Sub + E Ab), (1) where E Total is the total energy of the composite system, E Sub is the energy of the graphene or rGO plane, and E Ab is the energy of the absorbed moiety [69]. Feasible configurations of hydroxyl- and epoxide-reduced graphene with CSH gel moieties were calculated to allow the investigation of the mechanisms involved in the interactions between the different moieties. We were interested in investigating the mechanism involved in the interaction between Si(OH) 4 hydrated silicate monomers and hydroxyl/rGO. Two separate configurations for our simulation models, with Si(OH) 4 in two different positions relative to the hydroxyl/rGO sheet, were therefore first prepared as input geometric structures. Two dissimilar Si(OH) 4 unit configurations, with respect to the distance to the hydroxyl/rGO sheet, were found. There was a stable configuration in which the starting geometry did not change much after optimization, as shown in Figure 2a. However, optimization indicates that the ground-state structure can become chemically reconstructed and produce a water molecule between the graphene layer and the SiO(OH) 3 unit, as shown in Figure 2b. The energy of the reconstructed structure (Figure 2b) was ≈0.3 eV lower than that of the expected optimized configuration (Figure 2a). This structure corresponds to the ground-state energy structure for Si(OH) 4 deposited slightly further from the hydroxyl/rGO sheet than in the expected configuration. Results The calculated adsorption energies for the SiO(OH) 3 and Si(OH) 4 configurations were −1.683 eV and −0.094 eV (2.17 kcal/mol), respectively. Therefore, the SiO(OH) 3 unit in the ground-state structure had a lower adsorption energy than the other configurations, indicating that the system containing SiO(OH) 3 with a water molecule next to a graphene plane was the most favorable configuration. Importantly, a condensation reaction occurs when a hydroxyl group at the GO interface becomes dissociated and combines with a hydrogen atom released by Si(OH) 4 . Then, several configurations of water molecules in a system with a SiO(OH) 3 unit on graphene were considered, to determine the optimum location of the water molecule. The geometric structure consisting of SiO(OH) 3 , a water molecule, and graphene, shown in Figure 2b, was the ground-state structure and the most stable configuration. The ground-state structure forms for two reasons: the water molecule establishes two strong (short) O-H hydrogen bonds with SiO(OH) 3 , and, most importantly, it is near the graphene surface, implying an interaction with the graphene surface and an increase in the bonding energy. A SiO(OH) 3 unit on hydroxyl/rGO, in a singly negatively charged system, was also considered. Perhaps surprisingly, we found that in the ground-state structure, hydrogen dissociates from the hydroxyl/rGO surface and is transferred to the SiO(OH) 3 unit to saturate the dangling oxygen atom, as shown in Figure 2c. SiO(OH) 3 on hydroxyl/rGO in a neutral system was used as another initial simulation model, and we again found that the hydrogen becomes dissociated from the hydroxyl/rGO surface and is transferred to the SiO(OH) 3 unit. This indicates that a hydrogen atom becomes dissociated from the hydroxyl/rGO surface regardless of whether the system is charged or neutral. A dangling oxygen bond therefore points towards the graphene plane and is almost fully occupied through charge transfer from the graphene sheet. Model Calculations for an SiO(OH) 3 Unit on Graphene: Chemisorbed and Physisorbed Configurations Next, we considered the adsorption properties of a SiO(OH) 3 unit on the graphene surface. The unit was placed on the graphene plane at several distances in four different configurations, with the system being either neutral or singly negatively charged. All four configurations were optimized, and the ground-state structure was found for the configuration in which the unit was physisorbed to the graphene sheet. The configurations for the neutral cases are shown in Figure 3. The ground-state structure was found to be SiO(OH) 3 physisorbed to the graphene sheet, which has a lower energy state (by 0.23 eV) than the chemisorbed configuration. When the system is singly negatively charged, both relaxed structures have the SiO(OH) 3 unit physisorbed to the graphene sheet. Figure 3. Two different initial chemisorbed and physisorbed configurations, with different distances between the graphene plane and SiO(OH) 3 , were found for the neutral systems, and physisorbed configurations were found for the singly negatively charged system. The energy difference ∆E (in eV) for each configuration and charge state is given below the respective structure. The adsorption energy with respect to the SiO(OH) 3 units is also shown in each case. The calculated adsorption energies for SiO(OH) 3 , for the physisorbed ground-state structure and the next (chemisorbed) configuration in the neutral system, are −1.59 and −1.36 eV, respectively, as shown in Figure 3, for the two clearly different distances. In contrast, the adsorption energies for the two similar physisorbed configurations on the negatively doped graphene plane are 2.16 and 2.22 eV, as shown in Figure 3. An electron is transferred to SiO(OH) 3 to form SiO(OH) 3 − in the overall charged system, so the initial energy state must change by the difference between the neutral and charged silicate units. We expect the adsorption of SiO(OH) 3 to the doped graphene sheet in the neutral and singly negatively charged states to be almost the same, because the doping electron must be shared among a large number of carbon atoms. The corrected adsorption energy for the SiO(OH) 3 − unit, typical in a basic solution, was found to be exothermic by −1.4 eV, because charge fills the oxygen levels. It seems that the adsorption of silicate hydrate units onto rGO would remain favorable at basic pH values and with negatively charged units.
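Equation (1) amounts to simple bookkeeping once the three DFT total energies are in hand. A minimal sketch with hypothetical total energies chosen only to illustrate the sign convention (negative E ads means favorable adsorption); these are not the energies computed in this work:

```python
def adsorption_energy(e_total, e_substrate, e_adsorbate):
    """E_ads = E_Total - (E_Sub + E_Ab); negative values indicate binding."""
    return e_total - (e_substrate + e_adsorbate)

# Hypothetical DFT totals (eV) for a silicate unit on a graphene/rGO slab.
e_ads = adsorption_energy(e_total=-501.20, e_substrate=-480.55,
                          e_adsorbate=-19.06)
print(f"E_ads = {e_ads:.2f} eV")  # -1.59 eV: favorable, physisorption-like
```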
Hydroxyl/rGO with Silicate Hydrate Units in the Presence of Ca Ions Systems containing a Ca ion were considered next, prepared with the ion either close to or far from Si(OH) 4 as the initial geometric structures. The structures were optimized, and the ground-state energy was found to be 0.495 eV lower for the configuration with a Ca ion close to Si(OH) 4 (Figure 4a) than for the other initial configuration, with a Ca ion far from Si(OH) 4 . In fact, for the same configuration without a Ca ion, charge transfer occurred from the graphene sheet to the 2p orbital of the dangling oxygen atom in SiO(OH) 3 . Therefore, in a system containing a Ca ion, the Ca ion interacts directly with SiO(OH) 3 . Some of the charge on the Ca ion is transferred to the 2p orbital of an oxygen atom in SiO(OH) 3 , to occupy the orbital fully, and the remaining charge is transferred to the graphene sheet. The structure of Si(OH) 4 on a hydroxyl/rGO substrate was also optimized in the presence of a Ca ion, in a doubly positively charged system. The hydroxyl group dissociates from the hydroxyl/rGO substrate and a water molecule is formed, and the Ca ion also interacts with SiO(OH) 3 . The distance between the Ca ion and the oxygen atom in the ground-state structure is ~2.075 Å, as shown in Figure 4a, which is 0.1 Å more than the distance in the other configuration. The results indicate that the adsorption energies of SiO(OH) 3 in the four different configurations, with a water molecule on the graphene surface, vary between −1.43 and −1.69 eV; the latter corresponds to the ground state shown in Figure 2b. These adsorption energies for the SiO(OH) 3 model are slightly higher than the adsorption energies of −1.36 to −1.59 eV found when no water molecule is present on the graphene surface (Figure 3). The energies for the adsorption of SiO(OH) 3 to the graphene surface are a few eV lower, at −6.22 to −4.38 eV, in the presence of a water molecule or hydroxyl group and a Ca ion (Figure 4a,b). In other words, the adsorption energy decreases when a Ca ion is added to the system. When a Ca ion is far from SiO(OH) 3 on the hydroxyl/rGO sheet in the initial simulation model, the hydroxyl group is found not to dissociate from the hydroxyl/rGO sheet. A lower energy, −1.93 eV, is found for the ground-state structure than in other configurations. However, the ground-state structure in the initial simulation model relates to the configuration with the Ca ion near SiO(OH) 3 . Optimization for the ground-state structure indicates that the hydroxyl group becomes dissociated from the hydroxyl/rGO, and the Ca ion is involved in bonding, so charge transfer from the Ca ion to the two neighboring oxygen atoms occurs, causing a nearly full occupation of the oxygen orbitals (Figure 4b). Hydroxyl/rGO with Silicate Hydrate Units, in the Presence of Ca Ions and Involving Hydroxyl Groups We also studied the interactions between Si(OH) 4 and SiO(OH) 3 and the hydroxyl/rGO sheet, in the presence of a hydroxyl group and a Ca ion.
The results indicate that the ground-state structure has the same configuration as the structure with a Ca ion close to the silicate monomer, which causes the hydroxyl group to dissociate from the hydroxyl/rGO sheet and the charge to be compensated by charge transfer from the Ca ion to the two neighboring oxygen atoms on two hydroxyl groups, as shown in Figure 4c,d. The ground-state structures of Si(OH) 4 and SiO(OH) 3 have energies that are almost 3 and 0.4 eV lower, respectively, than the next lowest energy configurations. In contrast, for the configuration with a Ca ion that is far from the silicate, the hydroxyl group is not dissociated from the sheet. The adsorption energy for SiO(OH) 3 in the ground-state structure, in the presence of a hydroxyl group and a Ca ion on the graphene plane, is −2.63 eV, which indicates that stronger adsorption occurred than for the removal of only hydroxyl, as shown in Figure 4c. In fact, even when more hydroxyl groups are added to the system, the hydroxyl group becomes dissociated from the hydroxyl/rGO sheet, because the hydroxyl groups from Si(OH) 4 remain strongly bonded to the Si atom. Epoxide/rGO with CSH Units As mentioned above, the hydroxyl group in the ground-state structures of the CSH composites dissociates from the hydroxyl/rGO sheet, to produce pristine graphene (Figure 2b). According to previous work [70], the contribution of carbonyl/epoxy groups in nanocomposites is still important, and they are not fully hydrolyzed during the preparation of composites. However, as shown in Figure 5a, pristine graphene is not produced when Si(OH) 4 is on the epoxide/rGO surface, and the ground-state structure remains similar to the initial structure, with E ads ≈ 0.127 eV (2.93 kcal/mol). The mechanism involved in the interaction between SiO(OH) 3 and the surface of the epoxide/rGO sheet was investigated, and the results are shown in Figure 5b. We found an adsorption energy for SiO(OH) 3 on the epoxide/rGO surface of −0.853 eV, which is lower than the adsorption energy for a neutral Si(OH) 4 silicate hydrate moiety. The SiO(OH) 3 silicate unit moves away from the epoxide groups, as shown by the physisorption model described above. The nearest distance between the dangling oxygen atom and the epoxide/rGO sheet is 2.931 Å. Even longer distances were found in several previous studies. For example, Gao et al. [71] calculated the adsorption energies for H 2 S and CH 4 on intrinsic graphene, which were −0.038 and −0.022 eV, respectively, at distances of 3.813 and 3.865 Å, which are larger than the distances included in Figure 5. Then, the interaction between SiO(OH) 3 and the epoxide/rGO sheet was considered in a neutral system in the presence of a Ca ion and two hydroxyl groups, as shown in Figure 5c. As discussed above, the adsorption energy becomes more favorable when Ca ions are involved. The adsorption energy for SiO(OH) 3 in the presence of a Ca ion and hydroxyl functional groups is therefore −1.52 eV (Figure 5c), lower than the adsorption energy for the same structure without the functional groups (Figure 5b). Electronic Properties of the Ground State with the Condensation Reaction The electronic properties of the ground-state structures described above are now analyzed in more detail. The total DOS of the ground-state structure consisting of a graphene layer, a water molecule, and a SiO(OH) 3 unit is shown in Figure 6a. The Fermi level is indicated by a vertical dashed line at the value of zero. The charge neutrality point for the graphene layer is higher than the Fermi level, because the graphene layer is positively charged. The DOS projected on the non-protonated oxygen atom belonging to SiO(OH) 3 is plotted in pink. Below the Fermi level, at a valence band energy of −0.073 eV, there is a large DOS peak from the oxygen atom with nearly one extra electron. Bader charge analysis shows the charge density distribution assigned to the atoms, as shown in Figure 6b. Thus, our results indicate that there is charge transfer from graphene to the dangling oxygen atom, such that the valence levels of the SiO(OH) 3 moiety become fully occupied. Electronic Properties of the Ground State SiO(OH) 3 with the Graphene Sheet The electronic properties of the ground-state structures given by the physisorption model of SiO(OH) 3 on the graphene plane, for the neutral and singly negatively charged systems, were assessed. The Bader charge analysis method indicates that the charge transfers between the graphene and SiO(OH) 3 were similar to the charge transfers in the ground state on the rGO, as shown in Figure 7a. For the neutral system, the graphene sheet lost 0.62 electrons and the charge density of the dangling oxygen became 7.32 electrons (Figure 7a).
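The Bader populations just quoted reduce to per-fragment bookkeeping: summing the assigned electrons over each fragment and comparing with an isolated-fragment reference gives the net transfer. A minimal sketch with hypothetical per-atom populations, tuned only so the graphene loss matches the 0.62-electron figure above:

```python
def net_electron_transfer(bader_populations, reference_populations):
    """Net electron gain (+) or loss (-) of a fragment, from Bader charges."""
    return sum(bader_populations) - sum(reference_populations)

# Hypothetical fragment populations: a 50-atom graphene patch that lost
# charge, and a dangling oxygen that gained it (reference values assumed).
graphene = net_electron_transfer([3.9876] * 50, [4.0] * 50)  # -0.62 e
oxygen = net_electron_transfer([7.32], [6.70])               # +0.62 e
print(f"graphene: {graphene:+.2f} e, dangling O: {oxygen:+.2f} e")
```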
For the neutral system, the graphene sheet lost 0.62 electrons and the charge density of the dangling oxygen became 7.32 electrons (Figure 7a). We constructed the 3D charge density difference plots to determine the spatial distribution of the charge. The charge density differences for the ground-state structure of SiO(OH)3 on the graphene plane in the neutral system, with respect to the graphene sheet (200 electrons) and the SiO(OH)3 unit (31 electrons), are shown in Figure 7b. The distribution of electrons between the substrate and absorbent matched the results of the Bader charge analysis, because an electron was transferred to the SiO(OH)3 unit. The adsorption energy for SiO(OH)3 on graphene was calculated with respect to the negative unit, and was in the range of a few electron volts. This is typical for Coulomb interactions over a few angstroms distance, caused by charge transfer. Water molecules near the graphene increase the adsorption energy because they form hydrogen bonds with SiO(OH)3, and because the oxygen atom in a water molecule becomes more stable by interacting with the depleted positive charge in the graphene layer. 3 unit physisorbed to graphene determined by Bader charge analysis and the spatial distribution of the three-dimensional charge density differences. The yellow and purple isosurfaces indicate charge gains and losses, respectively. Almost one electron is transferred to SiO(OH) 3 from the graphene layer to establish the strong bond, which is stabilized further by a water molecule, as shown by the adsorption energies (see discussion in the text). We constructed the 3D charge density difference plots to determine the spatial distribution of the charge. The charge density differences for the ground-state structure of SiO(OH) 3 on the graphene plane in the neutral system, with respect to the graphene sheet (200 electrons) and the SiO(OH) 3 unit (31 electrons), are shown in Figure 7b. The distribution of electrons between the substrate and absorbent matched the results of the Bader charge analysis, because an electron was transferred to the SiO(OH) 3 unit. The adsorption energy for SiO(OH) 3 on graphene was calculated with respect to the negative unit, and was in the range of a few electron volts. This is typical for Coulomb interactions over a few angstroms distance, caused by charge transfer. Water molecules near the graphene increase the adsorption energy because they form hydrogen bonds with SiO(OH) 3 , and because the oxygen atom in a water molecule becomes more stable by interacting with the depleted positive charge in the graphene layer. Electronic Properties of the Ground State with Condensation Reaction after Addition of a Ca Ion We assessed the changes in the hydroxyl/rGO electronic properties, caused by adding CSH. The structures with the most favorable adsorption energies were analyzed. The DOS and Bader charge distributions are shown in Figure 8. The DOS indicated that the graphene Fermi level moves above the neutrality point, i.e., into the linear part above the density of states of zero. The oxygen and calcium states are not near the Fermi level, and are more than 1 eV from the graphene neutrality point, meaning that these states are almost fully occupied and empty, respectively. Although the neutrality point can be recovered by doping, occupation of the CSH counterpart remains similar, even for the cases involving hydroxyl groups. 
The unprotonated oxygen atom has more charge than in the neutral cases discussed above, and it becomes almost equally charged as in the negatively charged systems. The Ca ion on graphene is almost unoccupied, but has a charge of 0.42 electrons because it is close to graphene. This value is almost independent of the Ca configuration, as long as the Ca ion remains close to the graphene. It seems that the Ca ion helps to form a sandwich structure of charges between the graphene layer and the silicate hydrate unit, giving very favorable adsorption energies. Figure 8. (a) Density of states and (b) charge distribution of the ground state found for rGO with hydroxyl groups, comprising a water molecule, a SiO(OH) 3 unit, and a Ca ion on the graphene plane. The Fermi level is marked by a dashed line and assigned to zero. The partial densities of states on the dangling oxygen and calcium are also included, in magenta and green, respectively. The charges are assigned to atoms using Bader charge analysis. Note that, because of the Ca ion, the graphene layer is doped negatively. The unprotonated oxygen receives more charge, and the silicate hydrate unit is deposited above the Ca ion on the graphene. Electronic Properties of the Ground-State Epoxide/rGO with the Silicate Hydrate Unit The charge density distribution of the neutral systems, determined by Bader charge analysis, and the charge density differences for the molecular orbital isosurfaces are shown in Figure 9. The Bader charge analysis, the results of which are shown in Figure 9a, indicated that there are 7.30 electrons on the unprotonated oxygen atom. The addition of an electron causes the electron density of the dangling oxygen atom to increase slightly, from 7.30 to 7.46, with 0.73 electrons transferred to the graphene plane and 0.11 electrons shared among the other atoms. Three-dimensional charge density difference plots were produced, giving more details of the distributions of electrons between the substrate and absorbents than were given by the Bader charge analysis, in order to investigate the electron distributions further. The charge density differences for the ground-state structure of SiO(OH) 3 on the graphene plane in the neutral system are shown in Figure 9b. Graphene lost charge to the silicate hydrate unit because it was far from the epoxide group. In fact, the bond between graphene and the SiO(OH) 3 unit resembles the bond in the basic bonding model described in Figures 3 and 6. In other words, the adsorption energies indicated that charge transfer from the graphene plane to SiO(OH) 3 occurs in the neutral system, giving an almost fully occupied oxygen 2p orbital. Conclusions A DFT method was used to study the mechanism involved in the interactions between hydroxyl or epoxide rGO and CSH moieties, such as the CSH gel in cement. The DFT calculations for silicate tetrahedra, Ca ions, and hydroxyl groups improve our understanding of the bonds between rGO and primary CSH moieties. The results led to the following conclusions. The interactions between hydroxyl/rGO and silicate tetrahedra can repair hydroxyl defects selectively in the rGO lattice, and cause graphene to re-form. The dissociation of defects in the graphene plane, and the formation of water, occur even in the presence of Ca ions and hydroxyl groups. In fact, the main interactions between the graphene plane and CSH gel are Coulomb interactions, caused by charge transfer. In contrast, the ground-state structure remains similar to the initial structural model for interactions between epoxide/rGO and CSH gel. Consideration of the strong interactions in this way could allow improvements to be made in the design of composite materials.
2021-09-24T15:08:12.094Z
2021-08-31T00:00:00.000
{ "year": 2021, "sha1": "cdfb715759a7fa56e18537a40775586292a63232", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-4991/11/9/2248/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8cd5c4e1edaa49a3211ffc7e2de6491ee127d59e", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
12925663
pes2o/s2orc
v3-fos-license
Pharmacokinetics of Quercetin-Loaded Methoxy Poly(ethylene glycol)-b-poly(L-lactic acid) Micelle after Oral Administration in Rats
The purpose of this study was to evaluate the potential of micelles to change the pharmacokinetics of quercetin (QUT), with a primary goal of enhancing its oral bioavailability. QUT-loaded methoxy poly(ethylene glycol)-b-poly(L-lactic acid) micelle (QUT-loaded MPEG-b-PLLA micelle) was prepared by a thin-film hydration method, resulting in a particle size of 88.5 nm. A liquid chromatography tandem-mass spectrometry (LC-MS/MS) method was developed and validated for the determination of QUT in rat plasma. The chromatographic separation was performed on an Agilent Eclipse C18 column (4.6 mm × 50 mm, 3.5 μm) with an isocratic mobile phase system consisting of water and methanol (30:70, v/v) at a flow rate of 0.4 mL/min. Calibration curves were linear over the concentration range of 2.5-2000 ng/mL for QUT. The micelle was orally administered at a single dose in rats, and the pharmacokinetic parameters were evaluated and compared with those obtained after administration of the QUT aqueous suspension. The results show that the micelle was able to increase QUT's oral bioavailability 9-fold compared to the QUT aqueous suspension. These results suggest that methoxy poly(ethylene glycol)-b-poly(L-lactic acid) is a potential carrier for the oral delivery of QUT.

Introduction
Quercetin (QUT) is a flavonoid found in onions, apples, tea, and some vegetables [1]. It has been reported to be a strong antioxidant because of its ability to scavenge free radicals, and a strong anti-inflammatory agent due to its inhibition of the proinflammatory cytokine TNF-α [2,3]. Apart from these pharmacological activities, QUT has also shown strong antiproliferative, antiviral, and neuroprotective activities [1,4]. In spite of these beneficial pharmacological properties, QUT's application in vivo is hampered by its low aqueous solubility and poor stability. The oral bioavailability of QUT is <17% in rats and ∼1% in humans [5,6]; this low bioavailability is likely a consequence of QUT's lipophilic character. QUT, which exhibits poor water solubility, low permeation, and a short biological half-life, is classified as a class IV compound according to the Biopharmaceutical Classification System [7,8]. Therefore, there is a need to improve the oral bioavailability of QUT for its further study and application. In recent years, more attention has been paid to polymeric micelles due to their ability to solve the problems of drugs associated with poor water solubility and low oral bioavailability [9-13]. Polymeric micelles formed by self-assembly of amphiphilic copolymers are considered one of the most effective strategies to render hydrophobic drugs dispersible in aqueous solution. Moreover, amphiphilic copolymers have a lower critical micelle concentration and are more stable than conventional low-molecular-weight surfactants [14]. Many studies have shown that polymeric micelles have the ability to improve the solubility, oral absorption, bioavailability, and pharmacological effect of hydrophobic drugs [10,15,16]. Methoxy poly(ethylene glycol)-b-poly(L-lactic acid) (MPEG-b-PLLA) is one such amphiphilic copolymer, and it can be used to form micelles that encapsulate many drugs [17,18]. In this study, MPEG-b-PLLA was used as a carrier to prepare QUT-loaded MPEG-b-PLLA micelle. A bioanalytical method based on LC-MS/MS was developed and validated to determine QUT in rat plasma.
The QUT-loaded MPEG-b-PLLA micelle was orally administered at a single dose in rats, and the pharmacokinetic parameters of QUT from the micelle were calculated and compared with those from the QUT aqueous suspension.

Preparation of Drug-Loaded Micelle. The QUT-loaded MPEG-b-PLLA micelle was prepared by a thin-film hydration method [19]. In brief, MPEG-b-PLLA (90 mg) and quercetin (10 mg) were codissolved in 50 mL of a solution containing methanol and chloroform (3/7, v/v). After sonication at room temperature for 0.25 h, the solvent was removed by rotary evaporation to obtain a drug/MPEG-b-PLLA matrix. The matrix was further dried under high vacuum at 37 °C overnight to form a dried solid film. The film was then hydrated with 20 mL of ultrapure water using a probe-type sonicator (Xin Zhi Biotechnology Co., Ltd., China) at 200 W for 5 min to form a solution. The resulting solution was filtered through a 0.22 μm membrane filter to remove any aggregated particles and unencapsulated drug. After filtration, the QUT-loaded MPEG-b-PLLA micelle solution was obtained and kept at 4 °C for further use.

Particle Size and Zeta Potential Measurement. The mean particle size and polydispersity index were measured by dynamic light scattering (DLS) using a Zetasizer Nano-S (Malvern Instruments) at a detection angle of 173°. The zeta potential was determined by the light scattering method using a 90Plus Particle Size Analyzer (Brookhaven Instruments Corporation). All measurements were repeated three times at a temperature of 25 °C.

Measurement of Encapsulation Efficiency (EE) and Drug Loading Content (DL). The EE and DL of the QUT-loaded MPEG-b-PLLA micelle were determined by a previously reported method [20]. A Shimadzu HPLC system equipped with an SPD-10Avp detector, an LC-10ADvp pump, and a Diamonsil C18 reversed-phase column (4.6 mm × 250 mm, 5 μm) was used for the determination of QUT. The mobile phase was composed of a mixture of acetonitrile, 10 mM ammonium acetate buffer, and methanol (32/48/20, v/v/v), pumped at a flow rate of 1.0 mL/min. The amount of QUT encapsulated in the micelle was determined directly after dissolution of the micelle in acetonitrile. After appropriate dilution in acetonitrile and filtration with a 0.22 μm membrane filter, 20 μL of the sample was injected into the HPLC system, and the detection wavelength was set at 370 nm. The EE% was estimated by comparing the weight of QUT extracted from the micelle with the QUT initially fed. The DL% was estimated by comparing the weight of QUT extracted from the micelle with the weight of the micelle.

2.5. In Vitro Drug Release Study. The release of QUT from the micelle was studied by a dialysis method as previously described [20]. Briefly, 1 mL of QUT-loaded MPEG-b-PLLA micelle solution was placed in dialysis bags with a molecular weight cut-off of 3500 Da (Snakeskin, Pierce, USA). The dialysis bag was suspended in a tube, and 30 mL of the release medium (consisting of PBS (pH 7.4) and 0.5% (v/v) Tween) was added to the tube. The tubes were placed in a shaking water bath at 37 °C and 120 rpm. At predetermined time intervals, the release medium in the tube was completely withdrawn and replaced with fresh release medium. After dilution with acetonitrile, the amount of QUT in the release medium was determined by the HPLC method described above.
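Because this dialysis protocol replaces the entire 30 mL of release medium at every sampling point, the cumulative release curve must sum the drug removed at all earlier withdrawals. A minimal Python sketch of that bookkeeping follows; the concentrations and total dose are hypothetical illustrations, not the study's data.

```python
import numpy as np

V_MEDIUM_ML = 30.0   # release medium volume per tube (from the method)
DOSE_UG = 500.0      # hypothetical total QUT placed in the dialysis bag (ug)

# Hypothetical measured QUT concentrations (ug/mL) in the withdrawn
# medium at each sampling time (h).
times_h = np.array([1, 2, 4, 8, 12, 24, 48, 96, 168])
conc_ug_ml = np.array([0.7, 0.9, 1.0, 1.0, 0.9, 1.5, 2.0, 3.0, 3.5])

# With complete medium replacement, the amount released in each interval
# is simply C_i * V; the cumulative release is the running sum.
released_ug = conc_ug_ml * V_MEDIUM_ML
cumulative_pct = np.cumsum(released_ug) / DOSE_UG * 100.0

for t, p in zip(times_h, cumulative_pct):
    print(f"{t:>4} h: {p:5.1f} % released")
```

With these invented numbers the sketch yields about 27% at 12 h and 87% at 168 h, i.e. the same shape of burst-then-sustained profile reported in the results below.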
Chromatographic System and Conditions for QUT Quantitation in Plasma. The triple quadrupole LC-MS/MS system consisted of a 1200 series HPLC system (Agilent Technologies, USA) and a mass spectrometer (6420 Triple Quad LC/MS, Agilent Technologies, USA). Chromatographic separation was achieved on an Agilent Eclipse C18 (4.6 mm × 50 mm, 3.5 μm) column at 20 °C with an isocratic mobile phase system consisting of water and methanol (30:70, v/v). The injection volume was 5 μL and the flow rate was 0.4 mL/min. QUT and the IS were both ionized by an ESI source in negative ion mode. The MS parameters were as follows: capillary, 4000 V; gas temperature, 300 °C; gas flow, 11 L/min; and nebulizer, 15 psi. Quantification was performed using multiple reaction monitoring (MRM) of the transition m/z 301.1 → 151.0 with a collision energy (CE) of 16 eV for QUT, and m/z 285.0 → 133.0 with a CE of 32 eV for the IS. The fragmentor voltage was kept at 135 and 160 V for QUT and the IS, respectively. System control and data analysis were performed with the MassHunter Workstation Software Qualitative Analysis (Version B.06.00) and Quantitative Analysis (Version B.05.02).

Preparation of Standard Solutions, Calibration, and Quality Control Samples. Stock solutions of QUT (10 μg/mL) and the IS (60 μg/mL) were prepared in methanol and stored at −20 °C away from light. Subsequently, the working standard solutions of QUT were prepared by serial dilution of the stock solutions with methanol. Calibration standard solutions were prepared daily using blank rat plasma, obtained from rats not administered any drugs, spiked with the appropriate working solution of QUT to yield concentrations of 2.5-2000 ng/mL. The quality control (QC) solutions were prepared at three concentrations of 7.5 (low), 750 (medium), and 1500 ng/mL (high) by a method similar to that used for the calibration standard solutions. All the calibration standards and QC solutions were stored at 4 °C away from light and brought to room temperature before use. The experimental data were expressed as means ± standard deviation (SD), and all the experiments were done with six parallel samples.

Plasma Sample Preparation. 50 μL of blank rat plasma, 50 μL of IS solution (2 μg/mL), and 300 μL of acetonitrile were placed in a 1.5 mL polypropylene microcentrifuge tube. The mixture was vortexed for 0.5 min and then centrifuged at 15,000 rpm for 10 min at 4 °C. 5 μL of the supernatant was injected into the LC-MS/MS system for analysis.

Bioanalytical Method Validation
2.9.1. Specificity. The specificity of the developed method was investigated by comparing chromatograms from six different rats' plasma samples with those of QC plasma samples and with samples collected from rats after administration of QUT, to check for interference from endogenous components.

Linearity and Lower Limit of Quantification (LLOQ). The linearity of the bioanalytical assay was determined from the observed peak area ratios of analyte to IS (Y) versus the spiked concentrations of analyte (X) over the concentration range of 2.5-2000 ng/mL, using at least six-point calibration curves; the acceptance criterion for a calibration curve was a correlation coefficient (r) of 0.99 or better. The LLOQ was defined as the lowest concentration of the analyte in the calibration curve with acceptable precision within 20% and accuracy of 80-120%.
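As a concrete illustration of the calibration procedure just described, the sketch below fits an unweighted least-squares line to hypothetical peak-area ratios, applies the r >= 0.99 acceptance criterion, and back-calculates an unknown concentration. The numbers are invented for illustration only.

```python
import numpy as np

# Hypothetical calibration data: spiked QUT concentrations (ng/mL)
# and measured peak-area ratios of QUT to the internal standard.
conc = np.array([2.5, 10, 50, 250, 750, 2000], dtype=float)
ratio = np.array([0.0012, 0.0045, 0.0223, 0.111, 0.332, 0.887])

# Least-squares fit of Y = slope*X + intercept; r is the correlation
# coefficient that must be >= 0.99 for the curve to be accepted.
slope, intercept = np.polyfit(conc, ratio, 1)
r = np.corrcoef(conc, ratio)[0, 1]
assert r >= 0.99, "calibration curve rejected"

def back_calc(peak_area_ratio: float) -> float:
    """Back-calculate a plasma concentration (ng/mL) from a ratio."""
    return (peak_area_ratio - intercept) / slope

print(f"Y = {slope:.6f}X + {intercept:.6f}, r = {r:.4f}")
print(back_calc(0.25))  # an unknown sample
```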
Precision and Accuracy. Intraday and interday precision and accuracy were determined by analyzing the three QC concentrations, with six replicates at each concentration, on the same day and on three consecutive days. The assay accuracy was expressed as (measured concentration/added concentration) × 100%. The intra- and interday precision was expressed as the RSD, and the accuracy was expressed as the RE.

Extraction Recoveries and Matrix Effect. The extraction recoveries of QUT were determined by comparing the peak areas from blank plasma samples spiked with QC working solutions before extraction with those from blank plasma samples spiked after extraction. The matrix effects were evaluated by comparing the peak areas of QUT from blank plasma samples spiked with QC working solutions before extraction with those of QUT spiked into the mobile phase at corresponding concentrations. The experiments were done with six parallel samples.

2.9.5. Stability. The stability of the analytes in plasma was investigated under the following conditions: long-term stability was evaluated by determining QC plasma samples stored at −20 °C for 30 days; freeze-thaw stability was determined after three freeze-thaw cycles; and short-term stability was evaluated after exposure of QC samples to room temperature for 24 h. Stability was assessed by determining QC plasma samples at the three concentration levels, with six replicates at each concentration; the analytes were considered stable when 85-115% of the initial concentrations were obtained.

Pharmacokinetic Study in Rats. Healthy male SD rats weighing from 200 to 250 g were used for this PK study. The experiments were performed according to the institutional guidelines of the University Committee on Use and Care of Animals, Guangxi Medical University. Twelve rats were randomly divided into two groups (six rats per group) for oral administration of a single dose of QUT (40 mg/kg) or QUT-loaded MPEG-b-PLLA micelle (equivalent to 40 mg/kg of QUT), respectively. The rats were fasted overnight with free access to water prior to the experiments. Blood samples (approximately 0.2 mL) were collected via the eye ground veins into microcentrifuge tubes containing 10 μL of 15% K2EDTA solution at 0 (predose), 0.083, 0.25, 0.5, 1, 2, 4, 6, 8, 10, 12, and 24 h after the dose, according to previously reported methods with some modifications [21-26]. The collected blood samples were immediately centrifuged at 5000 rpm for 10 min. The supernatant was transferred to tightly sealed plastic tubes and stored at −20 °C until analysis by LC-MS/MS.

Statistical Analysis. The pharmacokinetic parameters were calculated with the PKSolver software package (version 2.0, China Pharmaceutical University, Nanjing, China). Significant differences between group values were analyzed using a one-tailed Student's t-test. Differences were considered statistically significant at P < 0.05.
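The parameters above come from PKSolver's noncompartmental model; for readers without that package, a minimal noncompartmental sketch is given below. The data are hypothetical, and the log-linear terminal fit over the last three points is a simplification of what full NCA software does.

```python
import numpy as np

def trapz(y, x):
    """Linear trapezoidal integration (version-proof helper)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum(np.diff(x) * (y[:-1] + y[1:]) / 2.0))

# Hypothetical plasma concentration-time data after one oral dose.
t = np.array([0, 0.083, 0.25, 0.5, 1, 2, 4, 6, 8, 10, 12, 24])      # h
c = np.array([0, 40, 180, 420, 900, 1500, 1900, 1700, 1200, 800, 500, 90.0])  # ng/mL

cmax, tmax = c.max(), t[c.argmax()]                 # Cmax and Tmax
auc_0_t = trapz(c, t)                               # AUC(0-t), h*ng/mL
lam_z = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]   # terminal rate constant
auc_0_inf = auc_0_t + c[-1] / lam_z                 # extrapolated AUC(0-inf)
mrt = trapz(c * t, t) / auc_0_t                     # crude MRT = AUMC/AUC over 0-t

dose = 40e6                                         # 40 mg/kg expressed in ng/kg
cl_f = dose / auc_0_inf                             # apparent clearance CL/F, mL/h/kg
vd_f = cl_f / lam_z                                 # apparent volume Vd/F, mL/kg

print(f"Cmax={cmax} ng/mL at Tmax={tmax} h; AUC(0-inf)={auc_0_inf:.0f} h*ng/mL")
print(f"MRT~{mrt:.1f} h; CL/F={cl_f:.0f} mL/h/kg; Vd/F={vd_f:.0f} mL/kg")
```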
Preparation and Characterization of the QUT-Loaded MPEG-b-PLLA Micelle. The QUT-loaded MPEG-b-PLLA micelle was successfully obtained by a thin-film hydration method. The particle size of the micelle was 88.5 ± 2.6 nm with a polydispersity index (PDI) of 0.13 ± 0.04, as assessed by dynamic light scattering (DLS). These results indicated that the prepared micelle presented a monodisperse profile and a narrow size distribution. The drug loading content and encapsulation efficiency of the prepared micelle were determined by the HPLC method. The drug loading content of the prepared micelle was 6.1 ± 0.4% and the encapsulation efficiency was 82.5 ± 2.1%. The preparation method resulted in significant enclosure of QUT, and the process was found to be highly reproducible. The surface zeta potential of the prepared micelle was −8.72 ± 1.03 mV, indicating that the surface charge of the prepared micelle was negative. The in vitro release of QUT from the micelle was examined under simulated physiological conditions (37 °C, PBS buffer, pH 7.4). As shown in Figure 1, the QUT-loaded MPEG-b-PLLA micelle displayed slow and sustained release patterns under physiological conditions. After the initial burst release over about 12 h, the release rate of QUT slowed down to a sustained release pattern. During the first 12 h, the percentage of QUT released from the QUT-loaded MPEG-b-PLLA micelle was 26.89 ± 1.99%. This initial burst release may be attributed to QUT desorption from the particle surface. After 168 h, approximately 86.89 ± 3.02% of the total QUT was found to be released from the QUT-loaded MPEG-b-PLLA micelle. This sustained drug release profile can be attributed to QUT diffusion through the polymeric matrix and subsequent diffusion/erosion of the polymeric matrix.

Bioanalytical Method Development. An LC-MS/MS method for the determination of QUT in rat plasma was developed and validated. Luteolin has a structure similar to that of QUT, so it was used as the internal standard (IS) in the development of the LC-MS/MS method. The automatic tuning mode was used to optimize the MS conditions for detection of QUT and the IS. Both positive and negative ion modes were tested, and the results showed that QUT and the IS gave higher responses in negative ion detection mode than in positive mode. As shown in Figure 2, the ion transitions of m/z 301.0 → 151.0 for QUT and m/z 285.0 → 133.0 for the IS were used for multiple reaction monitoring (MRM). Isocratic elution was used to separate the analytes. The chromatographic peaks of QUT and the IS were sharp, and baseline separation was achieved with no interfering substances. The retention times were about 1.98 min and 2.17 min for QUT and the IS, respectively. A simple and rapid protein precipitation method with acetonitrile was utilized to extract QUT and the IS from the rat plasma samples, and the results showed that protein precipitation with acetonitrile provided a high recovery for both QUT and the IS.

Bioanalytical Method Validation
3.3.1. Specificity. The specificity of the developed method was evaluated by comparing the chromatograms of QUT and the IS in plasma with those of potentially interfering plasma components. Figure 2 shows the representative chromatograms obtained from a blank plasma sample (Figure 2(a)), plasma containing QUT (Figure 2(b)), and a plasma sample obtained 6 h after the oral administration of 40 mg/kg of QUT-loaded MPEG-b-PLLA micelle (Figure 2(c)). No interference of endogenous compounds with QUT or the IS was detected under the described chromatographic conditions. These results demonstrated the high specificity of the LC-MS/MS method developed in this study.

Linearity and LLOQ. The calibration curves were linear over the concentration range of 2.5-2000 ng/mL. A typical linear regression equation for the calibration curve was Y = 0.443466X + 0.018011 (r = 0.9990), where Y is the peak area ratio of QUT to the IS and X is the concentration of QUT (ng/mL).
The correlation coefficients (r) of the calibration curves for QUT exceeded 0.99, indicating excellent linearity over the concentration range. The lower limit of quantitation (LLOQ) was 2.5 ng/mL for QUT in rat plasma.

Precision and Accuracy. The determined concentrations of QUT in plasma at the three QC levels (7.5, 750, or 1500 ng/mL) and a summary of the intra- and interday precision and accuracy for QUT are presented in Table 1. The RSD at each QC concentration examined was <7%, while the RE of the accuracy was within 7.5%. These data were within the limits established by the FDA guidelines for the validation of bioanalytical methods, and these results indicated that the developed method is accurate and reliable.

Extraction Recovery, Matrix Effect, and Stability. The mean extraction recoveries were 89.12 ± 4.58%, 102.39 ± 3.54%, and 96.68 ± 5.09% for QUT at 7.5, 750, and 1500 ng/mL, respectively. The mean extraction recovery of the IS was 88.01 ± 5.09% at a concentration of 2 μg/mL. These results indicated that the recoveries were consistent and reproducible, and that the protein precipitation method with acetonitrile was effective for extracting QUT and the IS from rat plasma. The matrix effects were 97.96 ± 2.63%, 104.44 ± 2.72%, and 95.90 ± 3.67% at 7.5, 750, and 1500 ng/mL, respectively, for QUT, and 96.73 ± 4.71% at 2 μg/mL for the IS. All of the matrix effect results were within the acceptable limits, suggesting that no obvious coeluting endogenous matrix influenced the ionization of QUT or the IS in the plasma samples. As illustrated in Table 2, QUT was found to be stable in rat plasma under all testing conditions, including short-term storage, long-term storage, and freeze-thaw cycling.

Pharmacokinetic Study. The validated method was successfully applied to study the pharmacokinetics of QUT in rat plasma after an oral administration of QUT-loaded MPEG-b-PLLA micelle or QUT aqueous suspension at the same QUT dose of 40 mg/kg. The mean plasma concentration-time profile is illustrated in Figure 3. The relevant pharmacokinetic parameters of QUT, calculated with the PKSolver software using the noncompartmental model, are summarized in Table 3. After the oral administration of a single dose, the maximum drug concentrations (Cmax) were 628.67 ± 64.66 ng/mL and 1920.83 ± 250.14 ng/mL for the QUT aqueous suspension and the QUT-loaded MPEG-b-PLLA micelle, respectively. Compared to the QUT aqueous suspension, the Cmax of QUT from the QUT-loaded MPEG-b-PLLA micelle was increased 3.1-fold. The increase in Cmax indicates that the micelle was effective in increasing drug absorption. The Tmax of QUT was 3.0 ± 1.1 h and 7.3 ± 1.6 h in the QUT aqueous suspension treated and QUT-loaded MPEG-b-PLLA micelle treated rats, respectively. The delayed Tmax in the micelle-treated rats may be attributed to the sustained release of QUT from the QUT-loaded MPEG-b-PLLA micelle. Furthermore, the MRT results also confirm the sustained effect of the QUT-loaded MPEG-b-PLLA micelle as compared with the QUT aqueous suspension: the MRT values of the QUT-loaded MPEG-b-PLLA micelle and the QUT aqueous suspension were 20.2 ± 2.4 and 5.4 ± 0.5 h, respectively. The enhanced sustained effect of the micelle may be partly due to the prolonged circulation of the micelle in the bloodstream. The clearance (CL) of QUT from the micelle was 2-fold lower than that of free QUT. The micelle decreased the QUT volume of distribution (Vd) by 9-fold compared to free QUT.
The mean area under the curve (AUC0-∞) of the QUT-loaded MPEG-b-PLLA micelle was 41677.10 ± 4573.95 h·ng/mL, while the AUC0-∞ of the orally administered QUT aqueous suspension was 4633.71 ± 557.67 h·ng/mL, indicating that the QUT-loaded MPEG-b-PLLA micelle provided a 9-fold increase in the relative oral bioavailability of QUT as compared with the QUT aqueous suspension. Our results demonstrated that MPEG-b-PLLA could serve as a carrier to increase the oral bioavailability of QUT. The enhanced bioavailability after oral administration of QUT-loaded MPEG-b-PLLA, as compared with the suspension, is probably owing to the increased solubility, absorption, and residence time of the delivered drug.

Conclusion
In summary, QUT-loaded MPEG-b-PLLA micelle was successfully prepared by the thin-film hydration method. In addition, an analytical method for determining QUT in rat plasma was developed and validated. The validated LC-MS/MS method was applied to the pharmacokinetic study of QUT, and the results demonstrated that most pharmacokinetic parameters of QUT were changed by the QUT-loaded MPEG-b-PLLA micelle. Cmax, Tmax, t1/2, AUC, and MRT of QUT were significantly increased, while CL and Vd were decreased, by the QUT-loaded MPEG-b-PLLA micelle as compared with the QUT aqueous suspension. The QUT-loaded MPEG-b-PLLA micelle was able to increase the QUT bioavailability 9-fold compared to the QUT aqueous suspension. It can be concluded that MPEG-b-PLLA could be a promising carrier for the oral delivery of QUT.
2018-04-03T00:35:24.829Z
2017-11-06T00:00:00.000
{ "year": 2017, "sha1": "1656767aefb2c7fdcefad9465f1c54d6d3a9b886", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/bmri/2017/1750895.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bca949367d17c84eaf2dbc3b26cbd8f0e4eddfa6", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
125843950
pes2o/s2orc
v3-fos-license
Contribution of modulation spectral features on the perception of vocal-emotion using noise-vocoded speech
Previous studies on noise-vocoded speech showed that the temporal modulation cues provided by the temporal envelope play an important role in the perception of vocal emotion. However, the exact role that the temporal envelope and its modulation components play in the perceptual processing of vocal emotion is still unknown. To clarify the exact features that the temporal envelope contributes to the perception of vocal emotion, a method based on the mechanism of modulation frequency analysis in the auditory system is necessary. In this study, auditory-based modulation spectral features were used to account for the perceptual data collected from vocal-emotion recognition experiments using noise-vocoded speech. An auditory-based modulation filterbank was used to calculate the modulation spectrograms of noise-vocoded speech stimuli, and ten types of modulation spectral features were then extracted from the modulation spectrograms. The results showed that there were high similarities between the modulation spectral features and the perceptual data of the vocal-emotion recognition experiments. It was shown that the modulation spectral features are useful for accounting for the perceptual processing of vocal emotion with noise-vocoded speech.

INTRODUCTION
Speech waves are highly complex signals that transmit both linguistic information and various kinds of nonlinguistic information, such as vocal emotion. The human auditory system can ingeniously decode the emotional information included in speech signals to perceive the emotional state of speakers. Emotional expression in speech plays an important role in our daily lives; however, the perceptual processing of vocal emotion is still not fully clarified at present. Previous studies related to the perception of vocal emotion focused on the acoustic features and sound patterns of speech signals. Banse and Scherer presented speech stimuli containing 14 different emotions to listeners with normal hearing and asked them to label the emotion of each stimulus [1]. At the same time, they also extracted 29 different acoustic features (fundamental frequency (F0), intensity, speaking rate, duration, time-averaged spectrum, etc.) for each emotional speech stimulus. An emotion classification model was constructed using multiple regression analysis, which analyzed the contribution of each acoustic feature. The results of discriminant analysis on the basis of this model showed that the confusion patterns were close to those of the human responses. Huang and Akagi proposed a three-layered model with semantic primitives as a middle layer between vocal emotion and acoustic features [2]. In these previous studies, only the acoustic features based on the source-filter model (F0 and spectral envelope) and speech waveforms (intensity and duration) were investigated, regardless of what kind of model was used. In a study on vocal emotion perception by listeners with cochlear implants and its simulations, it was shown that such typical acoustic features have difficulty accounting for the responses of cochlear-implant listeners [3]. Chatterjee et al. carried out vocal-emotion recognition experiments with cochlear implant listeners and normal hearing listeners using noise-vocoded speech as a cochlear implant simulation. They then analyzed the F0, intensity, and duration of the stimuli to clarify how cochlear implant listeners process the vocal emotion information included in speech.
As cochlear implants only provide a poor spectral resolution, the acoustic features related to the spectral envelope (formants, etc.) were not used. The results showed that the acoustic analyses could not account for all of the perceptual data from the vocal-emotion recognition experiments. A probable reason is that, for cochlear implant listeners, the temporal modulation cues provided by the temporal envelope are used as primary cues; however, the typical acoustic features cannot represent the features of the temporal envelope well. The temporal envelope of sound signals has been proven to be important in the auditory system. The signal processing in the peripheral auditory system can be computationally modeled as a bandpass filterbank, envelope extraction, and amplitude compression [4,5]. Furthermore, Dau et al. proposed a computational model of human auditory signal processing and perception using a modulation filterbank after the stage of temporal envelope extraction [6,7]. There is both physiological [8] and psychological [9] evidence suggesting the existence of a modulation filterbank in the auditory system. The auditory system thus has a modulation frequency analyzer which analyzes the modulation frequency components of the temporal envelope. On the other hand, Wu et al. proposed an automatic speech emotion recognition system using an auditory-based modulation analysis to extract the modulation spectral features of emotional speech [10]. The results showed that the modulation spectral features can represent the features of the temporal envelope related to vocal emotion better than the typical acoustic features. In our previous study, we investigated the contribution of temporal modulation cues to the perception of nonlinguistic information using noise-vocoded speech [11]. The results showed that the temporal modulation cues play an important role in the perception of vocal emotion. However, the exact role that the temporal envelope plays in the perceptual processing of vocal emotion is still unknown. As it has no harmonic structure, noise-vocoded speech does not contain the temporal fine structure of the original speech, that is, the information related to F0. The intensity of the noise-vocoded speech stimuli in the experiments was also normalized. Therefore, similar to the results in [3], the typical acoustic features cannot be used to account for the perceptual data collected from the experiments using noise-vocoded speech. An analysis based on the modulation frequency analysis mechanism of the auditory system is necessary. It has been shown that auditory-based modulation spectral features have the potential to account for the perceptual data of vocal-emotion recognition experiments [12]. However, the specific relationship between the modulation spectral features and the perception of vocal emotion is still unknown. In this study, the relationship between the modulation spectral features and the perceptual data of the vocal-emotion recognition experiments in [11] was investigated to clarify the contribution of the modulation spectral features to the perception of vocal emotion. An auditory-based modulation filterbank was used to calculate the modulation spectrograms of the temporal envelope of noise-vocoded speech. Then, ten types of modulation spectral features extracted from the modulation spectrograms were analyzed.
Finally, the modulation spectral features and the perceptual data of vocal-emotion recognition were compared to investigate the contribution of temporal modulation cues to the perception of vocal emotion with noise-vocoded speech. The originality of this study is that we considered the problem of vocal emotion perception from an auditory viewpoint, using auditory-based modulation spectral features, rather than from a speech-production viewpoint using typical acoustic features. This paper is organized as follows: Section 2 analyzes the perceptual data of the vocal-emotion recognition experiments in [11]. Section 3 introduces the method for calculating the modulation spectral features from the noise-vocoded speech stimuli. Section 4 discusses the relationship between the modulation spectral features and the perceptual data. Section 5 summarizes the results and the discussion.

PERCEPTUAL DATA OF VOCAL-EMOTION RECOGNITION EXPERIMENTS
In our previous study, in order to study the contribution of temporal modulation cues to vocal-emotion recognition, we varied the spectral and temporal resolution of noise-vocoded speech stimuli presented to normal hearing listeners. The detailed method of signal processing to generate noise-vocoded speech can be found in [11]. The Fujitsu Japanese Emotional Speech Database was used. This database includes five emotions (neutral, joy, cold anger, sadness, and hot anger) expressed by a professional actress. The spectral resolution of the noise-vocoded speech stimuli was manipulated by varying the number of channels from 4 to 16. The temporal resolution was manipulated by varying the upper limit of the modulation frequency from 0 to 64 Hz. The results demonstrated that the vocal-emotion recognition rates significantly decreased as the upper limit of the modulation frequency decreased. Therefore, it was confirmed that the temporal modulation cues provided by the temporal envelope (in other words, the information contained in the modulation frequency band below 64 Hz) contribute to the perception of vocal emotion. To clarify the exact features that the temporal envelope contributes to the perception of vocal emotion, the results for the condition with a 64-Hz upper limit of the modulation frequency and 4-channel noise-vocoded speech were used as the perceptual data of the vocal-emotion recognition experiments. In this condition, the noise-vocoded speech stimuli contain all the information in the modulation frequency band below 64 Hz. Furthermore, the spectral cues were mostly reduced because we wanted to focus on the temporal modulation cues. Figure 1 shows the vocal-emotion recognition rates of the perceptual data used in this study. The results showed that joy was the most difficult emotion to recognize, with a mean recognition rate close to the chance level (20%). On the contrary, the recognition rates of sadness and hot anger were higher than those of the other emotions. The recognition rates of the neutral emotion and cold anger were in the middle of the other three emotions; however, the recognition rate of cold anger was much lower than that of neutral. To better understand the perceptual data, the discriminability index (d′_P) of each emotion was calculated from the mean confusion matrix of the perceptual data (Table 1). The d′_P values shown in Fig. 1 were based on the hit rates and false alarm rates derived from the confusion matrix, as follows:

d′_P = Z(H) − Z(F),

where H and F are the hit rate and false alarm rate, and Z(·) is the inverse of the normal distribution function.
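A minimal sketch of this d′ computation, with hypothetical hit and false-alarm rates; scipy.stats.norm.ppf serves as the inverse normal distribution function Z:

```python
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """d'_P = Z(H) - Z(F), with Z the inverse normal distribution function."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical rates derived from a confusion matrix.
print(d_prime(0.80, 0.10))  # high hits, low false alarms -> large d'
print(d_prime(0.45, 0.30))  # low hits, high false alarms -> small d'
```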
Generally, high d′_P values are derived from high hit rates and low false alarm rates. Because of their relatively higher hit rates and lower false alarm rates, the d′_P values of sadness and hot anger were much higher than those of the other emotions, as seen in the results for the recognition rates. The d′_P value of cold anger was lowest, due to its low hit rate and high false alarm rate. The hit rate of joy was the lowest; however, as joy had a low false alarm rate, its d′_P value was higher than that of cold anger. For the neutral emotion, the high hit rate and high false alarm rate led to a low d′_P value. For the perception of vocal emotion with noise-vocoded speech, the temporal modulation cues provided by the temporal envelope were used as primary cues. Therefore, in the next section, the modulation spectral features extracted directly from the modulation spectrograms of the temporal envelope are used to account for the perceptual data.

Modulation Spectrogram
Figure 2 shows the auditory-based process used in this study to calculate the modulation spectrograms. The emotional speech signal s(n) was first band-pass filtered using an auditory-based band-pass filterbank as follows:

s_k(n) = s(n) ∗ h_k(n),

where ∗ denotes the convolution operation, h_k(n) is the impulse response of the kth channel, and n is the sample number in the time domain. The bandwidths and boundary frequencies of the band-pass filters (6th-order Butterworth infinite impulse response (IIR) filters) were defined using the ERB_N (equivalent rectangular bandwidth) and ERB_N-number scales [13]. The boundary frequencies of the band-pass filters were defined as 3 to 35 ERB_N-number with an 8 ERB_N bandwidth, and the number of channels was 4. The temporal envelope of the output signal from each band-pass filter, s_k(n), was extracted using the Hilbert transformation, and a low-pass filter (2nd-order Butterworth IIR filter, cut-off frequency: 64 Hz) was applied as follows:

e_k(n) = |s_k(n) + jH[s_k(n)]| ∗ g(n),

where H denotes the Hilbert transform and g(n) denotes the impulse response of the low-pass filter. The signal processing methods of the bandpass filterbank and temporal envelope extraction were the same as the methods used in [11].

Fig. 1 The results of the vocal-emotion recognition experiment in [11] under the condition that the upper limit of the modulation frequency was 64 Hz and the number of channels was 4.

The next step involved decomposing e_k(n) into several modulation frequency bands by using a modulation filterbank:

E_{k,m}(n) = [e_k(n) − ē_k] ∗ f_m(n),

where m is the channel number of the modulation filter, f_m(n) is the impulse response of the modulation filterbank, and ē_k is the time-averaged amplitude of e_k(n). The 0 Hz component was removed because we focused only on the dynamic components of the temporal envelope. The modulation filterbank consisted of six filters (one low-pass filter and five band-pass filters). The boundary frequencies of the filters were spaced in octave frequency bands from 2 to 64 Hz. Figure 3 shows the frequency responses of the modulation filterbank. Finally, the root mean square of E_{k,m}(n) was calculated as the modulation spectrogram:

E_{k,m} = sqrt( (1/N) Σ_{n=1}^{N} E_{k,m}(n)² ),

where N is the length of the speech signal s(n). E_{k,m} was then used to calculate the modulation spectral features.
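A compact Python sketch of this processing chain follows, under stated assumptions: a 22.05 kHz sampling rate, the standard Glasberg-Moore ERB_N-number-to-Hz conversion, and assumed orders for the modulation filters, none of which are fixed by the excerpt.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

FS = 22050  # Hz; assumed sampling rate (not stated in this excerpt)

def erb_number_to_hz(erbn):
    """Glasberg & Moore ERB_N-number-to-Hz conversion."""
    return (10.0 ** (erbn / 21.4) - 1.0) * 1000.0 / 4.37

def modulation_spectrogram(x, fs=FS):
    """Acoustic band-pass filterbank -> Hilbert envelope + 64-Hz low-pass
    -> 6-channel modulation filterbank -> per-channel RMS (E_{k,m})."""
    # 4 acoustic bands: edges at 3, 11, 19, 27, 35 ERB_N-number
    # (3-35 ERB_N-number with an 8-ERB_N bandwidth, as in the text).
    erb_edges = [erb_number_to_hz(e) for e in (3, 11, 19, 27, 35)]
    env_lp = butter(2, 64, btype="low", fs=fs, output="sos")
    # Modulation filterbank: one low-pass below 2 Hz plus five octave
    # band-pass filters with edges 2-4-8-16-32-64 Hz (orders assumed).
    mod_edges = [2, 4, 8, 16, 32, 64]
    mod_bank = [butter(2, 2, btype="low", fs=fs, output="sos")]
    mod_bank += [butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
                 for lo, hi in zip(mod_edges[:-1], mod_edges[1:])]

    E = np.zeros((len(erb_edges) - 1, len(mod_bank)))
    for k in range(len(erb_edges) - 1):
        # Order-3 band-pass sections give a 6th-order filter overall.
        sos = butter(3, [erb_edges[k], erb_edges[k + 1]],
                     btype="bandpass", fs=fs, output="sos")
        sk = sosfilt(sos, x)
        ek = sosfilt(env_lp, np.abs(hilbert(sk)))  # temporal envelope e_k(n)
        ek = ek - ek.mean()                        # remove the 0-Hz component
        for m, msos in enumerate(mod_bank):
            E[k, m] = np.sqrt(np.mean(sosfilt(msos, ek) ** 2))
    return E  # shape: (4 acoustic bands, 6 modulation channels)
```

The SOS (second-order sections) filter form is used here simply because the very low normalized modulation frequencies make transfer-function coefficients numerically fragile.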
Figure 4 shows examples of the modulation spectrograms of speech with the five different emotions from the Fujitsu database. The results show that each emotion had different characteristics in its modulation spectrogram. The modulation spectrogram for sadness speech had significantly more low acoustic and modulation frequency energy. Conversely, the modulation spectrogram of hot anger speech had more high acoustic and modulation frequency energy. These results are likely related to the findings in [14,15] that sadness speech has lower high-frequency energy and a lower speech rate, while anger speech has higher high-frequency energy and a higher speech rate. They are also consistent with the perceptual data showing that sadness and hot anger had relatively higher d′_P values. However, it is difficult to connect the modulation spectrograms directly to the perceptual data. Therefore, to quantitatively investigate the contributions of the modulation spectrogram to the perception of vocal emotion, the modulation spectral features extracted from the modulation spectrograms were analyzed next.

Modulation Spectral Features
Two kinds of modulation spectral features were calculated by analyzing the modulation spectrograms in the acoustic frequency and modulation frequency domains. In the acoustic frequency domain, the first feature was the modulation spectral centroid (MSCR_m), which is defined as follows:

MSCR_m = Σ_{k=1}^{K} k E_{k,m} / Σ_{k=1}^{K} E_{k,m},

where K is the number of acoustic frequency bands, which is 4. The MSCR_m indicates the center of the spectral balance across the acoustic frequency bands (k). The modulation spectral spread (MSSP_m) was then calculated by:

MSSP_m = sqrt( Σ_{k=1}^{K} (k − MSCR_m)² E_{k,m} / Σ_{k=1}^{K} E_{k,m} ).

The MSSP_m represents the spread of the spectrum around its MSCR_m as the 2nd-order moment. Two other higher-order features, the modulation spectral skewness (MSSK_m) and kurtosis (MSKT_m), were also calculated. The MSSK_m describes the degree of asymmetry of the modulation spectrogram and was calculated from the 3rd-order moment:

MSSK_m = [ Σ_{k=1}^{K} (k − MSCR_m)³ E_{k,m} / Σ_{k=1}^{K} E_{k,m} ] / MSSP_m³.

The MSKT_m gives a measure of the peakedness of the modulation spectrogram and was calculated from the 4th-order moment:

MSKT_m = [ Σ_{k=1}^{K} (k − MSCR_m)⁴ E_{k,m} / Σ_{k=1}^{K} E_{k,m} ] / MSSP_m⁴.

In the modulation frequency domain, the first feature was the MSCR_k, the barycenter of the modulation spectrum in each acoustic frequency band. Unlike the MSCR_m, which is calculated across the acoustic frequency bands (k), the MSCR_k is calculated across the modulation frequency bands (m):

MSCR_k = Σ_{m=1}^{M} m E_{k,m} / Σ_{m=1}^{M} E_{k,m}.

The other three higher-order features of the modulation spectrograms in the modulation frequency domain (MSSP_k, MSSK_k, and MSKT_k) were calculated analogously to the equations above, with the sums taken over m = 1, ..., M, where M is the number of channels in the modulation filterbank, which is six. Figure 5 shows an example of calculating the modulation spectral centroid in the acoustic frequency domain (MSCR_m) and the modulation frequency domain (MSCR_k). For the modulation spectral features in the acoustic frequency domain (features with subscript m), the modulation frequency channel is fixed and the features are calculated along the acoustic frequency axis. Conversely, for the modulation spectral features in the modulation frequency domain (features with subscript k), the acoustic frequency channel is fixed and the features are calculated along the modulation frequency axis. The last two modulation spectral features in the acoustic frequency and modulation frequency domains were the modulation spectral tilts (MSTL_m and MSTL_k), which are the linear regression coefficients obtained by fitting a first-degree polynomial to the modulation spectrograms.
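A small sketch of these weighted-moment features for the acoustic-frequency domain, operating on an E_{k,m} matrix such as the one produced above. The formulas follow the reconstructions given here, so they should be read as one interpretation rather than the authors' exact code.

```python
import numpy as np

def modulation_spectral_features(E):
    """MSCR/MSSP/MSSK/MSKT/MSTL along the acoustic-frequency axis for
    each modulation channel m. E has shape (K, M)."""
    K, M = E.shape
    k = np.arange(1, K + 1, dtype=float)[:, None]        # band indices, column
    w = E / E.sum(axis=0, keepdims=True)                 # normalized weights
    mscr = (k * w).sum(axis=0)                           # centroid, per m
    mssp = np.sqrt((((k - mscr) ** 2) * w).sum(axis=0))  # spread
    mssk = (((k - mscr) ** 3) * w).sum(axis=0) / mssp ** 3  # skewness
    mskt = (((k - mscr) ** 4) * w).sum(axis=0) / mssp ** 4  # kurtosis
    # Tilt: slope of a first-degree polynomial fit over the band index.
    mstl = np.array([np.polyfit(k[:, 0], E[:, m], 1)[0] for m in range(M)])
    return mscr, mssp, mssk, mskt, mstl

# The k-subscript (modulation-frequency-domain) features follow by
# applying the same function to E.T.
E = np.abs(np.random.default_rng(0).normal(size=(4, 6)))  # toy spectrogram
print(modulation_spectral_features(E)[0])
```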
Finally, to investigate the relationship between the modulation spectral features and the perceptual data, the discriminability indices of the modulation spectral features (d′_MSF) were also calculated, by the following equation:

d′_MSF(i, j) = |μ_i − μ_j| / sqrt( (σ_i² + σ_j²)/2 ),

where μ and σ² are the mean value and variance of a modulation spectral feature (taken across the 10 utterances of each emotion) and i and j index a pair of emotions. The mean value of all the d′_MSF values for each emotion was computed as an approximate measure of the net discriminability of the modulation spectral features (see Table 2). This mean d′_MSF value represents the mean distance of a modulation spectral feature between different emotions.

Similarities between the Perceptual Data and Modulation Spectral Features
The centered cosine similarities between d′_P (Fig. 1) and the mean d′_MSF were calculated to investigate the relationship between the modulation spectral features and the perception of vocal emotion with noise-vocoded speech. The similarity was defined as follows:

Similarity = Σ_em (d′_P(em) − mean(d′_P)) (d′_MSF(em) − mean(d′_MSF)) / [ sqrt(Σ_em (d′_P(em) − mean(d′_P))²) · sqrt(Σ_em (d′_MSF(em) − mean(d′_MSF))²) ],

where em is the emotion, which could be neutral, joy, cold anger, sadness, or hot anger. Tables 3 and 4 show the results of the similarities of the modulation spectral features in the acoustic frequency and modulation frequency domains, respectively. Figure 6 shows the highest similarity of each modulation spectral feature (taken across all the acoustic frequency or modulation frequency channels). The results showed that there were high similarities between the modulation spectral features and the perceptual data. For some modulation spectral features, the similarities were close to 1. These results suggest that the modulation spectral features are useful in accounting for the perceptual data of vocal-emotion recognition experiments using noise-vocoded speech.
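A minimal sketch of the two quantities just defined, the pairwise d′_MSF and the centered cosine similarity over the five emotion profiles. The numbers are hypothetical, and the d_prime_feature function encodes the pooled-variance form reconstructed above.

```python
import numpy as np

def d_prime_feature(mu_i, var_i, mu_j, var_j):
    """Reconstructed d'_MSF between two emotions for one feature."""
    return abs(mu_i - mu_j) / np.sqrt((var_i + var_j) / 2.0)

def centered_cosine(a, b):
    """Centered cosine similarity over the five emotions (equivalent to
    the Pearson correlation of the two 5-element profiles)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a, b = a - a.mean(), b - b.mean()
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical per-emotion discriminability profiles
# (neutral, joy, cold anger, sadness, hot anger).
d_P   = [1.1, 0.9, 0.6, 2.0, 1.9]   # perceptual d' per emotion
d_MSF = [0.8, 0.7, 0.5, 1.7, 1.6]   # mean feature d' per emotion
print(centered_cosine(d_P, d_MSF))  # close to 1 -> the profiles agree
```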
DISCUSSION
The d′_P values of the perceptual data obtained from the vocal-emotion recognition experiments represent the psychological distance between the emotions for the participants. The d′_MSF values of the modulation spectral features represent the physical distance of the modulation spectral features between the different emotions.

Table 3. The similarities between the modulation spectral features in the acoustic frequency domain and the perceptual data.

The probability distribution functions (PDFs) of the modulation spectral features with the highest similarity shown in Fig. 6 were estimated in order to examine the reason for the high similarity between the modulation spectral features and the perceptual data (Fig. 7). Figure 7(a) shows that the MSCR_m for hot anger speech was highest, and the MSCR_m of sadness speech was lowest, in the 4th modulation frequency channel. In addition, the distributions of the other emotions (neutral, joy, and cold anger) overlapped. A similar phenomenon also appeared in the distribution of MSTL_m. The reason for this is that hot anger speech had more high-acoustic-frequency energy, while sadness speech had more low-acoustic-frequency energy. The distributions of neutral, joy and cold anger speech in the acoustic frequency domain were similar. These results were consistent with the perceptual data, in which the sadness and hot anger stimuli had higher d′_P values and the d′_P values of the other emotions were much lower. On the contrary, for the other high-order features MSSP_m (Fig. 7(b)), MSSK_m (Fig. 7(c)), and MSKT_m (Fig. 7(d)) in the acoustic frequency domain, the PDFs of hot anger speech were lowest and the PDFs of sadness speech were highest. The high-order features in the modulation frequency domain, MSSP_k (Fig. 7(g)) and MSKT_k (Fig. 7(i)), also showed a similar trend. These results showed that the spread and peakedness of sadness speech in both the acoustic frequency and modulation frequency domains were higher than those of the other emotions. Moreover, the PDFs of joy and hot anger speech overlapped, which is consistent with the confusion matrix results (Table 1) showing that nearly 35% of the joy stimuli were recognized as hot anger. It was also shown that the similarities of the modulation spectral features in the acoustic frequency domain (Table 3) were much higher in the 4th and 5th modulation frequency channels (from 8 to 32 Hz). Similar to the results in [11], the high-modulation-frequency band was shown to be more important to the perception of vocal emotion with noise-vocoded speech. The high-modulation-frequency components are related to auditory roughness, which should affect the speech quality of noise-vocoded speech. The high-modulation-frequency components should also affect the modulation spectral features in the modulation frequency domain. Hot anger speech had many more high-modulation-frequency components, which resulted in a higher MSCR_k. On the contrary, the MSCR_k of sadness speech should be lower because sadness speech had many fewer high-modulation-frequency components. The degree of asymmetry (MSSK_k) of sadness speech should be higher than that of hot anger speech, as the modulation spectrogram for sadness speech was centered in the low-modulation-frequency band. The shapes of the modulation spectrograms of the other three emotions in the modulation frequency domain were similar. To summarize the results: hot anger speech has more high acoustic frequency and modulation frequency components; sadness speech has fewer high acoustic frequency and modulation frequency components; and, relative to hot anger and sadness speech, the distributions of the modulation spectrograms for neutral, joy, and cold anger speech are similar. These physical characteristics are consistent with the perceptual data, which showed that sadness and hot anger stimuli had higher d′_MSF values while the d′_MSF values of neutral, joy, and cold anger stimuli were much lower. Therefore, there were high similarities between the modulation spectral features and the perceptual data. The modulation spectral features have thus been shown to be useful in accounting for the perception of vocal emotion. In this study, the modulation spectral features of time-averaged modulation spectrograms were analyzed. The modulation spectrograms are 4-dimensional data containing information on acoustic frequency, modulation frequency, amplitude, and time. It is necessary to analyze the details of the modulation spectrograms in the time domain. However, as the modulation spectrograms are 4-dimensional data, it would be difficult to extract the features related to nonlinguistic information from them. Deep learning may be a good solution for analyzing the modulation spectrogram in the time domain. The modulation spectral features should ultimately derive from the human vocal organs. It is therefore also necessary to connect the auditory-based modulation spectral features to the mechanism of speech production, in order to investigate the relationship between the modulation spectral features and the perception of not only noise-vocoded speech but also normal speech.

SUMMARY
In this study, the relationship between the auditory-based modulation spectral features and the perceptual data of vocal-emotion experiments using noise-vocoded speech was investigated to clarify the exact features that the temporal envelope contributes to the perception of vocal emotion.
The discriminability indices (d′_P and d′_MSF) of each emotion were calculated from the modulation spectral features and the mean confusion matrix of the perceptual data. It was shown that, for both the modulation spectral features and the perceptual data, the d′_P and d′_MSF values of sadness and hot anger speech were higher than those of neutral, joy, and cold anger speech. These results led to high similarities between the modulation spectral features and the perceptual data. This suggests that the modulation spectral features play an important role in the perception of vocal emotion with noise-vocoded speech. The modulation spectral features have been shown to be useful in accounting for the perceptual processing of the temporal modulation cues provided by the temporal envelope of speech.
2019-04-22T13:12:51.372Z
2018-11-01T00:00:00.000
{ "year": 2018, "sha1": "11e071427a6acf86c7674442dc068b3179dc25e2", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/ast/39/6/39_E1808/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "ee195bf672a67e226734d0f5a847142ba4138e56", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Psychology" ] }
202716583
pes2o/s2orc
v3-fos-license
Arbuscular mycorrhizal fungi alter the food utilization, growth, development and reproduction of armyworm (Mythimna separata) fed on Bacillus thuringiensis maize
Background The cultivation of Bt maize (maize genetically modified with Bacillus thuringiensis) continues to expand globally. Arbuscular mycorrhizal fungi (AMF), an important kind of microorganism closely related to soil fertility and plant nutrition, may influence the ecological risk posed by target lepidopteran pests in Bt crops. Methods In this study, transgenic Bt maize (Line IE09S034 with Cry1Ie vs. its parental line of non-Bt maize cv. Xianyu335) was inoculated with a species of AMF, Glomus caledonium (GC). Its effects on the food utilization, reproduction and development of the armyworm, Mythimna separata, were studied in a potted experiment from 2017 to 2018. Results GC inoculation increased the AMF colonization of both modified and non-modified maize, and also increased the grain weight per plant and 1,000-grain weight of modified and non-modified maize. However, the cultivation of Bt maize did not significantly affect the AMF colonization. The feeding of M. separata with Bt maize resulted in a notable decrease in the RCR (relative consumption rate), RGR (relative growth rate), AD (approximate digestibility), ECD (efficiency of conversion of digested food) and ECI (efficiency of conversion of ingested food) parameters in comparison to those observed in larvae fed non-Bt maize in 2017 and 2018, regardless of GC inoculation. Furthermore, remarkable prolongation of the larval life span and decreases in the rate of pupation, pupal weight, rate of eclosion, fecundity and adult longevity of M. separata were observed in the Bt treatment, regardless of GC inoculation, during the two-year experiment. Also, when M. separata was fed Bt maize, a significant prolongation of larval life and significant decreases in the pupal weight, fecundity and adult longevity of M. separata were observed under GC inoculation; however, the opposite was observed for larvae fed non-Bt maize inoculated with GC. The percentage increase in larval life span, and the percentage decreases in food utilization and the other indexes of reproduction, growth, and development, of M. separata fed on Bt maize relative to non-Bt maize were all visibly lower under GC inoculation than in the CK. Discussion It is presumed that Bt maize has a marked adverse impact on M. separata development, reproduction and feeding, especially in combination with GC inoculation. Additionally, GC inoculation favors the effectiveness of Bt maize against M. separata larvae by reducing their food utilization ability, which negatively affects the development and reproduction of the armyworm. Thus, Bt maize inoculated with AMF (here, GC) can reduce the severe threats arising from armyworms, and hence AMF inoculation may play an important ecological function in the Bt maize field ecosystem, with potentially high control efficiency against target lepidopteran pests.
Arbuscular mycorrhizal fungi (AMF) form symbiotic relationships with plant roots, improving the uptake of water and nutrients, accelerating plant growth, and helping to build soil structure and function (Smith & Read, 2008). Correspondingly, AMF require an adequate plant host.
The fungi obtain carbon from plants and, in return, provide nitrogen, phosphorus and other nutrients, and can improve soil stability and resistance to disease (Singh, Singh & Tripathi, 2012; Steinkellner et al., 2012). Plants associated with AMF may alter their interactions with insects, pollinators or herbivores, and this will affect plant health (Vannette & Rasmann, 2012; Koricheva & Jones, 2009; Wolfe, Husband & Klironomos, 2005). AMF colonization often affects insect herbivores (Koricheva & Jones, 2009), as AMF influence defense chemicals, nutrient contents, and plant biomass (Bennett, Bever & Bowers, 2009). For instance, the interaction with mycorrhizal fungi may give the plant resources against herbivores, but it may instead make the plant a better food source (Vannette & Hunter, 2011). Food nutrition is an important indicator of insect selection behavior and food competition behavior. The choice insects make among different foods is related to the efficiency with which they utilize food, and different foods directly affect the growth and development of insects and the efficiency of food utilization (Schmidt et al., 2012). Hence, Waldbauer (1968) suggested using the RCR (relative consumption rate), RGR (relative growth rate), AD (approximate digestibility), ECD (efficiency of conversion of digested food) and ECI (efficiency of conversion of ingested food) as nutritional indicators to measure the efficiency of food digestion. Moreover, previous research has shown that nitrogen is the most active nutrient element in the crop growth process and is the main constituent of Bt protein. Plant nitrogen uptake and nitrogen metabolism levels can change the carbon-nitrogen ratio in plant tissues and can also affect the production of Bt toxins (Jiang et al., 2013; Gao et al., 2009). The presence of Bt toxin also affects an insect's feeding efficiency, growth and reproduction (Li, Parajulee & Chen, 2018). AMF can enhance plant absorption and utilization of soil nutrients (mainly N and P). Thus, the effect of AMF on Bt maize and its target lepidopteran pests has naturally become an interesting and significant research priority. Some studies have focused on the influence of Bt on AMF colonization, AMF community diversity, and soil ecology (Zeng et al., 2014; Cheeke, Cruzan & Rosenstiel, 2013). However, the effects of AMF on the resistance of Bt crops against target lepidopteran pests have not been explored in previous reports. In this work, we studied the indirect effects of AMF on the food utilization, growth, and development of the armyworm M. separata feeding on transgenic Bt maize, and the direct influence of AMF on the yields of Bt and non-Bt maize. We expect that this work will help reduce the risk of resistance to Bt crops and ultimately contribute to the sustainable and ecological usage of Bt crops.

Plant materials and AMF inoculation
A two-year study (2017-2018) was conducted in Ningjin County, Shandong Province of China (37°38′30.7″N, 116°51′11.0″E). The Institute of Crop Sciences, Chinese Academy of Agricultural Sciences provided us with the transgenic Bt maize cultivar (Line IE09S034 with Cry1Ie, Bt) and its non-Bt parental line (cv. Xianyu 335, Xy). Glomus caledonium (strain number 90036, referred to as GC) was provided by the State Key Laboratory of Soil & Sustainable Agriculture, Institute of Soil Science, Chinese Academy of Sciences. The inoculum consisted of spores, mycelium, maize root fragments, and soil.
Both genetically modified and non-modified maize were grown in plastic buckets (45 cm height, 30 cm diameter) with 20 kg of soil sterilized in an autoclave; 300 g of GC inoculum (GC inoculation treatment, abbreviated GC) or 300 g of sterilized inoculum (control group, abbreviated CK) was evenly spread four cm below the maize seeds on June 10 of each sampling year. The whole experiment involved four treatments: two maize cultivars (Bt and Xy) and two AMF inoculations (GC and CK). Each bucket served as one replication, with 15 replications for each treatment. So, there were 15 buckets for each maize cultivar × AMF inoculation treatment, and a total of 60 buckets in this study. In each bucket, three maize seeds were sown at a depth of two cm. During the whole experimental period, no pesticides were applied, and manual weeding was performed to keep the maize buckets free of weeds.

AMF colonization
AMF colonization was determined on July 3 (seedling stage), August 25 (heading stage) and September 23 (harvest stage) in the two sampling years. It was determined by the method of trypan blue staining and grid counting (Phillips & Hayman, 1970). The fresh plant roots were washed with distilled water and then blotted dry with absorbent paper. One hundred 1-cm root segments were randomly cut and placed in a 10% KOH solution at 30 °C for 30 min; the KOH was then discarded and the segments were rinsed with distilled water. After acidification in 2% HCl for 60 min, the HCl was discarded, and the segments were rinsed with distilled water and stained in 5% trypan blue dye solution (w/v, lactic acid:glycerol:water = 1:1:1). Then the dye solution was discarded, and the roots were rinsed with distilled water and transferred to a square dish with a grid at the bottom. The numbers of infected and uninfected root segments were counted under the microscope. Colonization (%) = number of infected root segments/total root segments (McGonigle et al., 1990).

Insect rearing
The colony of the armyworm M. separata originated from a population collected in maize fields in Kangbao County, Hebei Province of China (41.87°N, 114.6°E) in the summer of 2014. The insects were reared on an artificial diet (Bi, 1981) for more than 15 generations in climate-controlled growth chambers (GDN-400D-4; Ningbo Southeast Instrument Co., Ltd., Ningbo, China) at 26 ± 1 °C, 65 ± 5% RH, and a 14:10 h L/D photoperiod. The same rearing conditions were maintained for the subsequent experiments. Newly hatched first instar larvae were randomly selected from the above colony of M. separata and fed on the same artificial diet until the second instar larval stage; the third instar M. separata larvae were then individually fed on excised leaves of the sampled maize plants. Experimental maize leaves were randomly chosen from 10 buckets of each maize cultivar × AMF inoculation treatment beginning August 22 (heading stage). The feeding trials were conducted in plastic dishes (six cm in diameter and 1.6 cm in height), with fresh maize leaves replaced every 24 h until M. separata pupation, in 2017 and 2018. Each maize cultivar × AMF inoculation treatment consisted of five replicates (30 larvae per replicate).

Food utilization of M. separata larvae
The initial weights of the tested third instar larvae of M. separata were individually determined with an electronic balance (AL104; METTLER-TOLEDO, Greifensee, Switzerland). The weights of the total feces produced from the third instar until pupation (sixth instar), the pupal weight, and the residual leaves were also carefully measured.
At the same time, the moisture content of the third instar larvae, the sixth instar larvae and the maize leaves replaced each time was determined, in order to calculate the dry weights of the tested larvae and the maize leaves during the experiment. Several food utilization indexes of M. separata larvae fed on the excised leaves of Bt and non-Bt maize inoculated with and without the AMF G. caledonium were determined. The indexes included RCR (relative consumption rate), RGR (relative growth rate), AD (approximate digestibility), ECD (efficiency of conversion of digested food) and ECI (efficiency of conversion of ingested food) (Li, Parajulee & Chen, 2018). The index calculations were done with formulas adapted from Chen et al. (2005): RCR = I/(B × T); RGR = G/(B × T); AD (%) = (I − F)/I × 100; ECD (%) = G/(I − F) × 100; ECI (%) = G/I × 100, where I is the feeding amount (the dry weight of maize leaves before feeding minus that after feeding); B is the average larval weight during the experiment (the mean of the larval dry weights before and after feeding); T is the experiment time (d); G is the added larval weight (the larval dry weight after feeding minus that before feeding); and F is the dry weight of the total feces. Growth & development and reproduction of M. separata Larval growth and development were evaluated from the third instar to pupation by observing each petri dish every 8 h and recording the timing of larval ecdysis, pupation, and emergence of M. separata moths fed on the excised leaves of Bt and non-Bt maize inoculated with and without G. caledonium. After eclosion, newly emerged moths were paired at a female:male ratio of 1:1 in a metal screen cage and fed with cotton balls soaked in 10% honey solution; the cage was covered with cotton net yarn and butter paper for oviposition, which were replaced every day. Survivorship and oviposition were recorded on a daily basis until death. Yield of Bt and non-Bt maize On September 25 of 2017 and 2018, eight maize plants were randomly taken from five pots of each maize cultivar × AMF inoculation treatment at the harvest stage to measure the grain weight per plant (g) and the 1,000-grain weight (g) with an electronic balance (AL104; METTLER-TOLEDO, Greifensee, Switzerland), in order to ascertain the effects of AMF inoculation on the yield of Bt and non-Bt maize. Data analysis All experimental data were analysed with IBM SPSS v20.0 (IBM, Armonk, NY, USA). Three-way repeated-measures ANOVA was used to study the impacts of transgenic treatment (Bt maize vs. non-Bt maize), AMF inoculation (GC vs. CK), sampling year (2017 vs. 2018), and their two- and three-way interactions on AMF colonization. Moreover, three-way ANOVA was used to analyze the effects of transgenic treatment, AMF inoculation, sampling year, and their two- and three-way interactions on the measured indexes of growth, development, reproduction and food utilization of M. separata, and on the yield of Bt and non-Bt maize inoculated with and without GC in 2017 and 2018. Finally, means were separated using Tukey's test to examine significant differences between/among treatments at P < 0.05.
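To make the index definitions above concrete, the following is a minimal sketch (my own illustration, not the authors' code) of how the Waldbauer indices can be computed from the measured dry weights; the function name and all input values are hypothetical.

def waldbauer_indices(leaf_dw_before, leaf_dw_after, larva_dw_before,
                      larva_dw_after, feces_dw, days):
    """Waldbauer (1968) nutritional indices from dry weights (mg) and time (d)."""
    I = leaf_dw_before - leaf_dw_after            # food ingested
    G = larva_dw_after - larva_dw_before          # larval weight gain
    B = (larva_dw_before + larva_dw_after) / 2.0  # mean larval weight
    F = feces_dw
    return {
        "RCR": I / (B * days),         # relative consumption rate
        "RGR": G / (B * days),         # relative growth rate
        "AD": 100.0 * (I - F) / I,     # approximate digestibility (%)
        "ECD": 100.0 * G / (I - F),    # efficiency of conversion of digested food (%)
        "ECI": 100.0 * G / I,          # efficiency of conversion of ingested food (%)
    }

# hypothetical third-instar-to-pupation record for a single larva
print(waldbauer_indices(leaf_dw_before=950.0, leaf_dw_after=410.0,
                        larva_dw_before=12.0, larva_dw_after=85.0,
                        feces_dw=310.0, days=12))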
AMF colonization of Bt and non-Bt maize inoculated with and without G. caledonium Colonization reflects the establishment of the inoculated AMF and indicates whether the AMF inoculation of maize was effective. Three-way repeated-measures ANOVA indicated that GC inoculation (P < 0.001) and sampling year (P < 0.001) both significantly affected AMF colonization, and there were significant interactions between GC inoculation and sampling year (P < 0.001) and between transgenic treatment and sampling year (P = 0.013; Table 1). Compared with the non-GC inoculation, GC inoculation significantly enhanced the AMF colonization of Bt and non-Bt maize in 2017 and 2018, with significant increases for the Bt maize during the seedling (2017: +653.9%; 2018: +284.1%), heading (2017: +589.6%; 2018: +491.0%) and harvest (2017: +457.6%; 2018: …) stages. Food utilization of M. separata larvae fed on Bt and non-Bt maize inoculated with and without G. caledonium Food utilization indexes can, to a certain extent, reflect the preference and adaptability of insects to food materials. Transgenic treatment significantly affected all the measured feeding indexes of M. separata larvae (P < 0.001); AMF inoculation (P < 0.05) and the interaction between transgenic treatment and AMF inoculation (P < 0.001) had significant effects on the RGR, RCR, ECI and AD of M. separata larvae; and there were significant differences in the RGR, ECD, ECI and AD of M. separata larvae between the two sampling years (P < 0.01; Table 2). Moreover, there were significant interactions between transgenic treatment and sampling year on the RGR, RCR and AD of M. separata larvae (P < 0.05; Table 2). Furthermore, there were significant interactions between AMF inoculation and sampling year, and among transgenic treatment, AMF inoculation and sampling year, on the RGR of M. separata larvae fed on the detached leaves of Bt maize and its non-Bt parental line during the heading stage in 2017 and 2018. Opposite trends in the food utilization indexes were observed for M. separata larvae fed on Bt maize versus non-Bt maize inoculated with GC, in comparison with the CK, in both sampling years (Fig. 2). Relative to the CK (i.e., non-GC inoculation), GC inoculation significantly reduced the RGR (−24.2% and −23.3%), RCR (−10.5% and −6.1%), ECI (−15.3% and −18.2%) and AD (−9.0% and −16.4%) of M. separata larvae fed on the detached leaves of Bt maize during the heading stage in 2017 and 2018 (P < 0.05; Fig. 2). However, GC inoculation significantly enhanced the RGR (+36.9% and +56.7%), RCR (+10.8% and +15.0%), ECI (+19.9% and +26.4%) and AD (+17.2% and +19.…) of larvae fed on the detached leaves of non-Bt maize in 2017 and 2018. Reproduction, growth and development of M. separata fed on Bt or non-Bt maize inoculated with and without G. caledonium The effects of a Bt versus non-Bt maize diet on the growth, development and reproduction of M. separata reflect the indirect effects of AMF on the suitability of M. separata mediated by Bt and non-Bt maize. Transgenic treatment (P < 0.05) and AMF inoculation (P < 0.05) significantly affected all the calculated indexes of M. separata in the two sampling years, and there was a significant difference in the pupal weight (P < 0.001) of M. separata between the two sampling years (Table 3).
Moreover, there were significant interactions between transgenic treatment and sampling year on larval life-span (P = 0.008); between AMF inoculation and sampling year on larval life-span (P = 0.011), adult longevity (P = 0.044) and fecundity (P = 0.004); between transgenic treatment and AMF inoculation on all the measured indexes except larval life-span (P < 0.01); and among transgenic treatment, AMF inoculation and sampling year on larval life-span (P = 0.013) and fecundity (P = 0.009) for M. separata fed on the detached leaves of Bt and non-Bt maize inoculated with and without GC during the heading stage in 2017 and 2018 (Table 3). Opposite trends were also seen in the calculated indexes for the reproduction, growth, and development of larvae fed on the detached leaves of Bt and non-Bt maize inoculated with GC, in contrast to the CK, in both sampling years (Fig. 3). In comparison with the CK, GC inoculation significantly extended the larval life-span (+7.6% and +10.4%), shortened the adult longevity (−14.7% and −15.2%), and decreased the pupal weight (−9.1% and −14.1%) and fecundity (−19.2% and −19.9%) of larvae fed on the detached leaves of Bt maize in 2017 and 2018 (P < 0.05; Fig. 3). At the same time, GC inoculation significantly shortened the larval life-span (−12.3% and −10.3%), prolonged the adult longevity (+24.6% and +17.1%), and significantly increased the pupation rate (+8.4% and +11.9%), pupal weight (+10.5% and +11.9%) and fecundity (+35.7% and +14.1%) of larvae fed on the detached leaves of non-Bt maize in 2017 and 2018, and significantly increased the eclosion rate (+16.3%) of larvae fed on the detached leaves of non-Bt maize in 2018 (P < 0.05; Fig. 3). In comparison with the non-Bt maize, Bt maize significantly prolonged the larval life-span (2017: +31.6% and +7.3%; 2018: +31.1% and +6.6%) and shortened the adult longevity (…). The percentage increase in the larval life-span, and the percentage decreases in the pupation rate, pupal weight, eclosion rate, adult longevity and fecundity, of larvae fed on the detached leaves of Bt maize were all obviously greater under GC inoculation than under CK when compared with non-Bt maize. Yields of Bt and non-Bt maize inoculated with and without G. caledonium An increase in yield is the ultimate goal; having shown that AMF inoculation has corresponding indirect effects on M. separata feeding on Bt and non-Bt maize, in this section we evaluated the change in the final economic maize output after inoculation. AMF inoculation and sampling year significantly affected the grain weight per plant (P < 0.001), and the 1,000-grain weight was significantly affected by AMF inoculation and transgenic treatment (P < 0.05). There were significant interactions between sampling year and transgenic treatment on the grain weight per plant (P = 0.041) and the 1,000-grain weight (P = 0.001), and between sampling year and AMF inoculation on the grain weight per plant (P = 0.012), for Bt and non-Bt maize inoculated with and without GC in 2017 and 2018 (Table 1). Compared with the CK, GC inoculation significantly increased the grain weight per plant (Bt maize: +39.6% and +24.1%; non-Bt maize: +33.1% and +30.6%) and the 1,000-grain weight (Bt maize: +8.7% and +7.4%; non-Bt maize: +8.5% and +8.5%) of Bt and non-Bt maize in 2017 and 2018 (P < 0.05; Fig. 4).
DISCUSSION AMF are a group of fungi belonging to the phylum Glomeromycota that penetrate the cortex of the roots of vascular plants (Parniske, 2008; Smith & Read, 2008). On the whole, colonization in 2018 was lower than in 2017; this may be due to hotter weather during the 2018 experiment, which was unfavorable to AMF colonization. However, despite the differences between the data from the two years, the trend of the AMF inoculation effects was consistent. GC inoculation significantly enhanced the AMF colonization of Bt and non-Bt maize from the seedling stage to the harvest stage in both sampling years, and this result demonstrated the effectiveness of the AMF inoculation. No significant difference was found in 2017 or 2018 in the AMF colonization of the two types of maize (Bt or non-Bt). It is therefore presumed that AMF colonization does not differ significantly between Bt maize (Line IE09S034) expressing the Cry1Ie protein and the near-isogenic non-Bt variety (cv. Xianyu 335). Although an important negative effect of Bt on the AMF community has been reported (Castaldini et al., 2005; Turrini et al., 2005), other researchers concluded that the cultivation of Bt crops had no significant impact on the AMF colonization of roots when comparing Bt maize (MEB307) expressing the Cry1Ab protein with the near-isogenic non-Bt variety (Monumental), and that the arrangements of AMF in the roots of non-Bt cotton were almost identical to those in Bt cultivars of cotton (Cry1Ac and Cry2Ab) (Vaufleury et al., 2007; Knox et al., 2008). Hodge, Helgason & Fitter (2010) reported that AMF promoted the absorption and utilization of soil nutrients in maize plants, thus improving the nitrogen, phosphorus and potassium levels of plant tissues and organs, and in turn promoting the growth and development of maize plants. In our study, the yield data also substantiated this viewpoint. GC inoculation significantly increased the grain weight per plant and the 1,000-grain weight regardless of whether the maize was Bt or non-Bt, consistent with the promotion of plant nutrition by AMF. Meanwhile, no notable differences were found in the grain weight per plant of Bt maize compared with that of non-Bt maize, regardless of GC or non-GC inoculation. This result was also consistent with our colonization results, which showed that the Bt treatment had no effect on AMF infection and that there was no difference in yield between Bt maize and its non-Bt parental line. Globally, transgenic Bt maize has been rapidly commercialized to control lepidopteran insects (for example, Ostrinia nubilalis and Mythimna separata) (James, 2012; ISAAA, 2017), but there have been no reports on the responses of M. separata to transgenic Bt maize inoculated with AMF. Most studies have shown that Cry proteins have adverse effects on the life-table parameters of different herbivores (Lawo, Wäckers & Romeis, 2010; Smith & Fischer, 1983). Li, Parajulee & Chen (2018) reported that Bt maize significantly affected the food utilization, reproduction, and growth and development of the armyworm M. separata. The research of Prutz & Dettner (2005) showed that Bt maize decreased the growth rate and increased the mortality of Chilo partellus. In this study, important reductions in the RCR, RGR, AD and ECI occurred when the larvae were fed on Bt maize, inoculated with or without GC, in 2017 and 2018.
Moreover, Bt maize also markedly extended the larval life-span, shortened the adult longevity, and significantly decreased the pupation rate, pupal weight, eclosion rate and fecundity of larvae, regardless of whether the plants were inoculated with GC, in 2017 and 2018. This confirms that the Bt toxin protein has marked negative effects on the food utilization, reproduction, growth, and development of M. separata. Opposite trends were found in the food utilization, reproduction, growth, and development of M. separata fed on Bt versus non-Bt maize inoculated with and without GC. For the measured indexes of food utilization, GC inoculation significantly reduced the RGR, RCR, AD and ECI of larvae fed on Bt maize, while the opposite was observed for larvae fed on non-Bt maize in 2017 and 2018. For the measured indexes of growth, development and reproduction, there were also opposite trends for larvae fed on Bt and non-Bt maize inoculated with and without GC. GC inoculation markedly extended the larval life-span, shortened the adult longevity, and significantly decreased the pupal weight and fecundity of larvae fed on Bt maize in 2017 and 2018. Conversely, GC inoculation significantly shortened the larval life-span, prolonged the adult longevity, and significantly increased the pupation rate, pupal weight and fecundity of M. separata fed on non-Bt maize in 2017 and 2018, and significantly increased the eclosion rate of M. separata fed on non-Bt maize in 2018. This phenomenon can be explained by the fact that AMF inoculation promoted the absorption and utilization of soil nutrients in maize plants, thus improving the nutrient levels (e.g., nitrogen, phosphorus and potassium) of plant leaves (Hodge, Helgason & Fitter, 2010; Rodriguez & Sanders, 2015). Many studies have shown that the nitrogen metabolism and nitrogen level of transgenic Bt crops can affect the expression of Bt toxin protein (Wang et al., 2012; Liu et al., 2019). Stimulating plant N uptake can increase biomass N relative to C and enhance the activity of nitrogen metabolism enzymes (e.g., nitrate reductase and nitrite reductase), transgene expression, and Bt toxin production in Bt crops (Stitt & Krapp, 1999; Gao et al., 2009; Jiang et al., 2017). In brief, AMF could enhance the maize nitrogen level, which is important for Bt protein synthesis; therefore, inoculation with AMF was beneficial to the production of Bt protein. For non-Bt maize, a leaf food source with high nutrition means the intake and utilization of more nutrient elements, which naturally has a more positive and beneficial effect on M. separata. For Bt maize, an increase in nutrient levels may also mean an increase in toxin protein expression, and higher toxin levels are bound to be more damaging to M. separata; this could account for the inverse trends in the RGR, RCR, AD, and ECI of larvae fed on Bt versus non-Bt maize inoculated with GC. In addition, the increased percentage of the larval life-span, and the decreased percentages of the indexes of food utilization, pupation rate, pupal weight, eclosion rate, adult longevity and fecundity, of M. separata larvae fed on Bt maize compared with non-Bt maize were obviously higher under GC inoculation than under CK. This is mainly due to the above-mentioned opposite impacts of AMF treatment on M. separata fed on Bt and non-Bt maize: AMF treatment is more beneficial for M. separata fed on non-Bt maize, but more unfavorable for M. separata fed on Bt maize.
This result indicates that Bt maize will have a better control effect on M. separata when combined with GC inoculation. CONCLUSION AMF can induce changes in plant morphology, physiology, biochemistry, and even gene expression, which in turn may change the food quality experienced by herbivorous insects, thus affecting their feeding tendency, growth, reproduction and harmfulness (Jung et al., 2012). This research indicated that inoculation with the AMF G. caledonium (GC) had positive effects on the AMF colonization of both Bt and non-Bt maize. This, in turn, resulted in higher yields of Bt and non-Bt maize, and the cultivation of Bt maize did not significantly affect AMF colonization. Moreover, Bt maize had marked adverse effects on the food utilization, reproduction, growth, and development of M. separata, particularly in combination with GC inoculation. Furthermore, GC inoculation strengthened the defense of Bt maize against the larvae by reducing their food utilization ability, which negatively affected the reproduction, growth, and development of M. separata. At the same time, GC inoculation posed a risk for non-Bt maize production, because it enhanced the food utilization ability of the larvae and positively affected the reproduction, growth, and development of M. separata, raising the potential risk of pest population outbreaks. The results indicated that inoculation with the AMF G. caledonium is conducive to improving the performance of Bt maize for M. separata control, and is also an environmentally friendly and effective way to increase the yield and reduce the fertilizer use of crop plants. Therefore, we believe that AMF will play important ecological roles in future Bt maize ecosystems.
2019-09-14T20:09:43.410Z
2019-09-12T00:00:00.000
{ "year": 2019, "sha1": "c4c3defd922a85b6fd9d57f464e05bae752152d8", "oa_license": "CCBY", "oa_url": "https://peerj.com/articles/7679.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c4c3defd922a85b6fd9d57f464e05bae752152d8", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
56354964
pes2o/s2orc
v3-fos-license
Causality, Memory Effect and Relativistic Dissipative Hydrodynamics The ideal hydrodynamical description for the dynamics of hot and dense matter achieved in RHIC experiments works amazingly well, particularly for the behavior of collective flow parameters. However, we know that there still exist several open problems in the interpretation of data in terms of the hydrodynamical model [1]. These questions require careful examination to extract quantitative and precise information on the properties of QGP. In particular, we should study the effect of dissipative processes on the collective flow variables. Several works have been done in this direction. However, strictly speaking, a quantitative and consistent analysis of the viscosity within the framework of relativistic hydrodynamics has not yet been done completely. This is because the introduction of dissipative phenomena in relativistic hydrodynamics casts difficult problems, both conceptual and technical. Initially Eckart, and later, Landau-Lifshitz introduced the dissipative effects in relativistic hydrodynamics in a covariant manner. It is, however, known that their formalism leads to the problem of acausality, that is, a pulse signal propagates with infinite speed. Thus, relativistic covariance is not a sufficient condition for a consistent relativistic dissipative dynamics. I. INTRODUCTION The ideal hydrodynamical description for the dynamics of hot and dense matter achieved in RHIC experiments works amazingly well, particularly for the behavior of collective flow parameters. However, we know that there still exist several open problems in the interpretation of data in terms of the hydrodynamical model [1]. These questions require careful examination to extract quantitative and precise information on the properties of QGP. In particular, we should study the effect of dissipative processes on the collective flow variables. Several works have been done in this direction. However, strictly speaking, a quantitative and consistent analysis of the viscosity within the framework of relativistic hydrodynamics has not yet been done completely. This is because the introduction of dissipative phenomena in relativistic hydrodynamics casts difficult problems, both conceptual and technical. Initially Eckart, and later, Landau-Lifshitz introduced the dissipative effects in relativistic hydrodynamics in a covariant manner. It is, however, known that their formalism leads to the problem of acausality, that is, a pulse signal propagates with infinite speed. Thus, relativistic covariance is not a sufficient condition for a consistent relativistic dissipative dynamics.
II. CAUSALITY IN DIFFUSION PROCESS The fundamental problem of a first order theory like the Navier-Stokes theory is attributed to the fact that the diffusion equation is parabolic. The diffusion process is a typical relaxation process of conserved quantities. Thus, it should satisfy the equation of continuity,
∂n/∂t + ∇·j = 0, (1)
where n is the density of a conserved quantity. The irreversible current j is phenomenologically assumed to be proportional to a thermodynamic force F, which is given by the gradient of n,
j = −ζ∇n, (2)
where ζ is the Onsager coefficient. Substituting Eq. (2) into Eq. (1), we get the diffusion equation,
∂n/∂t = ζ∇²n. (3)
Fick's law tells us that the above diffusion process is induced by an inhomogeneous distribution. In Eq. (2), the spatial inhomogeneity immediately gives rise to irreversible currents. However, this is a very idealized case. In general, the generation of irreversible currents has a time delay. Thus, we may think of memory effects by introducing the following memory function [2][3][4],
G(t, t′) = (1/τ_R) e^{−(t−t′)/τ_R}, t > t′, (4)
where τ_R characterizes the memory time and is called the relaxation time. Then, we rewrite Eq. (2) as
j(t) = −∫ dt′ G(t, t′) ζ∇n(t′). (5)
In the limit of τ_R → 0, we have G(t, t′) → δ(t − t′) so that the original equation (2) is recovered [5]. Substituting into the equation of continuity (1), we arrive at
τ_R ∂²n/∂t² + ∂n/∂t = ζ∇²n. (6)
This equation is hyperbolic. This telegraph equation is sometimes called the causal diffusion equation. The maximum velocity of the signal propagation of the causal diffusion equation is [6]
v_max = (ζ/τ_R)^{1/2}. (7)
For a suitable choice of the parameters τ_R and ζ, we can recover the causal propagation of the diffusion process. On the other hand, the diffusion equation corresponds to τ_R = 0 and hence v_max → ∞. This is the reason why the diffusion equation breaks causality.
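To make the causality statement concrete, the following is a small numerical sketch (my own illustration, not from the paper): it integrates the parabolic diffusion equation (3) and the hyperbolic telegraph equation (6) from the same initial pulse using explicit finite differences, and compares how far each signal spreads relative to the causal front set by v_max. The grid, time step, and parameter values are arbitrary choices for the demonstration.

import numpy as np

# Compare the parabolic diffusion equation  dn/dt = zeta * d2n/dx2  with the
# hyperbolic telegraph equation  tau_R * d2n/dt2 + dn/dt = zeta * d2n/dx2,
# whose signal speed is bounded by v_max = sqrt(zeta / tau_R).
zeta, tau_R = 1.0, 0.5
v_max = np.sqrt(zeta / tau_R)

L, N = 20.0, 801
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
dt = 0.2 * dx ** 2 / zeta          # conservative step for the explicit schemes

n_diff = np.exp(-x ** 2)           # common initial pulse
n_tel_old = n_diff.copy()          # telegraph equation needs two time levels
n_tel = n_diff.copy()              # (start at rest: dn/dt = 0 at t = 0)

def lap(n):
    out = np.zeros_like(n)
    out[1:-1] = (n[2:] - 2.0 * n[1:-1] + n[:-2]) / dx ** 2
    return out

t, t_end = 0.0, 4.0
while t < t_end:
    n_diff = n_diff + dt * zeta * lap(n_diff)
    # central-difference update of tau_R * n_tt + n_t = zeta * n_xx
    n_new = (2.0 * tau_R * n_tel - (tau_R - 0.5 * dt) * n_tel_old
             + dt ** 2 * zeta * lap(n_tel)) / (tau_R + 0.5 * dt)
    n_tel_old, n_tel = n_tel, n_new
    t += dt

# The diffusive profile is nonzero arbitrarily far out at any t > 0, while the
# telegraph signal stays (up to discretization error) inside |x| < 1 + v_max*t.
front = 1.0 + v_max * t_end
print("v_max =", v_max)
print("telegraph amplitude beyond the causal front:",
      np.abs(n_tel[np.abs(x) > front + 1.0]).max())
print("diffusion amplitude beyond the causal front:",
      np.abs(n_diff[np.abs(x) > front + 1.0]).max())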
III. RELATIVISTIC DISSIPATIVE HYDRODYNAMICS Eckart and Landau-Lifshitz derived relativistic dissipative hydrodynamics following non-equilibrium thermodynamics. Their theories are just covariant versions of the Navier-Stokes equation, and the corresponding equations still continue to be parabolic. As a matter of fact, the irreversible currents of the Landau-Lifshitz theory (LL) are constructed as follows. First of all, the energy-momentum tensor is expressed as
T^{μν} = (ε + p + Π) u^μ u^ν − (p + Π) g^{μν} + π^{μν}, (8)
where ε, p, u^μ, Π and π^{μν} are respectively the energy density, the pressure, the four velocity of the fluid, and the bulk and shear viscous stresses. The velocity field satisfies u^μ u_μ = 1. The tensor P^{μν} is the projection operator onto the space orthogonal to u^μ, given by P^{μν} = g^{μν} − u^μ u^ν. On the other hand, the current for the conserved quantity (e.g., baryon number) takes the form
N^μ = n u^μ + ν^μ, (9)
where ν^μ is the heat conduction part of the current. It should be noted that for the irreversible currents we require the constraints u_μ π^{μν} = 0 and u_μ ν^μ = 0. Then, the divergence of the entropy four flux is
∂_μ s^μ = −(Π/T) ∂_μ u^μ + (π^{μν}/T) ∂_μ u_ν − ν^μ ∂_μ α, (10)
where α = μ/T and μ is the chemical potential. From the second law of thermodynamics, the r.h.s. of the equation should be positive. Then, the irreversible currents are given by Π = −ζ ∂_α u^α, π^{μν} = η P^{μναβ} ∂_α u_β and ν^μ = −κ P^{μν} ∂_ν α, where ζ, η and κ are the bulk viscosity, shear viscosity and thermal conductivity coefficients, respectively. Here, P^{μναβ} is the double symmetric traceless projection,
P^{μναβ} = (1/2)(P^{μα} P^{νβ} + P^{μβ} P^{να}) − (1/3) P^{μν} P^{αβ}. (11)
One can see that the irreversible currents are induced by inhomogeneous distributions, and the spatial inhomogeneity immediately gives rise to the irreversible current. This is the same structure as in the diffusion equation. In this sense, the LL is parabolic and does not obey causality. To solve this problem, we will introduce the memory effect in the same way as for the diffusion equation, using the same memory function as in Eq. (4). Thus, the modified irreversible currents are [7]
Π(τ) = Π_0 e^{−(τ−τ_0)/τ_R} − ∫ dτ′ G(τ, τ′) ζ ∂_α u^α (τ′),
π^{μν}(τ) = π^{μν}_0 e^{−(τ−τ_0)/τ_R} + ∫ dτ′ G(τ, τ′) η P^{μναβ} ∂_α u_β (τ′),
ν^μ(τ) = ν^μ_0 e^{−(τ−τ_0)/τ_R} − ∫ dτ′ G(τ, τ′) κ P^{μν} ∂_ν α (τ′), (12)
where τ = τ(r, t) is the local proper time. The initial values of the currents, Π_0, π^{μν}_0 and ν^μ_0, are given at an arbitrary initial time. IV. BJORKEN'S SCALING SOLUTION To see how the above scheme works, let us apply it to the one dimensional scaling solution of the Bjorken model. The time component of the divergence of T^{μν} gives
dε/dτ = −(ε + p)/τ + (Π + Ω)/τ, (13)
where Π and Ω denote the bulk and shear viscous corrections to the longitudinal pressure in the scaling coordinates (with signs chosen such that positive values reduce the cooling rate). The equation for the space component is automatically satisfied by the scaling ansatz, showing its consistency. For simplicity, we consider only the effect of the shear viscosity. (The contribution of the bulk viscosity is the same as that of the shear viscosity in this simple model.) A typical estimate from kinetic theory shows that the shear viscosity η is proportional to the entropy density s, η = bs, where b is a constant [8]. Following Ref. [8], we choose b = 1.1. Furthermore, the relaxation time is given by τ_R = 3η_{IS}/(4p) = 3η/(8p) [8]. We further assume the equation of state of the ideal gas. In Fig. 1, we show the energy density ε obtained by solving Eq. (13) as a function of proper time τ. As the initial condition, we set ε(τ_0) = 1 GeV/fm³, Π(τ_0) = Ω(τ_0) = 0 at the initial proper time τ_0 = 0.1 fm/c. The first two lines from the top represent the results of the LL. The next two lines show the results of our theory. The last line is the result of ideal hydrodynamics. For the solid lines, we calculated with viscosity and relaxation time that depend on temperature. Initially, because of the memory effect, the effect of viscosity is small and the behavior of our theory is similar to that of ideal hydrodynamics. After the time becomes larger than the relaxation time, the memory effect is no longer effective and the behavior is similar to the result of the LL. As we have mentioned, the behavior of our theory is the same as the result obtained in Ref. [8] in this case. For the dashed lines, we calculated with constant viscosity and relaxation time, η = η(ε_0) and τ_R = τ_R(ε_0). In this case, the viscosity is constant, so the heat production persists longer and the energy density has a smaller slope as a function of time asymptotically. Sometimes the emergence of the initial heat-up in the LL (the dashed curve in Fig. 1) is interpreted as an intrinsic problem of the first order theory. However, such behavior can also appear even in a second order theory. In Fig. 2, we set Π(τ_0) = ζ(τ_0)/τ_0 and Ω(τ_0) = η(τ_0)/τ_0 as the initial conditions. In particular, the initial heat-up also appears at second order, depending on the initial condition for the irreversible currents (see Fig. 2).
Therefore, this heat-up is not a problem of the first order theory but rather a specific property of the scaling ansatz. This was already pointed out by Muronga. The physical reason for this heat-up is the use of the Bjorken solution for the velocity field. In this case, the system acts as if an external force were applied to keep the velocity field a given function of τ. Thus, depending on the relative intensity of the viscous terms compared to the pressure, the external work converted into local heat production can overcome the temperature decrease due to the expansion. V. SUMMARY AND CONCLUDING REMARKS In this contribution, we discuss relativistic dissipative hydrodynamics consistent with causality, obtained by introducing memory effects into the Landau-Lifshitz theory. In this way, the simple physical structure of the LL is preserved. The resulting equation of motion then becomes hyperbolic and causality can be restored. We have applied our theory to the case of the one-dimensional scaling solution of Bjorken and obtained behavior analogous to that of previous analyses. We showed the time evolution of the temperature. As expected, our theory gives the same result as Ref. [8], because the no-acceleration condition used in Ref. [8] is automatically satisfied in this model. Note that our theory is applicable to the more general case where the acceleration is important. Our theory is particularly adequate for application in hydro-codes such as SPheRIO, which is based on a Lagrangian coordinate system [9]. Implementation of the present theory in full three-dimensional hydrodynamics is now in progress. FIG. 1: The time evolution of the energy density. The dashed curves correspond to the calculations with constant viscosity and relaxation time. The first two lines from the top represent the results of the LL. The next two lines show the results of our theory. The last line is the result of ideal hydrodynamics. FIG. 2: The time evolution of the energy density with initial conditions different from those of Fig. 1. The dashed and short-dashed lines represent the results of the LL and our theory, respectively. For comparison, our result from Fig. 1 is shown again (ideal T^{μν}(τ_0)). The last line from the top is the result of ideal hydrodynamics. In this case, the initial heat-up is observed even in our theory.
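As an illustration of how such a scheme behaves in practice, the following sketch (my own, not the authors' code) integrates a standard shear-only causal Bjorken system of the relaxation type alongside its first-order (Navier-Stokes/LL-like) limit. The equation of state, η = bs with b = 1.1, and τ_R = 3η/(8p) follow the values quoted above; the concrete form of the relaxation equation, the gluon-gas degeneracy factor, and the numerical scheme are my assumptions, not necessarily the exact variables of the text.

import numpy as np

# Boost-invariant Bjorken flow with a relaxation-type (memory) shear stress:
#   d(eps)/dtau = -(eps + p)/tau + Phi/tau,
#   tau_R dPhi/dtau = -(Phi - Phi_NS),  with  Phi_NS = 4*eta/(3*tau),
# reducing to Navier-Stokes for tau_R -> 0 and to ideal flow for eta -> 0.
HBARC = 0.1973                       # GeV fm
A_DEG = (np.pi ** 2 / 30.0) * 16.0   # gluon-gas degeneracy factor (an assumption)
B = 1.1                              # eta = B * s, the value quoted in the text

def transport(eps, tau):
    """Return (p, Phi_NS, tau_R) for eps in GeV/fm^3 at proper time tau in fm/c."""
    p = eps / 3.0                                # ideal-gas EoS
    T = (eps * HBARC ** 3 / A_DEG) ** 0.25       # temperature in GeV
    s = (eps + p) / T                            # entropy density in 1/fm^3
    eta = B * s * HBARC                          # shear viscosity in GeV/fm^2
    return p, 4.0 * eta / (3.0 * tau), 3.0 * eta / (8.0 * p)

def evolve(causal, eps0=1.0, tau0=0.1, tau1=10.0, dt=1e-4):
    eps, phi, tau = eps0, 0.0, tau0              # Phi(tau0) = 0 for the causal run
    while tau < tau1:
        p, phi_ns, tau_R = transport(eps, tau)
        if not causal:                           # first-order limit: instantaneous Phi
            phi = phi_ns
        eps += dt * (-(eps + p) / tau + phi / tau)
        if causal:                               # relaxation toward the NS value
            phi += dt * (-(phi - phi_ns) / tau_R)
        tau += dt
    return eps

print("causal (memory) theory:", evolve(causal=True))
print("first-order LL limit  :", evolve(causal=False))
# At tau0 = 0.1 fm/c the first-order limit starts with Phi_NS > eps + p, i.e.,
# d(eps)/dtau > 0 (the initial heat-up), while the causal run first cools like
# ideal flow because the memory keeps Phi small at early times.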
2018-12-15T05:55:16.579Z
2007-06-01T00:00:00.000
{ "year": 2007, "sha1": "92f0ed1aba57edfc035f2669c23d07cc22d134d4", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/bjp/a/Lm8CRcr9VJMZqwd4YZ96PPq/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "92f0ed1aba57edfc035f2669c23d07cc22d134d4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
197662327
pes2o/s2orc
v3-fos-license
{\epsilon}-BMC: A Bayesian Ensemble Approach to Epsilon-Greedy Exploration in Model-Free Reinforcement Learning Resolving the exploration-exploitation trade-off remains a fundamental problem in the design and implementation of reinforcement learning (RL) algorithms. In this paper, we focus on model-free RL using the epsilon-greedy exploration policy, which despite its simplicity, remains one of the most frequently used forms of exploration. However, a key limitation of this policy is the specification of $\varepsilon$. In this paper, we provide a novel Bayesian perspective of $\varepsilon$ as a measure of the uniformity of the Q-value function. We introduce a closed-form Bayesian model update based on Bayesian model combination (BMC), based on this new perspective, which allows us to adapt $\varepsilon$ using experiences from the environment in constant time with monotone convergence guarantees. We demonstrate that our proposed algorithm, $\varepsilon$-\texttt{BMC}, efficiently balances exploration and exploitation on different problems, performing comparably to or outperforming the best tuned fixed annealing schedules and an alternative data-dependent $\varepsilon$ adaptation scheme proposed in the literature. INTRODUCTION Balancing exploration with exploitation is a well-known and important problem in reinforcement learning [Sutton and Barto, 2018]. If the behaviour policy focuses too much on exploration rather than exploitation, then this could hurt performance in an on-line setting. Furthermore, on-policy algorithms such as SARSA or TD(λ) might not converge to a good policy. On the other hand, if the exploration policy focuses too much on exploitation rather than exploration, then the state space might not be explored sufficiently and an optimal policy would not be found. Historically, numerous exploration policies have been proposed for addressing the exploration-exploitation trade-off in model-free reinforcement learning, including Boltzmann exploration and epsilon-greedy [McFarlane, 2018]. There, this trade-off is often controlled by one or more tuning parameters, such as ε in epsilon-greedy or the temperature parameter in Boltzmann exploration. However, these parameters typically have to be handcrafted or tuned for each task in order to obtain good performance. This motivates the design of exploration algorithms that adapt their behaviour according to some measure of the learning progress. The simplest approaches adapt the tuning parameters of a fixed class of exploration policies such as epsilon-greedy [Tokic, 2010]. Other methods, such as count-based exploration [Thrun, 1992, Bellemare et al., 2016, Ostrovski et al., 2017] and Bayesian Q-learning [Dearden et al., 1998], use specialized techniques to develop new classes of exploration policies. However, despite the recent developments in exploration strategies, epsilon-greedy is still often the exploration approach of choice [Vermorel and Mohri, 2005, Heidrich-Meisner, 2009, Mnih et al., 2015, Van Hasselt et al., 2016]. Epsilon-greedy is both intuitive and simpler to tune than other approaches, since it is completely parameterized by one parameter, ε. Another benefit of this policy is that it can be easily combined with more sophisticated frameworks, such as options [Bacon et al., 2017]. Unfortunately, the performance of epsilon-greedy in practice is highly sensitive to the choice of ε, and existing methods for adapting ε from data are ad-hoc and offer little theoretical justification.
In this paper, we take a fully Bayesian perspective on adapting ε based on return data. Recent work has demonstrated the strong potential of a Bayesian approach for parameter tuning in model-free reinforcement learning [Downey and Sanner, 2010]. Another key advantage of a fully Bayesian approach over heuristics is the ability to specify priors on parameters, such as the predictive inverse variance of returns, τ in this work, which are more robust to noise or temporary digressions in the learning process. In addition, our approach can be combined with other exploration policies such as Boltzmann exploration [Tokic and Palm, 2011]. Specifically, we contribute: 1. A new Bayesian perspective of expected SARSA as an ε-weighted mixture of two models, the greedy (Q-learning) bootstrap and one which averages uniformly over all Q-values (Section 4.1); 2. A Bayesian algorithm ε-BMC (Algorithm 1) that is robust (Section 4.2), general, and adapts ε efficiently (Section 4.3); 3. A theoretical convergence guarantee of our proposed algorithm (Theorem 1). Empirically, we evaluate the performance of ε-BMC on domains with discrete and continuous state spaces, using tabular and approximate RL methods. We empirically show that our algorithm can outperform exploration strategies that fix or anneal ε based on time, and even existing adaptive algorithms. In the end, ε-BMC is a novel, efficient and general approach to adapting the exploration parameter in epsilon-greedy policies that empirically outperforms a variety of fixed annealing schedules and other ad-hoc approaches. RELATED WORK Our paper falls within the scope of adaptive epsilon-greedy algorithms. Perhaps the most similar approach to our work is the Value Differences Based Exploration (VDBE) algorithm of Tokic [2010], in which ε was modelled using a moving average and updated according to the Bellman (TD) error. However, that algorithm was presented for stationary-reward multi-armed bandits. Modelling Q-values using normal-gamma priors, as done in our paper, is a cornerstone of Bayesian Q-learning [Dearden et al., 1998]. However, that paper is fundamentally different from ours, in that it addresses the problem of exploration by adding a bonus to the Q-values that estimates the myopic value of perfect information. Our paper, on the other hand, applies the normal-gamma prior only to model the variance of the returns, while the exploration is handled using the epsilon-greedy policy with ε modelled using a Beta distribution. MARKOV DECISION PROCESSES In this paper, we denote a Markov decision process (MDP) as a tuple ⟨S, A, T, R, γ⟩, where S is a set of states, A is a finite set of actions, T : S × A × S → [0, ∞) is a transition function for the system state, R : S × A × S → R is a bounded reward function, and γ ∈ (0, 1) is a discount factor. Randomized exploration policies are sequences of mappings from states to probability distributions over actions. Given an MDP ⟨S, A, T, R, γ⟩, for each state-action pair (s, a) and policy π, we define the expected return Q^π(s, a) = E[ Σ_{t=0}^∞ γ^t r_{t+1} | s_0 = s, a_0 = a ], where s_t is sampled from T, a_t is sampled from π(·|s_t), and r_{t+1} = R(s_t, a_t, s_{t+1}). The associated value function is V^π(s) = max_{a∈A} Q^π(s, a), and the objective is to learn an optimal policy π* that attains the supremum of V^π(s) over all policies. Puterman [2014] contains a more detailed treatment of this subject.
REINFORCEMENT LEARNING In the reinforcement learning setting, neither T nor R is known, so optimal policies are learned from experience, defined as sequences of transitions (s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}), t = 0, 1, . . ., broken up into episodes. Here, states and rewards are sampled from the environment, and actions follow some exploration policy π. Given an estimate G̃_t of the expected return at time t starting from state s = s_t and taking action a = a_t, temporal difference (TD) learning updates the Q-values as follows: Q_{t+1}(s_t, a_t) = Q_t(s_t, a_t) + η_t (G̃_t − Q_t(s_t, a_t)), where η_t ∈ (0, 1] is a problem-dependent learning rate parameter. Typically, G̃_t is computed by bootstrapping from the current Q-values, in order to reduce variance. Two of the most popular bootstrapping algorithms are Q-learning and SARSA, given respectively as: G̃^Q_t = r_{t+1} + γ max_{a′} Q_t(s_{t+1}, a′), (1) and G̃^S_t = r_{t+1} + γ Q_t(s_{t+1}, a_{t+1}). (2) Q-learning is an example of an off-policy algorithm, whereas SARSA is on-policy. Under relatively mild conditions and in tabular settings, Q-learning has been shown to converge to the optimal policy with probability one [Watkins and Dayan, 1992]. One additional algorithm that is important in this work is expected SARSA, G̃^ExpS_t = r_{t+1} + γ Σ_{a′∈A} π(a′|s_{t+1}) Q_t(s_{t+1}, a′), (3) which is similar to SARSA, but in which the uncertainty of the next action a_{t+1} with respect to π is averaged out. This results in considerable variance reduction as compared to SARSA, and theoretical properties of this algorithm are detailed in Van Seijen et al. [2009]. Sutton and Barto [2018] provides a comprehensive treatment of reinforcement learning methods. EPSILON-GREEDY POLICY In this paper, exploration is carried out using ε-greedy policies, defined formally as π_ε(a|s) = ε_t/|A| + (1 − ε_t) 1[a = arg max_{a′} Q_t(s, a′)]. (4) In other words, π_ε samples a random action from A with probability ε_t ∈ [0, 1], and otherwise selects the greedy action according to Q_t. As a result, ε_t can be interpreted as the relative importance placed on exploration. The optimal value of the parameter ε_t is typically problem-dependent, and found through experimentation. Often, ε_t is annealed over time in order to favor exploration at the beginning, and exploitation closer to convergence [Sutton and Barto, 2018]. However, such approaches are not adaptive since they do not take into account the learning process of the agent. In this paper, our main objective is to derive a data-driven tuning strategy for ε_t that depends on current learning progress rather than trial and error. ADAPTIVE EPSILON-GREEDY In this section, we show how the expected return under epsilon-greedy policies can be written as an average of two return models weighted by ε. There are two relevant Bayesian methods for combining multiple models based on evidence: Bayesian model averaging (BMA), and Bayesian model combination (BMC). Generally, the Bayesian model combination approach is preferred to model averaging, since it provides a richer space of hypotheses and reduced variance [Minka, 2002]. By interpreting ε as a random variable whose posterior distribution can be updated on the basis of observed data, BMC naturally leads to a method for ε adaptation. A BAYESIAN INTERPRETATION OF EXPECTED SARSA WITH THE EPSILON-GREEDY POLICY We begin by combining the definition of expected SARSA (3) with the ε-greedy policy (4). For s′ = s_{t+1}, a* = arg max_{a′} Q_t(s′, a′), and r = r_{t+1} we have G̃^ExpS_t = r + γ [ (1 − ε_t) Q_t(s′, a*) + (ε_t/|A|) Σ_{a′∈A} Q_t(s′, a′) ] = (1 − ε_t) G̃^Q_t + ε_t G̃^U_t, (5) where G̃^Q_t is the Q-learning bootstrap (1) and G̃^U_t = r + γ (1/|A|) Σ_{a′∈A} Q_t(s′, a′) (6) is an estimate that uniformly averages over all the action-values, which we call the uniform model.
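As a quick sanity check of the decomposition (5)-(6) just derived, the following sketch (my own illustration; the action count, discount, and random values are arbitrary) verifies numerically that the expected-SARSA target under an epsilon-greedy policy equals the ε-weighted mixture of the greedy and uniform targets.

import numpy as np

rng = np.random.default_rng(0)
n_actions, gamma, eps = 4, 0.95, 0.3
r = 1.0
q_next = rng.normal(size=n_actions)          # Q_t(s', .) for the next state

pi = np.full(n_actions, eps / n_actions)     # epsilon-greedy action probabilities
pi[np.argmax(q_next)] += 1.0 - eps

g_exp_sarsa = r + gamma * pi @ q_next        # expected-SARSA bootstrap, eq. (3)
g_q = r + gamma * q_next.max()               # greedy (Q-learning) model, eq. (1)
g_u = r + gamma * q_next.mean()              # uniform model, eq. (6)

assert np.isclose(g_exp_sarsa, (1.0 - eps) * g_q + eps * g_u)   # eq. (5)
print(g_exp_sarsa, (1.0 - eps) * g_q + eps * g_u)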
This leads to the following important observation: expected SARSA can be viewed as a probability-weighted average of two models, the greedy model G̃^Q_t that trusts the current Q-value estimates and acts optimally with respect to them, and the uniform model G̃^U_t that completely distrusts the current Q-value estimates and consequently places a uniform belief over them. Under this interpretation, 1 − ε_t and ε_t are the posterior beliefs assigned to the two aforementioned models, respectively. In the following subsections, we verify this simple fact algebraically in the context of Bayesian model combination. We also develop a method for maintaining the (approximate) posterior belief state efficiently, with a computational cost that is constant in both space and time, and with provable convergence guarantees. BAYESIAN Q-LEARNING In order to facilitate tractable learning and inference, we assume that the return observation q_{s,a} at time t, given the model m ∈ {Q, U}, is normally distributed: q_{s,a} | m, τ ∼ N(G̃^m_t, τ^{−1}), (7) where the means G̃^Q_t and G̃^U_t are given in (1) and (6), respectively, and τ > 0 is the inverse of the variance, or precision. This assumption can be justified by viewing the return as a discounted sum of future (random) reward observations, and appealing to the central limit theorem when γ is close to 1 and the MDP is ergodic [Dearden et al., 1998]. There are two special cases of interest in this work. In the first case, τ is allowed to be constant across all state-action pairs and models, and naturally leads to state-independent ε adaptation. This approach is particularly advantageous when it is costly or impossible to maintain independent statistics per state, such as when the state space is very large or continuous in nature. In the second case, independent statistics are maintained per state and lead to state-dependent exploration. In order to update τ, we consider the standard normal-gamma model: µ, τ ∼ NormalGamma(µ_0, τ_0, a_0, b_0), q_{s,a} | µ, τ ∼ N(µ, τ^{−1}), (8) where the q_{s,a} are i.i.d. given µ and τ. Since the returns in different state-action pairs are dependent, this assumption is likely to be violated in practice. However, it leads to a compact learning representation necessary for tractable Bayesian inference, and has been used effectively in the existing literature in similar forms [Dearden et al., 1998]. Furthermore, (8) is not used to model the Q-values directly in our paper, but rather to facilitate robust estimation of τ, as we now show. Given data D = {q_{s_i,a_i,i} | i = 0, 1, . . . , t − 1} of previously observed returns, the joint posterior distribution of µ and τ with likelihood (7) and prior (8) is also normal-gamma distributed, and so the marginal posterior distribution of τ, P(τ|D), is gamma distributed with parameters: a_t = a_0 + t/2, b_t = b_0 + (1/2) [ t σ̂²_t + t τ_0 (μ̂_t − µ_0)² / (τ_0 + t) ], (9) where μ̂_t and σ̂²_t are the sample mean and variance of the returns in D, respectively [Bishop, 2006]. These quantities can be updated online after each new observation d in constant time [Welford, 1962]. Finally, for each model m ∈ {Q, U}, we marginalize over the uncertainty in τ, using (7) and (9), as follows: P(q_{s,a} | m, D) = ∫ N(q_{s,a}; G̃^m_t, τ^{−1}) Gamma(τ; a_t, b_t) dτ. Finally, we have: P(q_{s,a} | m, D) = St(q_{s,a}; G̃^m_t, a_t/b_t, 2 a_t), (10) where St(µ, λ, ν) is the three-parameter Student t-distribution [Bishop, 2006]. Therefore, marginalizing over the unknown precision τ leads to a t-distributed likelihood function. Alternatively, one could simply use the Gaussian likelihood in equation (7) and treat τ as a problem-dependent tuning parameter. However, the heavy-tail property of the t-distribution is advantageous in the non-stationary setting typically encountered in reinforcement learning applications, where Q-values change during the learning phase.
We now show how to link this update with the expected SARSA decomposition (5) to derive an adaptive epsilon-greedy policy. EPSILON-BMC: ADAPTING EPSILON USING BAYESIAN MODEL COMBINATION In the general setting of Bayesian model combination, we model the uncertainty in Q-values for each state-action pair (s, a) as random variables with posterior distribution P(q_{s,a}|D). The expected posterior return can be written as an average over all possible combinations of the greedy and uniform models, E[q_{s,a} | D] = ∫_0^1 p_t(w) [ (1 − w) G̃^Q_t + w G̃^U_t ] dw, (11) where w is the weight assigned to the uniform model and 1 − w is the weight assigned to the greedy model [Monteith et al., 2011]. As will be verified shortly, the expectation of this weight w given the past return data D will turn out to be a Bayesian interpretation of ε_t. The belief over w is maintained as a posterior distribution p_t(w) = P(w|D). Continuing from (11) and using (10): E[q_{s,a} | D] = (1 − E[w|D]) G̃^Q_t + E[w|D] G̃^U_t, which is exactly (5) except that now ε_t = E[w|D]. We have thus shown that the expected SARSA bootstrap with data-driven ε_t can be viewed in terms of Bayesian model combination. We denote this new estimate ε^BMC_t. The posterior distribution p_t(w) = P(w|D) is updated recursively by Bayes' rule: p_{t+1}(w) ∝ p_t(w) [ (1 − w) P(d | Q, D) + w P(d | U, D) ] for every new observation d. Since the number of terms in p_t grows exponentially in |D|, it is necessary to use posterior approximation techniques to compute E[w|D]. One approach to address the intractability of computing an exact posterior p_t(w) is to sample directly from the distribution. However, such an approach is inherently noisy and inefficient in practice. Instead, we apply the Dirichlet moment-matching technique [Hsu and Poupart, 2016, Gimelfarb et al., 2018] that was shown to be effective at differentiating between good and bad models from data, easy to implement, and efficient. In particular, we apply the approach in Gimelfarb et al. [2018] with the models G̃^Q_t and G̃^U_t, by matching the first and second moments of p_t to those of the beta distribution Beta(α_t, β_t) and solving the resulting system of equations for α_t and β_t. The closed-form solution is: w̄_k = Π_{j=0}^{k−1} (α_t + j)/(α_t + β_t + j), k = 1, 2, 3, (12) Z_t = e^Q_t (1 − w̄_1) + e^U_t w̄_1, (13) m_1 = [ e^Q_t (w̄_1 − w̄_2) + e^U_t w̄_2 ] / Z_t, (14) m_2 = [ e^Q_t (w̄_2 − w̄_3) + e^U_t w̄_3 ] / Z_t, (15) α_{t+1} = m_1 (m_1 − m_2)/(m_2 − m_1²), β_{t+1} = (1 − m_1)(m_1 − m_2)/(m_2 − m_1²), (16) where e^U_t and e^Q_t are the respective probabilities of observing a return d = G̃^ExpS_t under the distributions (10), and w̄_k denotes the k-th moment of Beta(α_t, β_t). It follows that ε^BMC_{t+1} = E[w|D] = α_{t+1}/(α_{t+1} + β_{t+1}) = m_1. (17) All quantities, including a_t, b_t, α_t and β_t, can be computed online in constant time and with minimal storage overhead, without caching D. We call this approach ε-BMC and present the corresponding pseudo-code in Algorithm 1. Therein, lines with a * indicate additions to the ordinary expected SARSA algorithm. Algorithm 1 ε-BMC with Expected SARSA 1: * initialize µ_0, τ_0, a_0, b_0, μ̂ = 0, σ̂² = ∞, α, β 2: for each episode do 3: initialize state s 4: for each step of the episode do 5: * ε ← α/(α + β) 6: choose action a using the ε-greedy policy π_ε (4) 7: take action a, observe r and s′ 8: compute G̃^Q using (1) 9: compute G̃^U using (6) 10: compute G̃^ExpS ← (1 − ε) G̃^Q + ε G̃^U 11: Q(s, a) ← Q(s, a) + η (G̃^ExpS − Q(s, a)) 12: * update μ̂ and σ̂² using observation G̃^ExpS 13: * compute a and b using (9) 14: * compute e^Q and e^U using (10) 15: * update α and β using (12)-(16) 16: s ← s′ 17: end for 18: end for In Algorithm 1, the expected SARSA return G̃^ExpS is used to update both the posterior on ε and the Q-values. However, it is possible to update the Q-values in line 11 using a different estimator of the future return, including Q-learning (1), SARSA (2), or other approaches. The Q-values could also be approximated using a deep neural network or other function approximator. The algorithm can also be run off-line by caching D for an entire episode and then updating ε. State-dependent exploration can be easily implemented by maintaining the posterior statistics independently for each state, or approximating them using a neural network.
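A compact sketch of the ε-BMC update loop described above follows. The class name, the Welford bookkeeping, and the exact arrangement of the moment-matching algebra are my reading of the construction (the equations as reconstructed here), not the authors' released code.

import numpy as np
from scipy import stats

class EpsilonBMC:
    def __init__(self, mu0=0.0, tau0=1.0, a0=1.0, b0=1.0, alpha=1.0, beta=4.0):
        self.mu0, self.tau0, self.a0, self.b0 = mu0, tau0, a0, b0
        self.alpha, self.beta = alpha, beta      # Beta posterior over w
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    @property
    def epsilon(self):                           # eps_t = E[w|D] = alpha/(alpha+beta)
        return self.alpha / (self.alpha + self.beta)

    def update(self, g_q, g_u, g_exp_sarsa):
        # online sample mean/variance of the observed targets (Welford, 1962)
        self.n += 1
        delta = g_exp_sarsa - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (g_exp_sarsa - self.mean)
        var = self.m2 / self.n

        # gamma posterior over the precision tau, eq. (9)
        a = self.a0 + 0.5 * self.n
        b = self.b0 + 0.5 * (self.n * var
            + self.n * self.tau0 * (self.mean - self.mu0) ** 2 / (self.tau0 + self.n))

        # Student-t marginal likelihood of the target under each model, eq. (10)
        scale = np.sqrt(b / a)
        e_q = stats.t.pdf(g_exp_sarsa, df=2.0 * a, loc=g_q, scale=scale)
        e_u = stats.t.pdf(g_exp_sarsa, df=2.0 * a, loc=g_u, scale=scale)

        # Beta moment matching, eqs. (12)-(16)
        al, be = self.alpha, self.beta
        w1 = al / (al + be)
        w2 = w1 * (al + 1.0) / (al + be + 1.0)
        w3 = w2 * (al + 2.0) / (al + be + 2.0)
        z = e_q * (1.0 - w1) + e_u * w1
        m1 = (e_q * (w1 - w2) + e_u * w2) / z
        m2 = (e_q * (w2 - w3) + e_u * w3) / z
        c = (m1 - m2) / (m2 - m1 ** 2)
        self.alpha, self.beta = m1 * c, (1.0 - m1) * c

In an agent, update would be called once per step (line 15 of Algorithm 1) with the current bootstraps, and epsilon would be read before each action selection.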
A final advantage of our algorithm is a provable convergence guarantee under fairly general assumptions. Theorem 1. If ε_0 = α_0/(α_0 + β_0) ≤ 1/2, then the sequence ε^BMC_t is monotonically non-increasing, and hence converges. Proof. Let ε_t = ε^BMC_t and observe that ε_{t+1} can be rewritten in a form (18) in which the first line uses (15)-(17), the second uses (12), and the third uses (15) and (16). Now, if ε_t ≤ 1/2, then from (18) we conclude that ε_{t+1} ≤ ε_t. The first statement of the theorem follows from the assumption ε_0 ≤ 1/2 and a standard induction argument. The second statement follows from the monotone convergence theorem (see, e.g., Rudin [1976], p. 56). The convergence of ε-BMC holds using any value function representation, including neural networks. It is important to note that the convergence of ε-BMC can only be guaranteed when ε is initialized in [0, 0.5]. However, this is not a concern in practice, since it has been found that there is no significant gain in using values of ε larger than 0.5 [dos Santos Mignon and da Rocha, 2017]. EMPIRICAL EVALUATION To demonstrate the ability of Algorithm 1 to adapt in a variety of environments, we consider a deterministic, finite-state grid-world domain, the continuous-state cart-pole control problem, and a stochastic, discrete-state supply-chain problem. The third domain was chosen to show how our algorithm performs when the action space is large and the problem is stochastic. We considered two different reinforcement learning algorithms: on-policy tabular expected SARSA [Sutton and Barto, 2018], and off-policy DQN with experience replay [Mnih et al., 2015]. The parameter settings are listed in Tables 1 and 2 in the supplementary materials. All experiments were run independently 100 times, and mean curves with shaded standard error are reported. We do not compare to Tokic and Palm [2011], since that paper falls outside the scope of epsilon-greedy policies. However, we reiterate that it is a trivial matter to interchange VDBE and ε-BMC in that framework. GRID-WORLD The first benchmark problem is the discrete deterministic 5-by-5 grid-world navigation problem with sub-goals presented in Ng et al. [1999]. Valid moves incur a cost of 0.1, and invalid moves incur a cost of 0.2, in order to encourage the agent to solve the task in the least amount of time. We set γ = 0.99. Testing consists of running a single episode, starting from the same initial state, using the greedy policy at the end of each episode. The results are shown in Figures 1 and 2. CART-POLE The second problem is the continuous deterministic cart-pole control problem. A reward of 1.0 is provided until the pole falls, to encourage the agent to keep the pole upright. We also set γ = 0.95. To run the tabular expected SARSA algorithm and VDBE, we discretize the four-dimensional state space into 3 × 3 × 4 × 3 = 108 equal regions. Since the initial position is randomized, testing consists of evaluating the greedy policy on 10 independent episodes and averaging the returns. Since over-fitting was a significant concern for DQN, we stopped training as soon as perfect test performance (the pole was never dropped) was observed over four consecutive episodes. The results are shown in Figures 3 and 4. SUPPLY-CHAIN MANAGEMENT This supply-chain management problem was described in Kemmer et al. [2018], and consists of a factory and warehouses. The agent must decide how much inventory to produce at the factory and how much inventory to ship to the warehouse(s) to meet the demand.
The parameters used in our experiment are: K = 1, p = 0.5, κ_pr = 0.1, κ_{st,j} = 0.02, κ_{tr,j} = 0.1, ζ_j = 5, c_j = 50, ρ_max = 10, and demand follows a Poisson distribution with rate λ = 2.5. We also set a transportation limit of 10 items per period, and assume that unfulfilled demand is not backlogged, but lost forever. The initial state is always (10, 0) for training and testing (i.e., 10 items initially at the factory and 0 at the warehouse). We set γ = 0.95. As for cart-pole, testing is done by averaging the returns of 10 independent trials using the greedy policy. The results are illustrated in Figures 5 and 6. DISCUSSION Overall, we see that ε-BMC consistently outperformed, or performed similarly to, all other types of ε annealing strategies, including VDBE. However, ε-BMC converged slightly later than VDBE on the grid-world domain, and slightly later than the fixed annealing strategy ε_t = (1/2)(t + 1)^{−0.25} on the supply-chain problem, using tabular expected SARSA. Nevertheless, in the former case ε-BMC outperformed all fixed tuning strategies, and in the latter case it outperformed VDBE by a large margin. These observations are related to the speed of convergence; asymptotically, ε-BMC approached the performance of the best policy that was attained (for grid-world this is indeed the optimal policy). While it performed well on the simple grid-world domain, VDBE performed considerably worse than ε-BMC on the more complex supply-chain problem. We believe that the Bayesian approach of ε-BMC smooths out the noise in the return signals better than VDBE and other ad-hoc approaches for adapting ε. This also suggests why our algorithm performed better with DQN. Furthermore, we see that no single family of annealing strategies worked consistently well across all domains and algorithms. For instance, geometric decay strategies worked well on the grid-world domain, while performing poorly on the supply-chain problem using tabular SARSA. The power decay strategies worked well on the supply-chain problem using tabular SARSA, but failed to match the performance of other strategies when switching to DQN. Also, the performance of VDBE was highly sensitive to the choice of the σ parameter. A lower value of σ worked well for grid-world and cart-pole, but higher values of σ worked better for supply-chain. The performance of ε-BMC was relatively insensitive to the choice of prior parameters for µ and τ (a_0, b_0, µ_0, τ_0), so we were able to use the same values in all our experiments. However, unsurprisingly, it was more sensitive to the strength of the prior on ε (α_0, β_0). Since we can always set α_0 ≈ β_0, this effectively reduces to the problem of selecting a single parameter that controls the strength of the prior on ε. This is considerably easier to do than to select both a good annealing method and the tuning parameter(s). Figure 6: Average performance (return) on the supply-chain domain using deep Q-learning (legend: ε-BMC; VDBE (σ = 0.05); ε_t = 0.01; ε_t = (1/2) 0.9^t; ε_t = (1/2)(t + 1)^{−1}). CONCLUSION In this paper, we proposed a novel Bayesian approach to solving the exploration-exploitation problem in general model-free reinforcement learning, in the form of an adaptive epsilon-greedy policy. Our algorithm, ε-BMC, tunes the ε parameter automatically from return observations, based on Bayesian model combination and approximate moment-matching inference.
It was argued to be general, efficient, robust, and theoretically grounded, and was shown empirically to outperform a variety of fixed annealing schedules for ε and even a state-of-the-art ε adaptation scheme. In future work, it would be interesting to evaluate the performance of ε-BMC combined with Boltzmann exploration [Tokic and Palm, 2011], as well as the state-dependent version. We believe that it is possible to obtain a Bayesian interpretation of VDBE by placing priors over the Bellman errors and updating them using data, but we have not investigated this approach. It would also be interesting to extend our approach to handle options.
2019-07-20T13:06:50.857Z
2020-07-02T00:00:00.000
{ "year": 2020, "sha1": "e012f897d861d81095c5fff89fab0b9e51d03c16", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e7c936be43b3b39e892e4c066b8f93889a57bc68", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
244921200
pes2o/s2orc
v3-fos-license
Tumor-antigens and immune landscapes identification for prostate adenocarcinoma mRNA vaccine Prostate adenocarcinoma (PRAD) is a leading cause of death among men. Messenger ribonucleic acid (mRNA) vaccines present an attractive approach to achieve satisfactory outcomes; however, the screening of tumor antigens and the selection of vaccination candidates remain a bottleneck in this field. We aimed to investigate tumor antigens for mRNA vaccine development and immune subtypes for choosing appropriate patients for vaccination. We identified eight overexpressed and mutated tumor antigens with poor prognostic value in PRAD, including KLHL17, CPT1B, IQGAP3, LIME1, YJEFN3, KIAA1529, MSH5 and CELSR3. The correlation of those genes with antigen-presenting immune cells was assessed. We further identified three immune subtypes of PRAD (PRAD immune subtype [PIS] 1-3) with distinct clinical, molecular, and cellular characteristics. PIS1 showed better survival and immune cell infiltration, whereas PIS2 and PIS3 showed cold tumor features with poorer prognosis and higher tumor genomic instability. Moreover, these immune subtypes presented distinct associations with immune checkpoints, immunogenic cell death modulators, and prognostic factors of PRAD. Furthermore, immune landscape characterization unraveled the immune heterogeneity among patients with PRAD. To summarize, our study suggests that KLHL17, CPT1B, IQGAP3, LIME1, YJEFN3, KIAA1529, MSH5 and CELSR3 are potential antigens for PRAD mRNA vaccine development, and that patients in the PIS2 and PIS3 groups are more suitable for vaccination. Supplementary Information The online version contains supplementary material available at 10.1186/s12943-021-01452-1. Background Prostate adenocarcinoma (PRAD) is the second most commonly diagnosed malignancy and the fifth leading cause of cancer death among men worldwide [1]. Positive responses in patients with PRAD have rarely been observed after immunotherapies targeting programmed cell death protein 1 (PD-1), PD-ligand 1 (PD-L1), or cytotoxic T lymphocyte antigen 4 (CTLA4). Previously, Sipuleucel-T brought prostate cancer immunotherapy into sharper focus, although no significant effect on progression-free survival was reported [2,3]. Therefore, novel therapeutics should be developed for effective PRAD treatment. In the past 2 years, against the backdrop of the coronavirus disease 2019 pandemic, enthusiasm for mRNA vaccine development, with its advantages of flexibility, productivity, non-genomic integration, and low immunogenicity [4], has also been brought into the field of cancer therapy [5]. A previous phase I/II clinical trial showed good tolerability and favorable immune activation of the mRNA vaccine CV9103 for PRAD; however, the subsequent clinical trials of CV9104, containing two more antigens (prostatic acid phosphatase [PAP] and Mucin-1), were terminated due to failure to improve overall survival (OS) [6]. These findings indicate that antigen selection is critical for activating antigen-presenting cells (APCs) and the immune response. Moreover, the identification of immune subtypes of patients with PRAD for mRNA vaccination is another crucial factor for the curative effect [7]. Hence, this study, exploring novel candidate tumor antigens for a PRAD mRNA vaccine and identifying suitable patients for vaccination, aims to pave an avenue for the application of mRNA vaccines in the PRAD population. Identification of potential tumor antigens of PRAD A total of 733 overexpressed genes in TCGA-PRAD samples were identified
and their distribution in chromosomes was shown in Fig. 1A. We then identified 10881 genes that potentially encode tumor-specific antigens through calculating the fraction of altered genome and tumor mutational counts (Fig. 1B, C). Ten genes with the highest altered genome fractions and mutation counts were displayed in Fig. S1C and D. The 733 overexpressed genes were then intersected with the 10881 mutated tumor antigen-encoding genes, and 311 genes were identified afterwards (Fig. S1E). Cox regression revealed that 13 genes were significantly associated with OS (Fig. 1D) and 70 genes were significantly associated with disease-free interval (DFI) (Fig. 1E). Further intersection analysis indicated that eight genes, including KLHL17, CPT1B, IQGAP3, LIME1, YJEFN3, MSH5, CELSR3 and KIAA1529, were correlated with both OS and DFI (Fig. 1F). The Kaplan-Meier survival curves of OS for those eight genes were shown in Fig. 1G, and higher expression of them was indicative of worse survival. The correlation between them and the infiltration of major APCs, including B cells, macrophages as well as dendritic cells (DCs), was also analyzed. Figure 1H showed a significantly positive correlation of IQGAP3, CELSR3, and KIAA1529 with APCs, whereas the correlation was negative for CPT1B (Fig. S1F). Taken together, our evidence identified tumor-specific antigens with potential for mRNA vaccine development in PRAD. To the best of our knowledge, our study is the first to systematically screen suitable tumor antigens for mRNA vaccine development in PRAD. Although these antigens have not yet been applied in mRNA vaccine development, some of them have been functionally explored in previous studies. For instance, CPT1B silencing could reduce cell proliferation and invasion in PRAD cell lines and its expression level might be regulated by androgen receptors [8]. IQGAP3 was upregulated in most cancer types and predicted a poor prognosis. It might also participate in the Paris forrestii antitumor effect [9]. Besides, CELSR3 downregulation significantly suppresses PRAD cell proliferation and migration [10]. MSH5 has been reported as a pleiotropic susceptibility locus for several cancers and was identified as a novel candidate gene warranting additional follow-up as a prospective PRAD risk locus [11]. However, KLHL17, KIAA1529, LIME1, and YJEFN3 have not been fully elucidated in PRAD or other cancers. Their function in cancers, especially PRAD, warrants further exploration.

Identification and validation of the PRAD immune subtypes
A total of 13426 immunogenic genes were obtained from the MSigDB c7 datasets, 23 of which were predictive of survival outcomes through Lasso regression. The PAM algorithm accordingly identified the optimal number of clusters as three based on the training cohort (Fig. 2A). The cumulative curve and delta area of clustering were displayed in Fig. S2A and B. Principal component analysis showed the distribution of TCGA-PRAD individuals in each cluster (Fig. S2C), and a heatmap revealed the differential expression of partial immunogenic genes across the three clusters (Fig. S2D). Importantly, the PRAD immune subtype 1 (PIS1) consistently had better survival outcomes compared to PIS2 and PIS3 in both the training cohort (P < 0.0001) and the validation cohort (P = 0.041) (Fig. 2B and S2C).

Clinical, mutational and immunological features of the PRAD immune subtypes
Clinical features of the PRAD immune subtypes were assessed.
The predicted response to immunotherapy of the subtypes indicated that PIS2 and PIS3 were more likely to respond to anti-PD-L1 treatment (Fig. 2D). PIS3 had a higher frequency of biochemical recurrence (Fig. 2E), pathological N1 stage, and higher pathological T stage (Fig. S2E, F). Moreover, patients in PIS3 were also more likely to have received radiation therapy (Fig. S2G). This evidence implies that PIS2 and PIS3 are associated with more aggressive clinical features and are more suitable for immunotherapy. The correlation of existing PRAD biomarkers with the PRAD immune subtypes was evaluated. The expression of HOXC6, whose higher expression indicated shorter survival and a higher recurrence rate of PRAD [12], was significantly higher in PIS2 and PIS3 than in PIS1 (Fig. S3A). Nevertheless, PDK4 and STAT3, two classical genes whose low expression was associated with worse survival of PRAD [13,14], were significantly less expressed in PIS2 and PIS3 (Fig. S3B and C). Notably, our PRAD immune subtypes were also compared with previously published pan-cancer immune subtypes. Figure 2F demonstrated that the C3 subtype had a decreasing distribution, and C1 and C2 showed an increasing tendency, across PIS1 to PIS3. Interestingly, C3 was claimed to be a positive marker of prognosis but C1 and C2 were negatively associated with survival [15], which was consistent with our findings. In terms of mutation status, PIS3 and PIS2 presented more frequent CNV (either gain or loss) across the chromosomes (Figs. 2G and S3D). Similarly, it could be seen from the mutation landscape of the immune subtypes that PIS3 had more frequent mutation of the top 20 most mutated genes (Fig. S3E), and PIS2 and PIS3 had significantly higher tumor mutation counts (Fig. 2H). In addition, tumor mutation burden (TMB) was also found to be significantly heavier in PIS2 (P = 0.0053) and PIS3 (P < 0.0001) than in PIS1 (Fig. 2I). Our results also demonstrated that PIS2 and PIS3 had higher telomeric allelic imbalance (HRD-NtAI), large-scale transition (HRD-LST), and loss of heterozygosity (HRD-LOH) scores, and higher combined homologous recombination deficiency (HRD) scores (Fig. 2J, S3F-H). Moreover, the mRNA stemness index (mRNAsi) was also higher in PIS2 and PIS3 compared to that of PIS1 (Fig. 2K). As shown in Fig. S4A and D, the tendency of stromal score, immune score, and tumor purity was variable across the subtypes in both the training and validation cohorts. In the training cohort, PIS3 had richer infiltration of M2 macrophages and memory B cells, but PIS1 had more infiltration of naïve B cells and resting memory CD4+ T cells (Fig. S4B, C). Consistently, PIS3 still had a higher degree of memory B cell infiltration compared to PIS1 and PIS2 in the validation cohort (Fig. S4E, F). Hence, PIS3 may perform better at presenting tumor antigens during the immune response. The anticancer immune activity of the three immune subtypes was calculated with the TIP analysis. PIS1 performed better at recruiting CD4+ T cells, Th22 cells, and monocytes, whereas PIS2 and PIS3 proved to be better at recruiting B cells (Fig. S5A). These outcomes may explain the better survival of PIS1 and also indicate the suitability of PIS2 and PIS3 for receiving tumor vaccines. As for the immune modulators, a total of 37 ICP genes and 25 ICD genes were analyzed, which revealed that 31 ICP genes and 18 ICD genes were differentially expressed across the immune subtypes in the training cohort from the TCGA datasets (Fig. S5B, C).
Interestingly, fewer ICPs and ICDs were significantly differentially expressed among the three clusters in the GEO cohorts, and the PIS3 cluster showed markedly lower ICP and ICD expression (Fig. S5D, E). These findings indicated that the immunotyping showed a distinct expression pattern of ICPs and ICDs, and these modulators could be utilized as potential markers for treatment with mRNA vaccines.

Immune subtype-based landscape of PRAD
The gene expression value of each patient across the three PRAD immune subtypes was used to build the immune landscape of PRAD (Fig. 2L), with PIS1, PIS2 and PIS3 generally distributed in different branches of the tree. Principal component 1 (horizontal axis) was positively correlated with plasma cells and M2 macrophages, but negatively correlated with naïve B cells, resting DCs and M1 macrophages. Interestingly, principal component 2 (vertical axis) had a positive correlation with DCs and naïve B cells, but a negative correlation with M0 and M1 macrophages (Fig. 2M). The general distribution of PIS1 was observed to contrast with that of PIS3. Also, individuals within the same immune subtype of PIS1 and PIS3 showed opposing distributions. Therefore, PIS1 and PIS3 were further divided (Fig. 2N); it turned out that PIS1A had a generally higher enrichment score regarding activated DCs and memory B cells compared to PIS1B (Fig. S6A). Similarly, PIS3A had a higher enrichment score of activated DCs, activated B cells and memory B cells than PIS3B (Fig. S6B). Therefore, tumor antigens may be more effective in PIS1A and PIS3A compared to PIS1B and PIS3B, respectively. Besides, individuals with extreme distribution in the immune landscape (Fig. 2O) were taken into further survival analysis. Group N1 was associated with the worst survival and group N3 had the best survival outcomes (P = 0.0011) (Fig. 2P). The immune subtype-based landscape can potentially designate the precise mRNA vaccine therapeutics for patients with PRAD by identifying the immune components of individual patients and predicting survival.

Weighted immunogenic gene co-expression network of PRAD
WGCNA with a fixed soft threshold of nine was used to construct the immunogenic gene co-expression network of PRAD (Fig. S7A-C). Eventually, 9 co-expression modules were obtained (Fig. S7D, E). Distribution analysis showed that the PRAD immune subtypes were differentially distributed in most of the modules (Fig. S7F). The prognostic modules (95% CI 1.16-2.24) were presented in Fig. S8A. The biological functions of the prognostic modules, including B cell activation and regulation of adaptive immune response, were also displayed in Fig. S8B-D. A total of 62 hub genes in the prognostic modules were identified, and three of them (CDC20, ESPL1, MAPK8IP3) were eventually selected after multivariate Cox regression (Fig. S8E, F). Patients were divided into high-risk and low-risk groups. The KM curve demonstrated that the high-risk group had worse survival (P = 0.0011) and the area under the receiver operating characteristic curve (AUC) was 0.852, indicating good accuracy of the model (Fig. S8G, H). Thus, this risk model based on the immunogenic gene co-expression network may serve as a novel biomarker for predicting prognosis.

DEG-based risk model construction
A total of 391 DEGs across the PRAD immune subtypes were found and displayed (Fig. S9A, B). The prognostic value of these DEGs was calculated and 93 DEGs were prognostic. Lasso regression reduced the dimension of these DEGs and 21 genes were finally used to construct the risk model (Fig. S9C-F).
The risk of each patient was calculated based on the expression value of the 21 genes and their coefficients in the Lasso regression (Fig. S9G, H); a minimal sketch of this computation is given at the end of this section. Patients were categorized into high-risk or low-risk groups, and the high-risk group had worse survival, with an AUC of 0.892 (Fig. S10A, B). Consistently, Fig. S10C and D summarized that PIS2 and PIS3 had higher risk scores, while more PIS1 patients were distributed in the low-risk group, which in turn supports the accuracy of the PRAD immune subtypes in predicting PRAD prognosis.

Conclusions
In this study, KLHL17, CPT1B, IQGAP3, LIME1, YJEFN3, KIAA1529, MSH5 and CELSR3 were identified as potential tumor-specific antigens for PRAD mRNA vaccine development. PRAD patients in the PIS2 and PIS3 groups might be suitable candidates for vaccination. These findings provide new insights into selecting antigens and populations for future PRAD mRNA vaccine development and application.

Methods and availability of supporting data
Methods and materials used in our study are attached as Supplementary information. All data are freely available from the public databases, and any other necessary and reasonable information can be obtained from the corresponding author.

Additional file 1: Figure S1. The distribution of PIS1, PIS2, and PIS3 in the groups diagnosed with different pathologic T stages (E) or N stages (F) or treated with radiation therapy (G).
Additional file 3: Figure S3. Correlation of PRAD immune subtypes with existing biomarkers and homologous recombination deficiency score. Differential expression of HOXC6 (A), PDK4 (B), and STAT3 (C) across the PRAD immune subtypes. D. Copy number variation (CNV) counts across the PRAD immune subtypes. E. Mutation frequency of the top 20 mostly mutated genes in each PRAD immune subtype. F-H. Telomeric allelic imbalance score, large scale transition score, and loss of heterozygosity score for each PRAD immune subtype. * P < 0.01, ** P < 0.001, and *** P < 0.0001.
Additional file 4: Figure S4. Correlation between the PRAD immune subtypes and the infiltration of immune cells. The comparison of the stromal score, immune score, tumor purity, and immune cell infiltration across PIS1 to PIS3 in the TCGA-PRAD cohort (A-C) and validation cohort (D-F). * P < 0.01 and *** P < 0.0001.
Additional file 5: Figure S5. Immune status of the PRAD immune subtypes. A. Association of anticancer immune activity and PRAD immune subtypes. Immune-checkpoint genes and immunogenic cell death genes are differentially expressed across the PRAD immune subtypes in the training (B-C) and validation (D-E) cohorts. * P < 0.01, ** P < 0.001, and *** P < 0.0001.
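The risk-score step described in the DEG-based risk model section reduces, numerically, to a weighted sum of gene expression. The following is a minimal sketch of that computation, not the authors' actual pipeline: the expression matrix, coefficient values, and variable names are hypothetical placeholders, and the survival comparison is only indicated in comments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: log-normalized expression for the 21 model genes
# (rows = patients, columns = genes) and the 21 coefficients returned by
# the Lasso fit. Real values would come from the TCGA-PRAD matrix.
expr = rng.normal(size=(100, 21))
lasso_coefs = rng.normal(scale=0.3, size=21)

# Risk score of each patient = sum over genes of expression x coefficient.
risk_score = expr @ lasso_coefs

# Split at the median into high- and low-risk groups; in the paper these
# groups are then compared with Kaplan-Meier curves and an ROC/AUC.
high_risk = risk_score > np.median(risk_score)
print(f"{high_risk.sum()} of {len(risk_score)} patients in the high-risk group")
```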
Contribution of Visuospatial and Motion-Tracking to Invisible Motion

People experience an object's motion even when it is occluded. We investigate the processing of invisible motion in three experiments. Observers saw a moving circle passing behind an invisible, irregular hendecagonal polygon and had to respond as quickly as possible when the target had "just reappeared" from behind the occluder. Without explicit cues allowing the end of each of the eight hidden trajectories to be predicted (length ranging between 4.7 and 5 deg), we found, as expected if visuospatial attention was involved, anticipation errors, provided that information on pre-occluder motion was available. This indicates that the observers, rather than simply responding when they saw the target, tended to anticipate its reappearance (Experiment 1). The new finding is that, with a fixation mark indicating the center of the invisible trajectory, a linear relationship between the physical and judged occlusion duration is found, but not without it (Experiment 2) or with a fixation mark varying in position from trial to trial (Experiment 3). We interpret the role of central fixation in distinguishing trajectory differences smaller than 0.3 deg by suggesting that it reflects spatiotemporal computation and motion tracking. These two mechanisms allow visual imagery of the point symmetrical to that of disappearance, with respect to fixation, to form, and then allow the occluded moving target to be tracked up to this point.

INTRODUCTION
The visual experience of motion elicited by an object moving behind a stationary occluder has often attracted the attention of psychologists because of the paradoxical fact that the object persists in being "seen" as continuously moving behind the occluder through time, even though it is no longer projected onto the retina. One of the first demonstrations of occluded ("invisible") motion is given by Michotte (Michotte et al., 1964). In this sense, invisible motion is another example of a motion phenomenon that involves the subjective impression of an object following a path even in the absence of any physical stimulus, such as during apparent motion (Wertheimer, 1912). Within this framework are the studies that conceive invisible motion as equivalent to an amodal filling-in and as involving neural activation similar to that for visible motion (Michotte et al., 1964, 1991; Pessoa and Neumann, 1998; Horowitz et al., 2006; Komatsu, 2006). Empirical evidence comes from the finding that distractors moving over the occluder interfere with invisible motion (Lyon and Waag, 1995). At the neurophysiological level, Barborica and Ferrera (2003) have provided direct evidence of the existence of velocity-sensitive neurons in the frontal eye fields that fire during periods of occlusion. A different and widely credited model for processing occluded motion, investigated by DeLucia and Liddell (1998) and expanded upon by Makin and Poliakoff (2011), is the tracking hypothesis. They claim that the position of a hidden moving object is "extrapolated" by tracking the position of the target through the shift of the spotlight of visuospatial attention, which is guided by the motion pursuit system. Furthermore, they posit that, when the target disappears, visible velocity information stored in short-term velocity memory guides pursuit eye movements across the temporal intervals during which the target is occluded (Bennett and Barnes, 2006; Makin and Chauhan, 2014).
Indeed, invisible motion is affected by the same factors that affect perceived visible speed before occlusion, such as changes in the target's contrast and size (Battaglini et al., 2013), prior adaptation (Gilden et al., 1995; Battaglini et al., 2015), and previously viewed velocity (Makin et al., 2008). In Makin and Poliakoff's model, it is irrelevant whether the eyes follow the hidden moving object or not; the model thus absorbs the evidence that premotor pursuit commands do not need pursuit execution to be active (Rizzolatti et al., 1994; Barnes et al., 1997; Eimer et al., 2007). In its complete account, the model posits that "velocity store and premotor modules guide tracking of occluded targets during motion extrapolation, even if fixation is maintained" (Makin and Poliakoff, 2011). From this account, visuospatial attention seems to rely exclusively on the memory of visible motion. However, the results of Lyon and Waag (1995) and Barborica and Ferrera (2003) in particular suggest that motion information acquired during the occluded trajectory may also be used to judge target reappearance. If this were the case, then the imagery of an occluded target in motion could guide pursuit eye movements across the temporal intervals during which the target is occluded (Lu and Sperling, 1990; Sears and Pylyshyn, 2000; Shioiri et al., 2000; Huber and Krist, 2004; de'Sperati and Deubel, 2006; Jonikaitis et al., 2009). The internal model of the moving target can be tracked smoothly, even though the target is not physically present, allowing the target position to be updated very precisely at every (very close) local image point along the occluded trajectory (Shioiri et al., 2000). Shioiri et al. (2000) indeed showed that observers judge the apparent location of a target in invisible motion relative to an imaginary cue with high precision, suggesting that the target motion behind the occluder can be tracked and that any position of the target along the occluded trajectory can be precisely judged, provided that this point is made salient by visual imagery. Spatiotemporal computation is needed to form an internal representation of a moving object. Thus, rather than using remembered speed to track one dimension of speed (location) to judge the other (time), motion tracking uses remembered speed to track the two dimensions combined (motion) and to infer time (Cavanagh, 1992; Verstraten et al., 2000; Shioiri et al., 2002). Rather than exploiting information obtained by spatial filtering, motion tracking exploits information provided by spatiotemporal filters, i.e., filters devoted to the spatiotemporal computation underlying the coding of speed by the motion system (see Burr and Thompson, 2011; Mather et al., 2012, for a review). Doherty et al. (2005) showed that when pre-occluder motion generated expectations concerning the where and when of reappearance, reaction times to reappearances were shortened, especially when spatial and temporal expectations combined. These differences may reflect a difference with respect to the way covert attention is deployed during occlusion: attention directed to space and time combined (motion) may be more efficient than visuospatial attention directed to space alone. To assess the role of motion tracking we need to demonstrate that the time of arrival is judged on the basis of space and time combined, rather than on the computation of a separate motion dimension, either space or time.
To this end, we made the occluder invisible and its shape unpredictable (as Figure 1 shows, it was an irregular hendecagonal polygon with bilateral symmetry in all directions), and abolished the reappearance cue that is typically used in experiments on motion extrapolation. In these conditions, spatiotemporal computation was precluded and observers were forced to respond either when they actually saw the target reappear or when they predicted its reappearance by "learning" the average trajectory length (spatial cue) or the average duration of occlusion (time cue). However, by placing a spatial cue centered on the invisible occluder we created the conditions for spatiotemporal computation. Indeed, occlusion duration can be combined with trajectory length (from disappearance to the cue centered on the occluder) to judge precisely when the target reaches the central cue. Assuming the lengths of the trajectory before and after the central cue are equal, reappearance can be "visualized" by imagery to allow spatiotemporal computation and motion pursuit from the central cue to reappearance. If the fixation mark is not central, motion tracking would never allow reappearance to be judged precisely. The same outcome is expected if the fixation mark is absent. To establish the role of the spatiotemporal computation underlying motion tracking and evaluate its precision, we need evidence that anticipation errors also occur, to guarantee that reappearance is anticipated.

FIGURE 1 | Illustration of the trial. A moving circle traveled through an invisible occluder (the black line is shown in the figure only for illustrative purposes) with an irregular polygon shape. The target (circle) started from eight different places at one of two different distances from the occluder. The participants had to press a response button as soon as the target reappeared. The RT was the interval between the key press and when the leading edge of the target reached the edge of the invisible occluder.

Most importantly, we need evidence of a linear relationship between the estimated time to reappearance (TTR estimated), calculated from the moment in which the target is in the center of the invisible occluder to the button press, and the actual duration of the half-trajectory length (TTR physical). To sum up, predictions depend on whether stimulus conditions allow motion tracking or not: (a) If the visible speed, occluder shape (irregular and invisible), and reappearance point are unknown, then observers cannot predict (anticipate) the target reappearance behind the occluder and are forced to respond when they actually see the target. We predict a linear relationship between TTR physical and TTR estimated, with no anticipation errors. (b) If the visible speed is known but not the occluder shape (irregular and invisible), and there is no reappearance cue, and the central cue is either absent or not central, then the exact reappearance point is unknowable. However, reappearance may be predicted based on an inferred, imprecise occluder shape, using the average duration of the trajectories as a cue for predicting reappearance. In this case, anticipation errors may occur but TTR physical and TTR estimated are not positively related, because the average trajectory length differs from individual trajectory lengths. Note that if an observer uses an average strategy for judging the duration of occlusion, we should obtain a flat slope when plotting 2 × TTR estimated against 2 × TTR physical.
However, since we considered on the y axis the duration estimated from the center of occlusion (see the Analysis section), we also removed half of the entire physical duration on the x axis, which obviously differs according to the different trajectory lengths: smaller for a short trajectory and larger for a long trajectory. This way, when plotting TTR estimated against TTR physical, we should obtain a negative slope when people predict target reappearance using an average value of the occlusion lengths. Moreover, to confirm that observers estimate an average duration of occlusion from the different trajectory lengths, a linear relationship between the RT (TTR estimated − TTR physical) and the TTR physical with a negative slope is also expected. (c) If the visible speed is known but not the occluder shape (irregular and invisible) and there is no reappearance cue, but there is a visible cue centered on the occluder, then this may allow a spatiotemporal computation and the formation of an internal representation of the occluded moving target, so that it can be "tracked" during its trajectory from disappearance to the central cue and from there to reappearance, "visualized" as symmetrical to the disappearance with respect to fixation. In addition to anticipation errors, a linear relationship between TTR physical and TTR estimated is expected. Thus, the crucial finding to infer that motion tracking has occurred, based on spatiotemporal computation, is the linear relationship between TTR estimated and the physical duration.

EXPERIMENT 1
Experiment 1 aims to disentangle outcome (a) from outcomes (b) and (c). Whereas pre-occluder motion allows participants to anticipate the target reappearance, this is impossible without pre-occluder motion, and observers can only respond when they see the target. That is, in this second baseline condition we do not predict anticipation errors without pre-occluder motion, whereas TTR estimated should depend on trajectory length.

Participants
Seven students from the University of Padova (4 female, 3 male; age 19-22 years) participated voluntarily in Experiment 1. The participants remained unaware of the true aims of the experiment until they completed the task. All of the participants gave written informed consent in accordance with the Declaration of Helsinki.

Stimuli, Apparatus, and Procedure
The participants were placed in a dark room, seated 57 cm away from the display screen. The viewing was monocular, and both eyes were tested. Stimuli were generated with Matlab Psychtoolbox (Brainard, 1997; Pelli, 1997) and displayed on a 19-inch Asus monitor with a refresh rate of 60 Hz. The screen resolution was 1920 × 1080 pixels. Each pixel subtended ∼1.5 arcmin. The luminance of the background was 0.7 cd/m². The target was a small circle, 0.5 degree of visual angle (deg) in diameter, whose motion remained invisible when the disk passed behind an invisible irregular hendecagonal polygon. A fixation cross 0.3 deg long and 0.1 deg wide (60 cd/m²) was placed in the center of the occluder. Both had a luminance (as measured by a Minolta LS−100 photometer) of 90 cd/m². In one block, the target initiated a linear trajectory, after a randomly chosen interval of 0-2000 ms from an acoustic cue, starting either 7.5 or 10 deg from the center of the screen and terminating 4 deg after reappearance. In the other block, the visible pre-occluder trajectory was removed and the target motion started from the center of the occluder (the target was invisible behind the occluder).
In this block, the observers knew where but not when the hidden trajectory started. The target speed (either 3 or 6 deg/sec) was randomly selected within each block. The direction was randomly chosen within each block. In the condition with pre-occluder motion available, the trajectory could begin from either side of the screen, from one of eight specified directions. Each block consisted of 64 trials: 2 repetitions of each direction, speed and starting position (7.5 or 10 deg). In all of the blocks, the participants were required to fixate on the central cross. A chin-rest was used to limit head movement. The participants' task was to respond as quickly as possible when the target "just reappeared."

Analysis
The physical time to reappearance (TTR physical) for each of the four trajectory lengths of 4.7, 4.8, 4.9, and 5 deg corresponded to 783, 800, 816, and 833 ms with a low-speed target and 391, 400, 408, and 417 ms with high speed, respectively [TTR physical = (invisible trajectory length/2)/speed of the target; TTR physical was calculated from the center of the occluder because in one block of Experiment 1 the target started from the center]. We considered three dependent variables: (a) estimated TTR (TTR estimated), which corresponded to the response time measured from the center of the occluder to the key press: TTR estimated = TTR physical + RT. (b) RT, which equals the estimated entire duration of occlusion minus the entire physical duration of occlusion, corresponding to (TTR estimated + TTR physical) − 2 × TTR physical, i.e., the estimated half duration (which includes the entire RT plus half of the physical duration), plus the physical half duration, minus the entire physical duration of occlusion; the result is equal to TTR estimated − TTR physical. (c) Anticipation errors (negative RTs). Individual regression lines were fitted to evaluate the relationship between TTR physical and TTR estimated, and between the RT and the TTR physical. We used either t-tests or ANOVA to compare the individual slopes obtained in the condition with a fixed central cue with those obtained in the control condition of each experiment.

Results
The results are shown in Figures 2, 3. In the pre-occluder motion condition, there were more individual anticipatory errors, which were inversely related in a linear way to individual mean RTs (Figure 2). Figure 2 also shows that the individual mean RTs are shorter with than without pre-occluder motion, indicating that short RTs can be another measure of the tendency of the participants to anticipate target reappearance. Most importantly, in both conditions, TTR estimated was directly related to TTR physical, indicating an isomorphic relationship between these two variables, a result implying that trajectory length/duration was judged with high precision (Figure 3). One-sample t-tests revealed that the anticipation errors (negative RTs) differed from 0 (no errors) in the condition in which the pre-occluder trajectory was present [t(6) = 3.151; p < 0.02] but not when it was absent [t(6) = 1.14; p < 0.2]. Moreover, the regression lines fitted to the anticipation errors obtained as a function of RTs revealed a significant negative slope in the pre-occluder motion condition (slope = −0.75, R² = 0.4) but not when the pre-occluder motion was absent (slope = 0.05, R² = 0.03).
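To make the dependent-variable definitions in the Analysis subsection concrete, here is a minimal numerical sketch; the per-trial arrays, noise values, and variable names are hypothetical illustrations, not the study's data.

```python
import numpy as np
from scipy import stats

# Four hidden trajectory lengths (deg) and the two target speeds (deg/s).
lengths = np.array([4.7, 4.8, 4.9, 5.0])
speeds = np.array([3.0, 6.0])

# TTR_physical = (invisible trajectory length / 2) / speed, in ms.
ttr_physical = (lengths[:, None] / 2.0) / speeds[None, :] * 1000.0
print(ttr_physical.round())  # ~[[783, 392], [800, 400], [817, 408], [833, 417]]

# Hypothetical per-trial data: each trial's TTR_physical and the measured
# key-press time relative to true reappearance (RT; negative = anticipation).
rng = np.random.default_rng(0)
trial_phys = rng.choice(ttr_physical.ravel(), size=64)
trial_rt = rng.normal(-20.0, 60.0, size=64)

ttr_estimated = trial_phys + trial_rt    # TTR_estimated = TTR_physical + RT
anticipation = np.mean(trial_rt < 0)     # proportion of negative RTs (errors)

# The two regressions fitted per observer in the paper.
fit_ttr = stats.linregress(trial_phys, ttr_estimated)
fit_rt = stats.linregress(trial_phys, trial_rt)
print(anticipation, fit_ttr.slope, fit_ttr.rvalue**2, fit_rt.slope)
```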
The average TTR data showed a linear relationship between TTR estimated and TTR physical, both in the condition with pre-occluder motion (slope = 0.99 and R² = 0.71) and in the baseline condition, without pre-occluder motion (slope = 1.19, R² = 0.90). A t-test executed to evaluate the difference between the individual slopes obtained with pre-occluder motion either present or absent was not significant [t(6) = 1.5; p = 0.17]. The results demonstrate that without pre-occluder motion, the observers responded when they saw the target. With pre-occluder motion present, the observers anticipated the target reappearance, and the evidence that TTR estimated was isomorphic to TTR physical indicated that they did so by a very precise spatiotemporal computation.

FIGURE 2 | The squares represent the individual proportions of errors (negative RTs) as a function of mean RT (n = 7). The filled squares refer to the "pre-occluder motion" condition, and the empty squares represent the "no pre-occluder motion" condition. The linear regression lines are fitted to the "no pre-occluder motion" data (dotted line) and to the "pre-occluder motion" data (continuous lines). In the "pre-occluder motion" condition, the relationship between TTR estimated and TTR physical was linear, as it was in the baseline condition, in which the participants were forced to respond when they saw the target reappearing. This reflects the distinction between individual trajectory lengths rather than a response to average length.

EXPERIMENT 2
We ran a second experiment to confirm the hypothesis that, whereas anticipation errors may result from a computation of average trajectory length, the linear relationship between physical and judged trajectory duration does not. Shioiri et al. (2000) have shown that participants can precisely judge the apparent location of a target in invisible motion relative to an imaginary cue. We asked whether the participants could exploit this ability to judge target reappearance. They could "track" the target's motion from disappearance to when it reached the position behind the occluder marked by a visible cue (the central fixation) and then, by symmetry, from there to when it reached an imaginary cue signaling the point of reappearance, positioned symmetrically to the point of disappearance with respect to the central fixation (Figure 1). To test this possibility, in Experiment 2 we compared the condition in which the cue indicating the center of the trajectory was available, thus allowing spatiotemporal computation, with the condition in which it was absent. In the first case, participants could "follow" the moving target behind the occluder for the first part of its trajectory up to when it reached fixation; for the second part, its length was isomorphic to the first, so visual imagery of the reappearance point was then available by motion tracking. Conversely, when there was no cue and the trajectory length was not constant, the participants were either obliged to respond when they saw the target reappearing or to learn an average trajectory length or occlusion duration. Two groups were tested: the first was instructed to maintain fixation at the central cue, while the second could follow the moving target with their eyes. All of the participants gave written informed consent in accordance with the Declaration of Helsinki.
Stimuli, Apparatus, and Procedure
This experiment was a replication of Experiment 1 (in terms of stimuli, apparatus, and procedure), with the difference being that pre-occluder motion was present in both conditions. However, in one condition, we removed the spatial cue (fixation cross) that indicated the center of the invisible trajectory. To narrow the scope of this experiment, the starting position of the target was always 7.5 deg from the center of the occluder and only one eye (the dominant one) was tested. The 14 participants were divided into two subgroups of seven subjects each: one subgroup performed the task while fixating on the center of the hidden trajectory; the other did not have any instruction to fixate. In the first group, to ensure fixation without a central mark, a circle (1.5 deg; 120 cd/m²) was placed over the blind spot (the participants were instructed that for correct fixation to occur, the circle should remain invisible); in the other condition, the central fixation was present. Although the blind spot is an imperfect method for detecting small saccades, it helps observers follow the instruction to maintain central fixation rather than follow the hidden moving target with their eyes.

Results
The results of Experiment 2 are shown in Figures 4, 5. Compared to when the central cue was absent, its presence produced a larger number of anticipatory errors, which were inversely related to RTs (Figure 4). Moreover, TTR estimated was only isomorphic to TTR physical with the central cue (Figure 5). In addition, RTs and TTR physical were not linearly related with fixation present (suggesting that TTR estimated, but not RT, depends on the duration of occlusion: slope = 0.2, R² = 0.09), and there was a weak linear (negative) relationship when the fixation was absent (slope = −0.75, R² = 0.24). The mixed-design ANOVA on the number of errors, having group and central fixation (present vs. absent) as factors, revealed that the effect of group was not significant [F(1,12) = 0.42, p = 0.53, η²p = 0.34], while the effect of central fixation was significant [F(1,12) = 4.75, p = 0.049, η²p = 0.28], indicating that the number of errors (Figure 4) was higher in the central cue condition [t(13) = 2.23, p = 0.04]. The regression line fitted to the errors plotted as a function of the RTs indicated a steeper slope with the central cue present (slope = −1.77, R² = 0.79) than absent (slope = −0.88, R² = 0.34). Most importantly, the relationship between the physical and average estimated TTR (Figure 5) was linearly positive when the central cue was present (slope = 1.68, R² = 0.89) but not when it was absent (slope = 0.56, R² = 0.08). The ANOVA executed to evaluate the difference between the individual slopes in the two cue conditions (present vs. absent) revealed a significant effect of group (p = 0.01) and condition [F(1,12) = 4.74, p = 0.049, η²p = 0.28], indicating higher slopes with the cue present and higher slopes in the group that did not receive instructions to fixate.

FIGURE 5 | The mean TTR estimated data averaged across speed are plotted as a function of TTR physical, separately for the "central cue" (filled symbols) and "no central cue" conditions (empty symbols). The regression lines are fitted to the "central cue" (continuous lines) and "no central cue" conditions (dotted line).
Only in the condition with a central cue did the regression line (continuous line) reflect a linear relationship between TTR estimated and TTR physical, indicating a temporal distinction between the individual trajectory lengths rather than a response to average length.

This suggests that with and without a central mark, the judgment of target reappearance may be based on different information. Under the assumption that a linear positive relationship between TTR physical and TTR estimated reflects motion tracking, mediated by spatiotemporal computation during occlusion, this information is only available with the central cue. In the absence of a central cue, the anticipation of reappearance may rely on a "learned" average trajectory length/duration. However, this would produce negative slopes when TTR estimated is plotted as a function of TTR physical and when RTs are plotted as a function of TTR physical. No strong or medium correlation was found; therefore, it is unlikely that observers used an average duration of occlusion as a cue for predicting target reappearance when the fixation cross was not present. Furthermore, the fixation strategy does not qualitatively affect the effect due to the presence of the central cross, although the individual slopes were steeper in the subgroup in which fixation was not required. This suggests that the information coming from the oculomotor system can improve accuracy but does not affect the isomorphic relationship between TTR estimated and TTR physical.

EXPERIMENT 3
In the last experiment, we further sought to confirm the role of spatiotemporal computation in judging reappearance. This was done, as in Experiment 2, by evaluating the role of the central visible cue to "visualize" the point of reappearance, positioned symmetrically to the point of disappearance with respect to the central cue (Figure 3). To this end, we replicated the conditions of Experiment 2 (same stimulus, apparatus, and procedure) with the central cue available and compared it with a new condition, in which we randomly varied the position of the central cue from trial to trial, either to the left or to the right with respect to the center. In this second case, the lengths of the two half-trajectories were not equal in most trials, so the central cue could not be used to correctly infer the target reappearance. Therefore, the participants could either respond when they saw the target reappear or "learn" the average occlusion duration of the invisible trajectory by forming a visual representation of the occluder shape. The two conditions were presented in separate blocks.

Participants
Twelve students (6 women, 6 men; age 21-33 years) from the University of Padova participated in this experiment. All of the participants gave written informed consent, in accordance with the Declaration of Helsinki.

Stimuli, Apparatus, and Procedure
In this experiment, we replicated the stimuli, apparatus, and procedure used in Experiment 2 with the following differences: in one of the two blocks, presented in counterbalanced order, the visible cue was positioned centrally or either behind or ahead with respect to the center of the occluder (3 levels), at a distance of 0.3 deg from it (variable fixation condition), whereas in the other block, the central cue was fixed (fixed condition) at the center of the invisible trajectory.
The participants performed 96 trials in each block (in the first one, there were 2 repetitions × 8 target directions × 3 fixation conditions × 2 speeds; in the second block, there were 4 repetitions × 8 target directions × 1 fixation condition × 2 speeds). The viewing was binocular, and the participants were requested to fixate on the visible cue.

Results
The results are shown in Figures 6, 7. There were more anticipatory errors in the fixed fixation condition. With a central cue (both in the fixed and variable conditions), there was a linear, negative relationship between the errors and RTs (Figure 6). Moreover, a linear positive relation between TTR estimated and TTR physical was only found in the fixed condition (Figure 7). The anticipation errors were analyzed with a repeated-measures ANOVA with the condition (variable: behind, ahead, and central vs. fixed cue) and speed of the target (3 vs. 6 deg/sec) as factors. The results reveal that the anticipatory errors were affected by speed [F(1,11) = 5.79, p = 0.035, η²p = 0.35] and condition [F(1.097,12.066) = 16.62, p = 0.001, η²p = 0.6]. Post-hoc t-tests with Bonferroni correction revealed that the number of errors was greater with a fixed than with a variable position of the central cue (fixed vs. behind: p = 0.01; fixed vs. central: p = 0.008; fixed vs. ahead: p = 0.012). However, the RTs and errors were linearly related in both conditions with the central cue (fixed condition: slope = −1.69, R² = 0.69; variable condition: slope = −1.44, R² = 0.63). Most importantly, the analysis of TTRs revealed that the relationship between TTR physical and average TTR estimated was linearly positive in the fixed condition (slope = 1.63, R² = 0.8) but not in the variable one (ahead: slope = 0.75, R² = 0.12; central: slope = 0.15, R² = 0.005; behind: slope = 0.67, R² = 0.15). The ANOVA executed to evaluate the differences between the individual slopes in the four different cue conditions (fixed central, variable ahead, variable central, and variable behind) revealed a significant effect of condition [F(3,33) = 4.9, p = 0.006, η²p = 0.31]. Post-hoc Bonferroni-corrected t-tests indicated higher slopes with the fixed central cue than with the cue having a variable position (ahead, p = 0.022; central, p = 0.049; behind, p = 0.047). The flat slopes obtained when the cue was in variable positions suggest that participants did not use an average value of occlusion duration to predict target reappearance.

FIGURE 6 | Individual proportion of errors (negative RTs) as a function of mean RT, fitted by regression lines. The squares plus continuous line refer to the "fixed condition." The three variable-fixation conditions are represented by triangles plus broken lines (fixation ahead of the center), diamonds plus dotted lines (fixation behind the center), and circles plus broken dotted lines (fixation central), respectively.

FIGURE 7 | Regression lines fitted to the mean TTR estimated data of Experiment 3, averaged across speed. Fixed fixation data: squares plus continuous line; variable fixation data: triangles plus broken lines (fixation ahead), diamonds plus dotted lines (fixation behind), and circles plus broken dotted lines (fixation central). In the fixed condition, the relationship between TTR estimated and TTR physical was linear, indicating a temporal distinction between individual trajectory lengths rather than a response to average length.
Moreover, we tested whether RTs were inversely associated with TTR physical when the cue was not fixed, and we found only very weak correlations for each condition (ahead: slope = −0.33, R² = 0.04; central: slope = −0.84, R² = 0.15; behind: slope = −0.25, R² = 0.01), confirming that it is unlikely that participants used an average value to estimate target reappearance. Note that, since fixation was available in both conditions, the effect of condition found in Experiment 3 cannot be accounted for by different fixation strategies in the two conditions (a possible confounding variable of Experiment 2, even though this confounding variable should have been eliminated or limited by allowing free eye movement in one subgroup).

DISCUSSION
We used a new paradigm to investigate invisible motion. We abolished any information on occluder size and shape, and removed any cue that could signal when and where the target would reappear. We asked whether the observers would anticipate reappearance and produce a time-to-reappearance (TTR estimated) isomorphic to the TTR physical, whose duration varied randomly from trial to trial. Anticipation errors were found in all conditions except when information on pre-occluder motion was not available, in the baseline condition of Experiment 1 (Figure 2). Regarding the relationship between TTR physical and TTR estimated, Experiment 1 shows that it was linear (positive) with pre-occluder motion as well as without, when the participants were forced to respond when they saw the target (Figure 3). Experiment 2 showed a linear positive relationship between these two variables when a cue indicating the middle of the hidden trajectory was present, but not when it was absent. In Experiment 3, the linear positive relationship between TTR physical and TTR estimated was only found when the position of the central cue was fixed, and not when it varied randomly within the block. These results support the spatiotemporal computation hypothesis: to judge trajectory length and anticipate reappearance, participants must first judge when the target reaches the position behind the occluder marked by the central fixation and then, by symmetry, when it reappears in an opposite symmetrical position relative to the point of disappearance. We suggest that spatiotemporal computation allows motion tracking and a very precise visual imagery of the point of reappearance. In sum: (I) When the observers do not have visible motion available, as in the baseline condition of Experiment 1, in which the target appears from behind an occluder without the observer knowing when its trajectory started, the participants respond when they actually see the target; the response is a true reaction time without anticipation errors and reflects a linear relationship between TTR physical and TTR estimated. (II) When the fixation mark is either absent or not fixed in the center of the hidden trajectory, anticipation errors may occur but the relationship between TTR physical and TTR estimated is not linear. This result suggests that although the point of reappearance is unknown, observers predict, although not precisely, the target reappearance. One hypothesis is that observers can implement a strategy in which they estimate an average duration of occlusion from the different trajectory lengths. However, the absence of negative slopes in the linear regressions obtained when plotting TTR estimated against TTR physical and the RT against the TTR physical did not support this hypothesis.
Furthermore, subjective reports of the occluder shape also do not support the previous hypothesis. Indeed, about half of the observers reported that the occluder was a circle, and thus a figure with equal trajectory lengths, but the other half reported that the occluder was an ellipse or a square, figures with different trajectory lengths (however, none of them described the occluder as being a hendecagonal polygon). Another possible strategy is tracking the current spatial position of the target with the shift of visuospatial attention (Makin and Poliakoff, 2011). Indeed, attention has been shown to be independent of the strength of the stimulus (Doherty et al., 2005; Boynton, 2009), and its effects have been seen in the absence of visual stimulation (Kastner et al., 1999; Murray, 2008) and when directed to empty regions of space (Serences and Boynton, 2007). For simple attentive visuospatial tracking a central cue is not needed, and when it is present a saccade-like shift of attention may be favored (Cave and Bichot, 1999; Chastain, 1992a,b). (III) When there is a central fixation, this leads to many anticipation errors (negative RTs) and to a linear positive relationship between TTR physical and TTR estimated. Considering that the four values of trajectory length range between 4.7 and 5 deg and that the observers experience, for each speed, only 4 trials for each randomly presented trajectory, it is "impossible" to learn this difference so precisely as to justify the linearity found between the dependent and independent variables. Obviously, the anticipation of target reappearance here involves computational mechanisms at a lower level than spatial attention or memory. The crucial role of central fixation suggests that spatiotemporal computation behind the occluder occurs and that the output of spatiotemporal filtering mediates precise motion tracking along the hidden trajectory. Because occluded motion prevents sensory input from reaching the visual system, we posit that visual imagery of the moving stimulus must be formed to extract motion information behind the occluder, so that the stimulus can be tracked from disappearance to fixation and then from fixation to a position symmetrical to disappearance. Based on motion tracking, reappearance can be judged almost as precisely as if the target were visible (Shioiri et al., 2000). Indeed, imagery is not very different from weak sensory stimulation, as both produce perceptual effects and accumulate over time (Raymond, 2000; Pearson and Brascamp, 2008). Motion tracking of visual imagery during occluded motion shares some similarities with amodal filling-in (amodal completion) (Ferree and Rand, 1912; Casco and Morgan, 1984; Ramachandran and Gregory, 1991; Ramachandran, 1993; Grassi and Casco, 2010; DeStefani et al., 2011), by which neural activation spreads at the point of reappearance or, retrospectively, from the imagined reappearance point (interpolation) (Hogendoorn et al., 2008). Both operations allow the use of a set of discrete spatial positions to form an internal model of the moving target. However, imagery-motion tracking is more likely to be mediated by feature-based attention, whereas filling-in does not necessarily involve attention (Komatsu, 2006). Therefore, our results unveil the role of motion tracking during occluded motion. Indeed, previous studies using stimuli involving motion tracking found results compatible with ours (Shioiri et al., 2000).
One similarity is the use of a set of discrete spatial positions to form an internal model of the moving target, which allows its motion to be tracked across intermediate spatial positions. In addition, motion tracking is linear, consistent with our results of an isomorphic relationship between TTR physical and TTR estimated. Moreover, motion tracking produces location judgments, as it does for continuous motion. Therefore, it may well account for anticipation of the reappearance of the moving target. In addition, motion tracking occurs at relatively long SOAs (Shioiri et al., 2000), and the duration of the invisible trajectory from disappearance to the position marked by central fixation is indeed ∼800 ms at low speed. Finally, Shioiri et al. (2000) showed that the critical factor for motion tracking is SOA and not speed; indeed, in the present work, we found similar results at low and high speed (see Figure 7). It is possible to argue that the location of reappearance could be used to predict the time of reappearance. Therefore, motion tracking, involving the visibility of objects to be tracked (Cavanagh, 1992), would not be strictly necessary. Time to reappearance from fixation could be predicted by simply waiting for the same duration as had already passed. However, our knowledge of how attention to moving objects works suggests spatiotemporal computation. Shioiri et al. (2002) showed that attention does not simply select a location for enhanced processing, but rather predicts the future location of the object of interest based on its velocity. Cavanagh (1992) showed that motion tracking provides accurate velocity judgments. Verstraten et al. (2000) showed that if temporal frequency is not too high (the temporal limit is 4-8 Hz), attentive tracking involves localization in both the spatial and the temporal domain, as motion tracking does. Moreover, there are at least two pieces of evidence supporting our claim that motion tracking (implicating spatiotemporal computation) is involved in the conditions with a central cue. One is in the psychoacoustic domain. Matthews and Grondin (2012) showed that the Weber fraction for duration discrimination of paired sounds is around 4% when the baseline stimulus is presented for 1 second (i.e., 40 ms). In our paradigm, the minimum difference in duration of the entire invisible trajectory is 40 ms for a low-speed target (3 deg/s) and 20 ms for a high-speed target (6 deg/s). It is then highly unlikely that participants used a timing strategy rather than motion tracking, and this is even more unlikely given the higher temporal resolution of the auditory system with respect to the motion system. More direct is the evidence that, with visible cues of reappearance present and a subjective task, the judgements of reappearance are imprecise. For example, in DeLucia and Liddell (1998) the observers had to judge whether a target object reappeared in time or not. When the reappearance error was 0, the accuracies were around 70%. In conclusion, the evidence presented in the present study is consistent with an active process underlying occluded motion that produces an internal spatiotemporal model of the moving target, mediated by high-resolution visual mechanisms (Koenig-Robert and VanRullen, 2011). We do not only emphasize attentional tracking, time processing, or visuospatial updates of the attention spotlight (Tresilian, 1995, 1999; Makin and Poliakoff, 2011; Makin and Chauhan, 2014).
Instead, we want to highlight that the elaboration of occluded motion is an active process that, in appropriate conditions, is coupled with visual imagery. In our view, motion tracking does not substitute for but is additive to visuospatial tracking, in the sense that it only works in appropriate conditions. However, given its relevance, it should be incorporated into models of motion extrapolation. Visuospatial tracking and motion tracking are indeed complementary processes. Indeed, visuospatial attention selects spatial locations, whereas motion tracking may select the imagery of a visual dimension like direction of motion, speed, or motion path. Without denying the importance of shifting attention between different locations of the occluded target to track the target's location along its trajectory, the operation of tracking particular visual features of the invisible motion (speed, direction, or spatiotemporal frequency combined) may be the prerequisite to judge reappearance with high precision, not only experimentally but also in daily life.
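As a numerical sanity check on the slope logic laid out in predictions (a)-(c) earlier in this paper, the toy computation below contrasts an averaging strategy with motion tracking; all values are illustrative placeholders, not fits to the data.

```python
import numpy as np
from scipy import stats

lengths = np.array([4.7, 4.8, 4.9, 5.0])   # deg, hidden trajectory lengths
speed = 3.0                                # deg/s, low-speed target
full_dur = lengths / speed * 1000.0        # ms, entire occlusion duration
ttr_physical = full_dur / 2.0              # ms, from occluder center to edge

# Averaging strategy: respond after the mean occlusion duration, whatever
# the trial's actual length; measured from the center, the estimate equals
# the average minus the physical first half, so the slope is negative.
ttr_est_avg = full_dur.mean() - ttr_physical
print(stats.linregress(ttr_physical, ttr_est_avg).slope)    # -1.0

# Motion tracking: the estimate follows each trajectory's own duration
# (plus noise), so the slope approaches +1.
rng = np.random.default_rng(0)
ttr_est_track = ttr_physical + rng.normal(0.0, 5.0, size=lengths.size)
print(stats.linregress(ttr_physical, ttr_est_track).slope)  # ~1.0
```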
Risk Factors for Medication-Related Osteonecrosis of the Jaw—A Binomial Analysis of Data of Cancer Patients from Craiova and Constanta Treated with Zoledronic Acid

MRONJ (Medication-Related Osteonecrosis of the Jaw) is a condition observed in a subset of cancer patients who have undergone treatment with zoledronic acid in order to either prevent or treat bone metastases. The primary aim of this research was to establish the importance of risk factors in the development of medication-related osteonecrosis of the jaw in cancer patients receiving zoledronic acid therapy for bone metastases. The present study is an observational retrospective investigation conducted at two university centers, namely, Craiova and Constanța, and included cancer patients treated with zoledronic acid. The medical records of the patients were obtained over a four-year timeframe spanning from June 2018 to June 2022. The data analysis was carried out between January 2021 and October 2022. Patients were treated for cancer, bone metastases, and MRONJ according to the international guidelines. The research investigated a cohort of 174 cancer patients (109 females and 65 males) aged between 22 and 84 years (with a mean age of 64.65 ± 10.72 years) seeking treatment at oncology clinics situated in Craiova and Constanța. The study conducted a binomial logistic regression to analyze ten predictor variables, namely, gender, age, smoking status, treatment duration, chemotherapy, radiotherapy, endocrine therapy, presence of diabetes mellitus (DM), obesity, and hypertension (HT). The results of the analysis revealed that only five of the ten predictor variables were statistically significant for MRONJ occurrence: duration of treatment (p < 0.005), chemotherapy (p = 0.007), and hypertension (p = 0.002) as risk factors, and endocrine therapy (p = 0.001) and obesity (p = 0.024) as protective factors.

Introduction
The condition known as BRONJ (Bisphosphonate-Related Osteonecrosis of the Jaw) [1,2], MRONJ (Medication-Related Osteonecrosis of the Jaw) [3,4], or DIONJ (Drug-Induced Osteonecrosis of the Jaw) [5] is characterized by debilitating symptoms, with profound effects on patients' quality of life [6,7]. MRONJ occurs in cancer patients at risk of bone metastases who have undergone treatment with antiresorptive medications such as bisphosphonates or denosumab, as well as anti-angiogenic agents, monoclonal antibodies, or other drugs [3,4]. MRONJ is defined by the AAOMS (American Association of Oral and Maxillofacial Surgeons) as a necrotic bone exposure in the maxillofacial region that persists for at least eight weeks in a patient who has been subjected to antiresorptive or antiangiogenic drugs and who has no history of metastases or radiation therapy in the cervical-facial region [3,4]. Bisphosphonates (BF), drugs with antiresorptive properties, significantly reduce fracture risk in patients with benign bone disease and have demonstrated effectiveness in addressing skeletal-related events (SREs) in individuals with bone metastases (BM) [8]. Bisphosphonates are stable synthetic derivatives of pyrophosphate, a chemical compound characterized by the presence of two phosphate groups linked to a carbon atom by esterification. Similar to pyrophosphate, their natural analog, bisphosphonates have a very high affinity for bone minerals since they establish bonds with hydroxyapatite crystals [9,10].
By inhibiting the decomposition of hydroxyapatite, bisphosphonates can effectively suppress bone resorption, which makes them effective in treating bone metastasis in cancers [11-13]. Bisphosphonates act on osteoclasts by inhibiting the enzyme farnesyl pyrophosphate synthase, reducing differentiation and cellular activity, and increasing cellular apoptosis depending on drug concentration and half-life [14]. Zoledronic acid is a bisphosphonate administered intravenously (IV) to cancer patients, usually at a dosage of 4 mg at 4-week intervals, for the purpose of preventing bone metastases [15,16]. It has a 100-1000 times greater potency than pamidronate [17]. The incidence of MRONJ in individuals diagnosed with cancer who received intravenous zoledronic acid for the prevention and control of bone metastases varies between 0 and 18%, with a cumulative risk below 5% [4]. This large variation is explained by the different durations of follow-up, between 1 and 10 years, in the various studies [4]. The risk of developing MRONJ in a cancer patient who received intravenous zoledronic acid is 2-10 times higher than in a patient who received a placebo [4]. The main risk factor for the occurrence of MRONJ with zoledronic acid is the treatment duration [4]. The risk of MRONJ in cancer patients who received intravenous zoledronic acid to prevent bone metastases was 0.5% after one year of administration, 1% after two years, and 1.3% after three years, being lower than that for denosumab [18]. However, in a review published in 2020, Limones et al. showed that the risk was higher, reaching 1.6% after one year, 2.1% after two years, and 2.3% after three years [19]. In a more recent review, the risk of MRONJ in cancer patients after taking zoledronic acid was even higher, ranging between 1.6% and 4% after two years of treatment and between 3.8% and 18% after more than two years of treatment [20]. In general, it has been observed that MRONJ appeared after a period of approximately two years of monthly administration of zoledronic acid, being correlated with administration at intervals of less than five weeks [21]. The main local factors that contribute to the occurrence of MRONJ are tooth extraction (in about 70% of cases) and mandibular localization (in 75%) [22]. According to the findings of a systematic literature review, the factors contributing to systemic risk for medication-related osteonecrosis of the jaw are age and gender [23-25], as well as cancer treatments (including chemotherapy, novel molecules, and corticotherapy) [24,26-31], comorbidities such as hypertension, anemia, ischemic heart disease, diabetes mellitus, dementia, and renal failure [26,32-34], and smoking [35]. The most frequently reported risk factors were chemotherapy, corticosteroid treatment, and smoking [26]. Healing from MRONJ takes longer in patients with diabetes and those treated with corticosteroids [33]. The occurrence of MRONJ has been observed with greater frequency in the elderly population as compared to younger individuals. Furthermore, this phenomenon has been reported to occur more frequently in patients receiving intravenous (IV) bisphosphonates as opposed to oral medications, particularly at higher dosages, especially when utilizing zoledronic acid. Additionally, a higher incidence of MRONJ has been noted in cancer patients undergoing chemotherapy and corticosteroid therapy [24].
The objective of the current study was to determine the systemic and local risk factors for MRONJ in a group of cancer patients treated with zoledronic acid for bone metastases and to establish the risk for MRONJ through a binomial regression analysis. Study Design The retrospective study used databases from two hospitals, the County Clinical Emergency Hospital of Craiova and the County Clinical Emergency Hospital of Constanta, and an ambulatory cancer treatment center, Oncolab Craiova, all in Romania. The database contains information on patients' demographic data (age, sex, residency, environment), cancer diagnosis and treatment, comorbidities, complications, inpatient and outpatient care in hospital, and ambulatory services. Data from the study were collected from March 2019 to December 2022. The study was approved by the Ethics Committee of the University of Medicine and Pharmacy of Craiova, no. 59/22.03.2019. Patients Patients included in the study were cancer patients treated with bisphosphonates approved in Romania for bone metastases from June 2018 to June 2022 in two geographical regions of Romania, South-West Oltenia and South-East Dobrogea. The inclusion criteria were the following: patients previously diagnosed with various types of neoplasms and treated with 4 mg IV zoledronic acid administered once a month. The exclusion criteria were patients younger than 20 years, patients with only oral bisphosphonate treatment, patients treated with radiotherapy in the maxillo-facial area, patients treated with bisphosphonates for osteoporosis, and patients with oral cancers. Data analysis was performed between January 2021 and October 2022, and the results are in compliance with the STROBE guidelines [36]. All participants in this study signed an informed consent form prior to their medical admission for treatment. Outcome The retrospective study compared risk factors encountered in cancer patients with MRONJ (the study group) to risk factors in cancer patients without MRONJ (the control group). The criteria used to define MRONJ, published by AAOMS in a position paper from 2022 [4], were as follows: 1. Current or previous treatment based on antiresorptive or antiangiogenic agents; 2. The presence of exposed bone or an intraoral or extraoral fistula in the maxillofacial region that has persisted for more than eight weeks; 3. No history of radiotherapy to the jaws or obvious metastatic disease of the jaws. Patients diagnosed with MRONJ were identified according to the above-mentioned criteria as established by the position papers of AAOMS in 2014 and 2022 [3,4] and to the diagnostic codes and data from medical charts pertaining to the surgical treatment of MRONJ performed in the two oral and maxillofacial surgery clinics from the two aforementioned geographical regions of Romania: the Oral and Maxillofacial Surgery Clinic of the University of Medicine and Pharmacy of Craiova, and the Oral and Maxillofacial Surgery Clinic of the "Ovidius" University of Constanta. The surgical procedures employed to manage MRONJ comprised bone curettage involving sequestrectomy or jaw resection, with the objective of achieving a clinically viable bone. The data collected for the enrolled patients covered the interval from the first administration of bisphosphonate to the end of the observation period or to the end of the study (June 2022), whichever came first. The sample size was computed using G*Power 3.1.9.7, Heinrich Heine University Düsseldorf, Germany. Several parameters were considered: a significance level α of 0.05, a power 1-β equal to 0.85, and a medium effect size value of 0.3. Consequently, the analysis led to a minimum enrolment of 160 patients for the study.
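Only the headline G*Power settings are reported here (α = 0.05, power = 0.85, effect size = 0.3), not the test family, so the calculation cannot be reproduced exactly. As a hedged illustration only, a minimal sketch in Python's statsmodels, assuming a chi-square goodness-of-fit framing with a binary outcome, looks as follows:

```python
# A minimal power-analysis sketch (statsmodels), approximating the G*Power
# calculation described above. The test family is an assumption: a chi-square
# goodness-of-fit test with Cohen's w = 0.3 and a binary outcome.
from statsmodels.stats.power import GofChisquarePower

analysis = GofChisquarePower()
n_required = analysis.solve_power(
    effect_size=0.3,  # medium effect size (Cohen's w), as reported
    alpha=0.05,       # significance level, as reported
    power=0.85,       # desired power 1 - beta, as reported
    n_bins=2,         # assumed: MRONJ vs. no MRONJ
)
print(f"Minimum sample size under these assumptions: {n_required:.0f}")
```

Because the test family and degrees of freedom are assumed here, the result serves only as a sanity check on the order of magnitude of the enrolment target, not as a replication of the reported minimum of 160 patients.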
Data Acquisition In the study, demographic data, comorbidities, and oncological data were extracted from the clinical records of each patient individually. The demographic data included the medical center, gender, age, residency, and smoking status. Comorbidities were assessed in terms of the presence or absence of bone metastasis, cardiovascular diseases, hypertension, diabetes mellitus, obesity, anemia, and renal diseases. With regard to oncological data, the primary diagnosis of the underlying disease was determined, along with the associated treatment, which may have involved chemotherapy, endocrine therapy, immunotherapy, radiotherapy, or corticotherapy. Furthermore, specific BF-related data, such as the administration of zoledronic acid or other antiresorptive treatments, the type of administration, the duration of BF administration, and the presence or absence of osteonecrosis, were recorded. The data collected for all the subjects with MRONJ included the following: detailed MRONJ location (upper or lower jaw, or both), stage (according to the American Association of Oral and Maxillofacial Surgeons (AAOMS) [3,4]), the trigger factor (periodontal disease, periapical lesion, or extraction), subsequent treatment (bone resection, sequestrectomy, or curettage), and the presence of denudated bone or hypoesthesia. Statistical Analysis The data collected from the patients' medical charts were initially processed using Microsoft Excel 365 (San Francisco, CA, USA). Consequently, this led to a basic distribution of the study group into subgroups. Continuous variables were presented as mean ± standard deviation and were compared using Kendall's tau-b or the Mann-Whitney U test for non-Gaussian distributions. The categorical variables were expressed as numerical values and percentages, and their association was evaluated with either the Chi-square test or the Fisher exact test. All statistical tests were performed using the Statistical Package for the Social Sciences (SPSS), version 20 (IBM Corp., Armonk, NY, USA). The acquired information was incorporated into a binomial logistic regression model to assess the likelihood of developing osteonecrosis in relation to the following variables: gender, age, smoking status, previous or current chemotherapy, endocrine therapy or corticotherapy, and duration of BF treatment. Linearity of the continuous variables with respect to the logit of the dependent variable was assessed via the Box-Tidwell (1962) procedure. A Bonferroni correction was applied using all thirteen terms in the model, resulting in statistical significance being accepted when p < 0.003846. The α threshold was set to 5%, and a value of p < 0.05 was considered statistically significant. Patients' Characteristics The initial study group comprised 178 patients (Figure 1). After the exclusion criteria were applied, 174 patients remained in the study group (Table 1): 109 females and 65 males, aged between 22 and 84 years, with an overall average age of 64.6 ± 10.7 years; the group included mostly elderly patients.
More than half of the patients were from Constanta (101 patients, 58.1%), while 73 patients (41.9%) were from Craiova. Patients were distributed into age groups as follows: 22-54 years (young adults), 55-64 years (mature adults), 65-71 years (young old), and 72-84 years (old old) (Table S1). For our study group, the most frequent comorbidities were cardiovascular diseases in 81 patients (46.5%, specific for elderly participants), hypertension in 64 patients (36.8%), nutritional diseases in 56 patients (32.2%), diabetes mellitus in 18 patients (10.4%), obesity in 16 patients (9.2%), renal diseases in 31 patients (17.8%), and anemia in 18 patients (10.4%). The comorbidities' distribution by age group and gender is presented in Table 2. All 174 patients had previously been diagnosed with neoplasms, of which breast and prostate cancers were predominant in our study group. Overall, 82 females had breast cancer (47.2%), 46 males had prostate cancer (26.4%), while the remaining 46 patients had other neoplasms: pulmonary, myeloma, genital, digestive, renal, cerebral, bladder, spinal cord, pharynx, or thyroid. Bone metastases were diagnosed in 154 patients (88.5%, almost two-thirds being females, mostly encountered at ages above 65 years). The neoplasms' general distribution is presented in Table S2. From the entire study group, more than three-quarters of patients received chemotherapy (alone or in association with molecular-targeted therapy) (86.8%: 76 females with breast neoplasm, 34 males with prostate neoplasm, and 41 patients with other neoplasms) (Appendix A), while 39.1% underwent radiotherapy (29 females with breast neoplasm, 20 males with prostate neoplasm, and 19 patients with other neoplasms), 20.7% received endocrine therapy (12 females with breast cancer and 24 males with prostate cancer), 5.8% received corticosteroids (6 females with breast cancer, 2 males with prostate cancer, and 2 patients with pulmonary neoplasms, 1 female and 1 male), and only 1.2% underwent immunotherapy (2 females with renal and pulmonary neoplasms). The patients' distribution by treatment is presented in Table 3. More than 50% of patients underwent a single therapy type, 36.2% (63 patients) received two types of therapy, and 6.9% (12 patients, 6 females with breast neoplasm and 6 males with prostate neoplasm) received three different types of neoplasm therapy, while 2 patients (1.2%, both females with breast neoplasm) underwent 4 different types of therapy. Among the cohort of patients who received chemotherapy, 133 had bone metastasis and 68 had cardiovascular diseases, of whom 52 had hypertension; 49 had nutritional diseases (17 patients had diabetes mellitus and 16 were obese), 24 had renal diseases, and 17 were anemic. Endocrine therapy was recommended mostly for males with prostate cancer (more than half of them), usually with few other comorbidities, the most common being cardiovascular and renal diseases. Patients with radiotherapy had a similar distribution regarding neoplasm type; however, in Craiova, this type of therapy was mostly recommended for females with breast cancer.
All 10 patients who underwent corticotherapy were from Craiova, mostly females aged less than 64 years, with very few comorbidities other than bone metastasis (5 patients with nutritional diseases, 2 with renal diseases, and 1 female with cardiovascular disease). Immunotherapy was used in only two cases, one in Craiova and one in Constanta, for elderly females, both with cardiovascular diseases and other comorbidities. Bisphosphonate Treatment The entire study cohort received BF therapy with zoledronic acid, administered intravenously every month at a dosage of 4 mg. Regarding the duration of treatment, 29.9% had less than 12 months of BF treatment, 34.5% had between 12 and 24 months of treatment, 19.5% had between 2 and 3 years of treatment, while 16.1% received BF treatment for more than 3 years. The average treatment duration for each age group is presented in Table 4. MRONJ Distribution Analysis Approximately half of all patients (90 patients, 51.7%) developed osteonecrosis of the jaw, with an average zoledronic acid treatment duration of 29.0 ± 12.1 months; the remaining 84 patients (48.3%), who did not develop MRONJ, had an average treatment duration of 19.6 ± 17.9 months. Of the 90 patients with MRONJ, 53 were from Craiova (58.9%) and only 37 were from Constanta (41.1%). A Chi-square test for homogeneity was conducted between the medical center and MRONJ development. There was a statistically significant difference in the proportion of patients with MRONJ from Craiova compared to patients with MRONJ from Constanta, χ²(1) = 21.9, p < 0.0005. A Kendall's tau-b correlation was run to determine the relationship between age and MRONJ presence amongst all 174 participants. There was a weak, positive association between age and osteonecrosis development (mean age 65.5 ± 9.5 years for patients with MRONJ and 63.7 ± 11.9 years for patients without MRONJ), which was not statistically significant, τb = 0.055, p = 0.382. In addition, a Chi-square test of homogeneity was conducted between the two groups with and without MRONJ in relation to the patients' age group, with an adequate sample size established according to Cochran (1954). There were no differences in proportions between the four age groups, χ²(3) = 4.985, p = 0.173. Observed frequencies and percentages of age groups for each group are given in Table 5. There were no statistical differences in developing MRONJ with regard to gender, residency, or smoking status. For this study group, two types of neoplasm therapy were associated with MRONJ: chemotherapy (received by 94.4% of the patients who developed MRONJ) and corticotherapy (all treated patients developed MRONJ). Endocrine therapy appears to be a protective factor. Of the 90 patients with MRONJ, 71 patients (representing 78.9%) had stage 2 MRONJ and 19 patients (21.1%) had stage 3 MRONJ (Table 6). A chi-square goodness-of-fit test was conducted to determine whether equal numbers of patients for both MRONJ stages were represented in the study. The minimum expected frequency was 45. The chi-square goodness-of-fit test indicated that the numbers of patients with stages 2 and 3 were statistically significantly different (χ²(1) = 30.044, p < 0.0005), with more than three-quarters of patients having stage 2 MRONJ.
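The Kendall's tau-b correlations used in this section (above, between age and MRONJ occurrence; below, between age and MRONJ stage) can be sketched with scipy. The arrays below are hypothetical stand-ins, since the patient-level data are not public:

```python
# Illustrative sketch of the Kendall's tau-b association between age and
# MRONJ status; the arrays are hypothetical, not the study data.
from scipy.stats import kendalltau

age   = [58, 72, 66, 49, 63, 70, 55, 61, 77, 68]  # hypothetical ages (years)
mronj = [0, 1, 1, 0, 0, 1, 0, 1, 1, 0]            # 1 = developed MRONJ

# scipy's kendalltau computes the tau-b variant by default, which accounts
# for the ties that necessarily arise with a binary second variable.
tau_b, p_value = kendalltau(age, mronj)
print(f"tau-b = {tau_b:.3f}, p = {p_value:.3f}")
```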
A Kendall's tau-b correlation was also run to determine the relationship between age and MRONJ stage amongst the 90 MRONJ patients. There was a very weak, negative association between age and stage (mean age 65.9 ± 8.8 years for patients with stage 2 and 64.4 ± 12.0 years for patients with stage 3), which was not statistically significant, τb = −0.025, p = 0.778. Similarly, there were no statistical differences in MRONJ stage with regard to residency, gender, smoking status, or the medical center. Bisphosphonate treatment duration was recorded in months, and the following categories were identified: 1-12 months (52 patients, 29.9% of the entire study group), 13-24 months (60 patients, 34.5%), 25-36 months (34 patients, 19.5%), and >36 months (28 patients, 16.1%) (Table 7). The cumulative incidence of MRONJ patients according to the duration of BF treatment is presented in Figure 2. A Mann-Whitney U test was run to determine if there were differences in treatment duration between the MRONJ and non-MRONJ groups. The distributions of duration were not similar, as assessed by visual inspection. The median treatment duration was statistically significantly higher for the MRONJ group (24 months) than for the non-MRONJ group (12 months), U = 5638, z = 5.620, p < 0.0005. Treatment duration was not related to the MRONJ stage (U = 704, z = 0.300, p = 0.764).
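A minimal sketch of this Mann-Whitney U comparison in scipy, using hypothetical treatment durations (the individual durations are not published), might look as follows:

```python
# Illustrative Mann-Whitney U comparison of zoledronic acid treatment
# duration (months) between groups; the values are hypothetical.
from scipy.stats import mannwhitneyu

duration_mronj    = [24, 30, 18, 36, 27, 22, 40, 29]  # months, hypothetical
duration_no_mronj = [10, 14, 8, 20, 12, 9, 16, 11]    # months, hypothetical

# Two-sided test, as in the paper; with dissimilar distribution shapes the
# result is read as a difference in distributions, not of medians alone.
u_stat, p_value = mannwhitneyu(duration_mronj, duration_no_mronj,
                               alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.4f}")
```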
The following trigger factors were identified in MRONJ development: tooth extraction (51 patients, 56.7% of all MRONJ patients), periapical disease (26 patients, 28.9%), and periodontal disease (13 patients, 14.4%). A chi-square goodness-of-fit test was conducted to determine whether equal numbers of patients for the three trigger factors were represented in the study. The minimum expected frequency was 30. The chi-square goodness-of-fit test indicated that the numbers of patients with the various trigger factors were statistically significantly different (χ²(2) = 24.867, p < 0.0005), with more than half of patients having extraction as a trigger factor. For patients from the first age group, the main trigger factor was periodontal disease (50% of this group), compared to the other age groups, where it was the least representative factor. Periapical disease was predominant in the 55-64 and 72-84 year groups, while extraction was the main factor for patients aged above 54 years. Overall, the differences between age groups regarding the trigger factor were statistically significant, χ²(6) = 15.752, p = 0.015. Of the 90 patients with MRONJ, 58 patients (representing 64.4%) developed MRONJ in the lower jaw, compared to only 31.1% (28 patients) in the upper jaw, while 4 patients (4.4%) presented osteonecrosis in both jaws. A chi-square goodness-of-fit test indicated that the numbers of patients for each location were statistically significantly different (χ²(2) = 48.800, p < 0.005), with approximately two-thirds of the patients presenting mandibular osteonecrosis. Age group analysis revealed significant differences between patients aged below 55 years, with predominantly upper jaw MRONJ (60% of the entire group), and the other three age groups, with predominantly lower jaw MRONJ (73.1% for the 55-64 group, 51.9% for the 65-71 group, and 81.5% for the 72-84 group), χ²(6) = 13.348, p = 0.038. Surgical treatment for patients with osteonecrosis of the jaw consisted of sequestrectomy (64 patients, 71.1% of the MRONJ group), resection (21 patients, 23.3%), or curettage (7 patients, 7.8%). Among this study group, sequestrectomy was the surgical procedure of choice for more than two-thirds of patients; for statistical purposes, these patients were grouped with the patients who underwent curettage, with the difference between the numbers of patients with and without these procedures being statistically significant (χ²(1) = 4.003, p = 0.045). Resection was performed for less than a quarter of patients, reflecting statistically significant differences between the numbers of patients with and without resection (χ²(1) = 25.600, p < 0.0005). The majority of MRONJ patients underwent one surgical procedure (76 patients, 84.4%), eight patients suffered a relapse and underwent a second surgical intervention (8.9%), while six patients (6.7%) were not surgically treated. With a minimum expected frequency of 30 for each category, a chi-square goodness-of-fit test reflected statistically significant differences between the numbers of patients with one or more surgical procedures (χ²(2) = 77.400, p < 0.0005), with the large majority of patients undergoing a single surgical procedure. Several common comorbidities were analyzed for this study group. Following a Chi-square test, only cardiovascular diseases (mostly hypertension) may be considered a risk factor for developing MRONJ (Table 6), with χ²(1) = 9.443 and p = 0.002. The analysis of the MRONJ stage in relation to comorbidities revealed similar results.
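The chi-square test of association between hypertension and MRONJ can be sketched as below. The 2x2 cell counts are illustrative: they are chosen to match the reported margins (64 hypertensive patients, 90 MRONJ cases, 174 patients in total), since the paper does not publish the exact cross-tabulation.

```python
# Sketch of the chi-square test of association between hypertension and
# MRONJ occurrence; cell counts are illustrative, not the study data.
import numpy as np
from scipy.stats import chi2_contingency

#                   MRONJ  no MRONJ
table = np.array([[43, 21],    # hypertension (n = 64)
                  [47, 63]])   # no hypertension (n = 110)

# correction=False gives the uncorrected Pearson chi-square on 1 df.
chi2, p_value, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.3f}, p = {p_value:.4f}")
```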
Binomial Logistic Regression A binomial logistic regression was performed to ascertain the effects of gender, age, smoking status, treatment duration with zoledronic acid, chemotherapy, radiotherapy, endocrine therapy, presence of DM, obesity, and hypertension on the likelihood that patients develop osteonecrosis of the jaw. All continuous independent variables were found to be linearly related to the logit of the dependent variable (Box-Tidwell procedure). There were three standardized residuals with values of around three standard deviations, which were kept in the analysis. The logistic regression model was statistically significant, χ²(10) = 62.406, p < 0.0005. The model explained 40.2% of the variance in osteonecrosis development and correctly classified 76.4% of cases. Sensitivity was 81.1%, specificity was 71.4%, the positive predictive value was 75.26%, and the negative predictive value was 77.92%. The area under the ROC curve was 0.824 (95% CI, 0.761 to 0.886), which is an excellent level of discrimination. Of the ten predictor variables, only five were statistically significant: duration of treatment, chemotherapy, endocrine therapy, obesity, and hypertension (Table 8). Cancer patients undergoing chemotherapy had 7.53 times higher odds of developing osteonecrosis of the jaw, while endocrine therapy was associated with a reduced frequency of MRONJ. Patients with hypertension had 3.79 times higher odds of developing osteonecrosis of the jaw than patients with normal blood pressure values, while obesity was associated with a reduction in the likelihood of developing osteonecrosis. A longer zoledronic acid treatment duration was associated with an increased risk for MRONJ.
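A schematic of this analysis in Python's statsmodels (the study itself used SPSS) might look as follows; the data frame `df` and its column names are hypothetical, with one row per patient and `mronj` coded 0/1:

```python
# Schematic of the reported ten-predictor binomial logistic regression,
# written with statsmodels. The data frame `df` and its column names are
# hypothetical stand-ins for the (non-public) patient-level data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_mronj_model(df: pd.DataFrame):
    """Fit the logistic model and tabulate odds ratios with 95% CIs."""
    model = smf.logit(
        "mronj ~ gender + age + smoker + duration_months + chemo"
        " + radio + endocrine + diabetes + obesity + hypertension",
        data=df,
    ).fit(disp=False)

    ci = model.conf_int()  # columns 0 and 1 hold CI bounds on the logit scale
    odds_ratios = pd.DataFrame({
        "OR": np.exp(model.params),  # exp(coefficient) = odds ratio
        "CI low": np.exp(ci[0]),
        "CI high": np.exp(ci[1]),
        "p": model.pvalues,
    })
    return model, odds_ratios
```

In this framing, an odds ratio of 7.53 for `chemo` would correspond to the chemotherapy effect reported in Table 8, while odds ratios below 1 (as reported for endocrine therapy and obesity) indicate protective factors.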
Discussion Since MRONJ is a complication with a major negative impact on the quality of life of cancer patients [7,37-39], an evaluation of risk factors for each patient is necessary before initiating treatment with bisphosphonates or other drugs to control bone metastases. In order to enable customized therapy for both bone metastases and the oral diseases that may cause MRONJ lesions, it is advisable to establish an individual risk profile for the patient [27]. Several studies have made correlations between systemic and local risk factors and MRONJ [27,28,32], and among them, the study by Marciano et al. [27] showed that by knowing the patient's risk profile, risk stratification can be achieved and plans can be made for performing elective dental procedures in safe conditions, thus preventing MRONJ. Like the study by Marciano et al. conducted in an Italian population [27], the current study presents a descriptive statistical analysis of part of a Romanian cancer population from two geographical regions treated with zoledronic acid, as well as the results of the binomial logistic regression analysis through which several risk factors were associated with the occurrence of MRONJ. As some researchers have pointed out [32], the geographical origin of the studied population is important since genetic and environmental factors could differ [40-42]. Our study groups belong to two different geographical regions of the country, situated 450 kilometers apart, one with a seaside opening (Dobrogea, with its main city Constanta, South-East Romania) and the other in the plain area close to Bulgaria (Oltenia, with its main city Craiova, South-West Romania). The incidence of MRONJ in zoledronic acid-treated cancer patients varies between 0.2% and 9.9%, reaching 15-20% in some case studies [27,38,43]. The present study aimed to explore the risk factors associated with MRONJ by studying two groups of patients, almost equal in number, treated with zoledronic acid: the study group, who developed MRONJ, and the control group, without MRONJ. The present study described the association of MRONJ with various risk factors: demographic factors, cancer type and cancer treatment, comorbidities, and duration of bisphosphonate treatment. Associations with risk factors were also analyzed for the MRONJ characteristics: stage, trigger factor, lesion location, denudated bone presence, hypoesthesia presence, and surgical treatment type. The results of the binomial logistic regression performed on ten predictor variables (gender, age, smoking status, treatment duration, chemotherapy, radiotherapy, endocrine therapy, diabetes mellitus, obesity, and hypertension) pertaining to the likelihood of MRONJ occurrence showed that only five were statistically significant, namely, chemotherapy, hypertension, duration of treatment, endocrine therapy, and obesity. Romania is among the top 10 European countries with the highest mortality rates due to cancer [15,44]. The studied groups included patients residing in two geographical areas of the country with a high prevalence of cancer. The highest prevalence of cancer in Romania in 2019 was recorded in the S-W Oltenia region (of which Dolj county is a part) (3283.64/10,000), followed by the prevalence in the S-E region (Dobrogea) (3033.3/10,000) [15,45-47]. Although more than half of the patients were from the S-E Dobrogea region (101 patients, 58.0%), compared to 73 patients (42.0%) from the S-W Oltenia region, out of the 90 patients with MRONJ, the majority (53, 58.9%) were from S-W Oltenia and only 37 were from S-E Dobrogea (41.1%), p < 0.05. This can be correlated with the regional cancer prevalence figures above, with most new cancer cases in 2019 being reported in the S-W Oltenia region (367.8/10,000) [45-47]. In a recent study, a weak association was identified between geographic location and the development of MRONJ in patients with cancer [32]. The majority of the patients in the studied groups were women (over 62%), over 55 years old (over 82%), mostly from the region of S-E Dobrogea, Constanta (over 58%), and mostly from the urban environment (over 71%). Most of the studied population (over 73%) had been diagnosed with breast or prostate cancer. Another study, published in 2020, regarding the incidence of MRONJ in the period 2009-2018 in patients from Craiova (S-W Oltenia) treated with bisphosphonates showed that most patients with MRONJ were women; the age of the patients was higher in men compared to women, and the origin was urban for most patients [48]. In a study conducted by Ishimaru et al. in 2021, middle-advanced age (65 to 74 years) was correlated with the occurrence of MRONJ in cancer patients, especially in men [32]. Rodriguez-Archilla et al., in 2019, performed a review (meta-analysis) of studies from the PubMed database regarding predictive risk factors for the occurrence of MRONJ.
In this meta-analysis, an examination of twenty-five studies regarding predictive risk factors of MRONJ identified certain factors, such as advanced age and female sex, as posing increased risk [49]. Most of the patients in our study group were from an urban environment, which is consistent with a previous study [48]. Statistical data indicated that the incidence of cancer was significantly higher in the urban environment compared to the rural environment [15,45-47]. Access to the oncologist, expressed as hospitalized morbidity (patients who were hospitalized for cancer treatment), was lower in the rural areas than in the urban areas, being the lowest in the S-E and S-W Oltenia regions [45-47,50]. Most patients from the urban environment came from the region of S-E Dobrogea, Constanta (over 58%). Although almost three-quarters of the patients in the study group (125 patients, representing 71.8%) lived in urban areas (with a male/female ratio of approximately one-third) and 49 patients (28.2%) lived in rural areas (with a similar gender distribution), residency did not have a significant risk association with MRONJ. One identified risk factor that has been linked to a higher incidence of MRONJ is the type of cancer, namely, breast and prostate cancers [21,22]. Breast and prostate cancers are among the cancers with high mortality in Romania. Breast cancer has a mortality of 15.5/100,000 inhabitants and prostate cancer of 10.6/100,000 inhabitants [43]. In the current study, the majority (over 73%) of the patients in the group were diagnosed with breast or prostate cancer, with more women than men in the group. Breast cancer is the seventh leading cause of death in Romania, while prostate cancer ranks 10th among the leading causes of death in Romania, as shown by the World Health Rankings [51,52]. Breast cancer is characteristic of middle-aged women, while prostate cancer is characteristic of elderly men, its incidence gradually increasing until the last age group, where it is at its maximum [45-47]. In our study, 54.9% of breast cancers occurred in women between 55 and 71 years of age, and 76.1% of prostate cancers occurred in men between 65 and 84 years, with a much higher frequency after 72 years (45.6%). Another pilot study, conducted retrospectively from 2012 to 2017, revealed a substantial incidence of medication-related osteonecrosis of the jaw (MRONJ) in patients receiving intravenous (IV) zoledronic acid as a treatment for bone metastases associated with breast or prostate cancer [50]. In a review regarding MRONJ, Anastasilakis et al. [8] showed that the risk for MRONJ was much higher in patients with advanced malignancies compared to those with benign bone diseases, due to the higher doses and more frequent administration of antiresorptive agents in people with compromised general health, together with the concomitant administration of other drugs that predispose to MRONJ. In the study by Hata et al. [53], the cumulative incidence of MRONJ in breast cancer, prostate cancer, and multiple myeloma was found to be related to the frequency of antiresorptive drug administration and the duration of treatment. Patients with cancers with median survival times of less than 10 months did not develop MRONJ. In renal cancer, the cumulative incidence of MRONJ increased early, with a median survival time of 12 months.
Overall, these studies emphasize the importance of dental prophylaxis and the maintenance of good oral hygiene for the prevention of MRONJ in patients treated with bisphosphonates and denosumab, and highlight important factors in the risk of development and recurrence of this condition [8,54]. A study that followed the cumulative incidence of MRONJ after 3 years in patients with bone metastases treated with zoledronic acid showed that the type of cancer, oral health, and frequency of antiresorptive use were associated with the risk of MRONJ [21]. Moreover, patients undergoing treatment with denosumab or zoledronic acid for bone metastases from breast cancer, multiple myeloma, or prostate cancer have a higher risk of developing MRONJ than those with lung cancer [27]. Several studies have suggested that comorbidities may constitute potential risk factors for the development of MRONJ [32,34]. Comorbidities affect patients older than 50 years, and most of the studied patients had at least one comorbidity. Cardiovascular disease was found especially in the two elderly patient groups, affecting almost a third of them, and hypertension had a similar distribution. The binomial regression analysis performed in the present study revealed that patients with hypertension had 3.79 times higher odds of developing medication-related osteonecrosis of the jaw than patients with normal blood pressure values, while obesity was associated with a reduced risk of developing osteonecrosis. The first two causes of death in Romania are coronary heart disease and stroke, a complication of hypertension [51]. Occupying fifth place among European countries in terms of high cardiovascular risk, according to ESC statistics, Romania has hypertension as the main risk factor (39.1%), along with hypercholesterolemia (39.1%), followed by smoking (26.7%) and obesity (21.3%) [16,54]. In Romania, campaigns have been carried out with the aim of involving the authorities and the media in increasing awareness regarding diabetes prevention and control [55]. In the present study, nutritional diseases (diabetes, obesity) were distributed almost equally across the last three age groups. Consequently, the patients were affected by these diseases starting at the age of 55. In Romania, the prevalence of diabetes is estimated at 11.6% in the population aged between 20 and 79 years, with newly reported cases representing 20.7% [55]. Patients with type 2 diabetes mellitus (T2DM) are at higher risk of cardiovascular disease, and age strongly predicts cardiovascular complications [56]. In the study conducted by Ishimaru et al., dementia and renal diseases related to renal cancer, which was the most common form of cancer associated with MRONJ in the patients studied, were identified as significant comorbidities [32]. In the current study, kidney diseases were found in the last two age groups of patients, affecting only 31 of the 174 patients. According to the latest statistical reports, the prevalence of kidney disease correlates with the prevalence of hypertension [57]. Anemia was found in all four groups of patients (this is the anemia that the patient had at the time of the first visit, because it is known that chemotherapy produces anemia) [58]. Several studies have identified chemotherapy as an important risk factor for MRONJ [24,26-31,59]. Chemotherapy is the treatment of choice for neoplasms in Romania, and of the total number of cancer patients studied, 86.78% received this treatment.
From the results of the binomial regression analysis in the present study, patients with chemotherapy had 7.53 times higher odds of developing MRONJ, while endocrine therapy was associated with a reduced frequency of MRONJ. In the study by Kawahara et al., chemotherapy was associated with MRONJ at a rate of 39.7%, followed by corticotherapy (24.6%) [59]. Among the other cancer therapies, radiotherapy was used more frequently, while endocrine therapy was used less often, as it is reserved especially for less serious cases of breast and prostate cancer. Immunotherapy was very rarely encountered [16,51,52,60-62]. In the binomial regression analysis from the present study, a longer zoledronic acid treatment duration was associated with an increased risk for MRONJ. The average duration of treatment with bisphosphonates was longer in Craiova than in Constanta, longer in patients with prostate cancer, followed by breast cancer, and then by the other types of cancer. MRONJ appeared in a significantly higher percentage of the patients from Craiova, who had a longer duration of treatment with BF (2 months longer on average). The duration of treatment with bisphosphonates is presented as the main risk factor for MRONJ in several studies [27,39,61,62]. In the present study, the median treatment duration was statistically significantly higher for the MRONJ group (24 months) than for the non-MRONJ group (12 months), p < 0.05. A duration of more than 12 months (an average of 18 months, i.e., a year and a half) doubles the risk of MRONJ development, while a duration of more than 24 months (an average of 32 months, i.e., over two and a half years) triples it. The relationship between the risk of MRONJ development and the duration of treatment with antiresorptive agents in cancer patients was also analyzed in other studies [38,63,64], which reported a median duration of treatment of 17.5 months [63] or 19 months [64]. Treatment duration was not related to the MRONJ stage. In other studies, the average duration of treatment with BF was between 12 and 55 months [61]. According to several studies, a duration of treatment with bisphosphonates of more than 10 doses was a significant risk factor for the occurrence of MRONJ in patients treated with zoledronic acid [27,39,62]. In the current study, the average duration of treatment with bisphosphonates was the highest in the 55-64 age group, followed by the older age groups (almost equal values, approximately 2 years of treatment), with the young age group having the shortest treatment period, under one and a half years. The duration of treatment with zoledronic acid is influenced by the bone metastases, the evolution of the underlying disease, and the duration of survival/healing of the patients [27,39,62]. Several studies have shown that patients treated with zoledronic acid or denosumab for more than 18 months have an increased risk of MRONJ recurrence, and that the number of doses of zoledronic acid or denosumab administered, exposure to new chemotherapeutic compounds, and the type of cancer are important factors in the risk of developing MRONJ [27,38,62]. According to Kemp et al., among the risk factors for medication-related osteonecrosis of the jaw in oncological patients treated with zoledronic acid are the monthly administration rate, the lack of dental control, surgical procedures (especially tooth extraction), and smoking.
In Kemp's study, the localization of MRONJ in the upper jaw predominated, unlike in our study, where most patients had MRONJ in the lower jaw [39]. Studies have also been carried out regarding the association of risk factors with the characteristics of MRONJ, such as the stage of MRONJ, the trigger factors, the location, the presence of exposed bone, the presence of hypoesthesia, and the type of surgical treatment. In the present study, the most important trigger factor for MRONJ was tooth extraction, followed by periapical disease and periodontal disease. More than half of the patients had extraction as a trigger factor (p < 0.0005). Age groups were correlated with the trigger factor, and the differences between age groups regarding the trigger factor were statistically significant (p = 0.015): extraction was the main trigger factor for patients aged above 54 years, periapical disease was predominant in the 55-64 and 72-84 year groups, and periodontal disease was the main trigger factor for patients from the first age group. Other studies have also reported dental extraction as a trigger factor of MRONJ, in a proportion of 61.7-75% [39,59]. The incidence in patients diagnosed with MRONJ after dental extractions was 2.28 per 100,000 people/year [32]. According to the study conducted by Wick et al., local inflammation is considered the main trigger of MRONJ, while the suggestions of Aguirre et al. indicate that oral risk factors lead to osteocyte necrosis in MRONJ through TNFα/TNFR1 signaling and enhance the inflammatory response [65,66]. Another risk factor for MRONJ was the localization of the lesion in the lower jaw. Thus, in our study, 64.4% of the patients presented MRONJ in the mandible, compared to only 31.1% in the maxilla. A small number (4.4%) of patients had osteonecrosis in both jaws. Similar rates regarding the localization of MRONJ were also recorded in the study by Kawahara et al. in 2021 [59]. Some studies have shown even greater differences between the two jaws in terms of MRONJ localization, with the lower jaw being affected in over 71% of patients compared to less than 22.5% for the upper jaw [61,67]. An explanation for the recorded differences could be the lesser vascularity and thinner mucosa of the mandible compared to the maxilla [68]. The most common location of the MRONJ area in the mandible would be the mandibular ramus, followed by the mandibular body and mandibular symphysis [69-71]. However, there are studies that have shown the upper jaw to be more affected by osteonecrosis than the mandible, although the number of reported patients was much lower [39]. In our study, the most frequent MRONJ localization was in the posterior area, both in the maxilla and in the mandible, without any statistical significance. The same was observed in the study by Feng et al. [61]. A correlation between the location of osteonecrosis and the age of the patients was noticed. Mandibular osteonecrosis was encountered in patients over 55 years old, while maxillary osteonecrosis was predominant (60%) in patients aged below 55 years. One of the clinical characteristics of MRONJ with a significant statistical value was exposed bone, present in a proportion of 96.67%. Hypoesthesia was present in only 17.78% of MRONJ patients. Other studies showed that focal and diffuse bone sclerosis and the occurrence of bone sequestrations could be observed more frequently in patients with exposed bone, compared to patients without exposed bone [33].
Most of the patients analyzed in this study (78.90%) had MRONJ stage 2, and 21.1% were diagnosed with MRONJ stage 3. The MRONJ stage determines the choice of treatment method. Although in 2014 the AAOMS considered that surgical treatment is not recommended for patients with stage 1 and 2 MRONJ and should be limited to patients with stage 3 lesions or stage 2 lesions unresponsive to non-surgical treatment, other studies suggest that the surgical removal of necrotic bone may be an effective treatment for all stages of MRONJ [13,72]. Another study suggests that only patients in stages 2 and 3 of MRONJ should be admitted for surgical treatment [61]. A surgical approach may be considered for any exposed necrotic bone when conservative treatment has failed. A conservative surgical approach can be achieved through sequestrectomy associated with the administration of antibiotics and rinsing with antiseptic solutions [6,71]. The present study showed that sequestrectomy was the surgical procedure used in more than two-thirds of patients, with the differences between the numbers of patients with and without this procedure being statistically significant. Conservative surgery can be combined with other treatments such as ozone therapy or the local application of PRF (platelet-rich fibrin). Studies have shown that ozone therapy has a lower percentage of positive results compared with local PRF applications [67]. Although the treatment of patients with an established diagnosis of drug-induced osteonecrosis of the jaw (MRONJ) should be approached with a pragmatic multidisciplinary treatment plan, prioritizing the patient's quality of life and the management of their skeletal disease, sometimes a complete resection of the necrotic bone (i.e., an extensive surgical intervention) is necessary to achieve complete healing [13,72]. More recent studies have shown that stages 2 and 3 of MRONJ can be treated both surgically, with or without adjuvant therapies, and conservatively [67,69,73-76]. Conservative treatment is the main treatment method for MRONJ, and although it does not always completely heal the lesion, it can provide long-term relief of symptoms [73]. Stages 1 and 2 of MRONJ can be completely cured by surgical treatment, while stage 3 is partially cured, with a regression of the MRONJ stage according to the AAOMS [74]. In the present study, resection was performed for less than a quarter of patients, reflecting statistically significant differences between the numbers of patients with and without resection (p < 0.05). More than three-quarters of patients underwent only one surgical procedure, while 8.9% of patients underwent two different surgical procedures, and 6.70% of patients were not surgically treated (p < 0.05). Although in stage 3 MRONJ the treatment of choice was most often surgical, the general rate of complications was high, with relapses being recorded [71]. Surgical treatment of MRONJ can be an effective method, recording an increased healing rate according to the study by Okuyama et al. [75]. The strengths of this study are reflected by a sufficient follow-up time (of 4 years) for all 174 patients residing in the two geographical regions, allowing the analysis of possible risk factors regarding demographic factors, cancer type and cancer treatment, comorbidities, and duration of bisphosphonate treatment, and also of the MRONJ stage, trigger factor, overall lesion location, denudated bone presence, hypoesthesia presence, and surgical treatment type.
The binomial regression analysis performed for ten predictor variables indicated five statistically significant factors: chemotherapy, hypertension, duration of treatment, endocrine therapy, and obesity. Chemotherapy increased the odds of cancer patients developing osteonecrosis of the jaw 7.53-fold, while endocrine therapy was associated with a reduction in the likelihood of developing osteonecrosis. Hypertension increased the odds of developing MRONJ in the cancer patients 3.79-fold, while obesity was associated with a reduction in the likelihood of developing osteonecrosis. Increasing treatment duration was associated with an increased likelihood of developing MRONJ (p < 0.005). The limitations concern the lack of data regarding the oral health of the subjects before BF treatment initiation and the lack of regular oral monitoring during the treatment. Conclusions The risk factors identified for MRONJ correlated with zoledronic acid were chemotherapy, hypertension, and duration of treatment. Cancer patients with chemotherapy or hypertension had higher odds of developing MRONJ, while endocrine therapy and obesity were associated with a reduced frequency of MRONJ. An increased treatment duration with zoledronic acid was associated with an increased risk of developing MRONJ. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm12113747/s1, Table S1: Distribution of patients by medical center and age groups. Table S2: Distribution of patients by neoplasm, age groups and gender. Data Availability Statement: The authors declare that the data of this research are available from the corresponding authors upon reasonable request. Conflicts of Interest: The authors declare no conflict of interest.
Electronic medical records increasingly take thinking away from spine surgery This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-Share Alike 4.0 License, which allows others to remix, transform, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms. ©2022 Published by Scientific Scholar on behalf of Surgical Neurology International. Editorial I have been practicing Medicine for over 40 years in both private and academic settings, and have published over 400 peer-reviewed articles/chapters. I started when computers in medicine were limited; doctors "talked with each other" about cases, and spent time talking with and examining patients. Most physicians were in the private practice of medicine, which I had known from my father, Joseph A. Epstein MD, a neurosurgeon, and my uncle Bernard S. Epstein, M.D. (one of the first neuroradiologists and author of several textbooks). What I now see evolving is what I am writing about in this editorial. I would, along with Surgical Neurology International, be interested in comments from other spine surgeons about this editorial. The electronic medical record (EMR) is increasingly taking the thinking out of performing spine surgery, thus putting patients at increased risk. EMRs, by automatically populating subsequent notes, allow mistakes made by spine residents and/or attending surgeons to permeate the chart, potentially leading to wrong level, wrong side, and wrong site surgery. Further, few spine residents/attending surgeons have integrated the culture of "talking" to colleagues; this shrinks rather than expands differential diagnoses, more often leading to missed diagnoses. Finally, when operative notes become increasingly "templated," what actually happened at surgery (i.e., especially errors) is no longer accurately reported. MR/CT REPORTS IN THE EMR "AUTOMATICALLY POPULATE" NEXT NOTES LEADING TO MISTAKES Typically, the electronic medical record (EMR) automatically populates subsequent notes in spine and other patients' charts. This does not, however, mean that their content is necessarily read and/or evaluated/reassessed by the next resident or spine surgeon to come along. Rather, this often leads to events/findings not being reported if there is no free text option, or if the content simply does not fit into any predetermined pull-down menu. This increasingly leads spine surgery residents to summarize findings or events while often leaving out critically important details. These errors then populate the EMR and subsequent computer-generated notes, thus removing the impetus to think independently and/or actually go back and read and/or reinterpret radiologists'/neuroradiologists' reports. For example, an initial MR/CT report that cites the wrong level of a disc herniation may permeate the entire EMR, and result in a wrong-level/wrong-side/wrong-site surgery. Further shortcomings of the EMR automatically populating the record may include the failure to consider or reconsider other critical differential diagnoses, and thus leave patients with fixed neurological injuries that could have been avoided had the correct diagnosis been established, and had a potentially necessary operation been performed in a timely fashion.
Such instances classically include patients with epidural spinal abscesses, where initial emergency room evaluations fail to consider this amongst the differential diagnoses for back pain, thus leading to the failure to order appropriate MR/CT studies and a lack of timely surgical intervention. SPINE SURGEONS RARELY SPEAK WITH RADIOLOGY/NEURORADIOLOGY ABOUT MR/CT FINDINGS Before performing spine operations, spine surgeons used to speak with radiology/neuroradiology in addition to reading the MR/CT reports and reviewing the films themselves. Such talking between professionals led to ordering more appropriate preoperative studies and considering additional differential diagnoses, along with consideration of different treatment options. The simple phone call or direct person-to-person encounter better defined the pathology, the significance of disease, and the need for surgery, along with pinpointing the correct level, side, and site of disease. Now, the electronic medical record (EMR), with increasing time constraints allotted for each patient evaluation, has largely eliminated "thinking," and has created a generation of spine surgeons focused on "regurgitating" prior radiographic reports and summarizing "surgical diagnoses." Why is this happening? Is it due to a lack of interpersonal relationships, particularly for a younger computer-raised generation of physicians who did not learn those skills growing up? Or is it time or money limitations? This you don't even need to ask; I certainly learned this quickly in changing from working in a private neurosurgical practice to working full-time for a healthcare system that rigidly imposed greater time constraints. IT'S NOT MY FAULT, IT WAS WHAT WAS IN THE EMR/CHART Too often, attending spine surgeons, particularly in academic centers, blame the residents for not "knowing the patient" and/or "performing the wrong operation." Nevertheless, attending spine surgeons are still the "captains of the ship," and, as such, are primarily responsible for the patients regarding any surgery-related decisions and/or errors. This is most prominent where MR/CT reports contain mistakes; how often have you seen that the official reports cite a specific level, site, or side in the text for a disc herniation, but the final summary cites a different location? This is precisely why the films are supposed to be available in the operating room according to the Joint Commission protocols (i.e., time outs). Further, now that you have the hospital PACS (picture archiving and communication system) in the operating room, spine surgeons are much more limited as to how many different studies they may simultaneously view. So you get to the operating room table, and time outs are performed. But who did their homework? The resident? The attending? No one? And therein lies the problem. Despite the EMR and PACS system, the attending spine surgeon must still have "ownership" of the patient's individual case, and be responsible for performing the right operation on the right patient at the right level for the right indications. ALTHOUGH INITIAL MR/CT RADIOLOGY READINGS MAY BE CORRECT, THE EMR MAY POPULATE WRONG INTERPRETATIONS BY RESIDENTS AND/OR ATTENDING SPINE SURGEONS Spine surgeons must carefully select and care for patients to ensure they receive optimal treatment. The initial patient evaluation is critical for discerning whether the patient does or does not require surgery, and/or whether there is a medical or neurological problem.
Certainly, performing a complete history and neurological exam is important, as many spine surgeons have seen patients misdiagnosed with spine disease who in fact have neurological disorders (i.e., multiple sclerosis or amyotrophic lateral sclerosis, etc.). Patients with potential surgical disease are often sent initially for MR (i.e., CT studies are reserved for those with pacemakers, etc.). Once that study is read, in addition to the treating surgeon reviewing the study, how many then routinely speak with the radiologist? The major benefit at this point is that the surgeon has the clinical information that can change/alter the radiologist's interpretation of the studies. In short, putting two heads together adds a layer of combined expertise and protection; it allows two clinicians from different specialties to first expand and then contract/focus the differential until they arrive at the correct diagnosis. Failure to maintain this avenue of direct communication may result in missed diagnoses, and even the wrong surgical procedure.

SPINE SURGEONS' TEMPLATED OPERATIVE EMR NOTES MAY INACCURATELY REFLECT WHAT WAS DONE DURING SURGERY

When attending spine surgeons dictate templated operative notes, their reports may fail to accurately indicate what was actually done. Rather, you have to read between the lines and look at the postoperative sequelae to discern whether what was described was actually performed and/or whether an "unreported" mistake was made. One example of this took place years ago when I saw a patient who came in for a second opinion after having a lumbar diskectomy performed 1 year previously. That patient's MR showed no significant peridural scarring at the operative level. During the second operation, the incision was found to be just skin deep; nothing else had been done. Nevertheless, the patient's operative report from the prior surgery went on in great detail for four pages. Many other examples are now found in medicolegal cases, where the described events either never happened, or "mistakes/errors" were totally omitted.

SUMMARY

The art of thinking by spine surgery residents and attendings is increasingly being threatened by the electronic medical record (EMR). With the EMR automatically populating future notes, initial MR/CT mistakes may not get corrected even prior to surgery. Further, the failure to talk to radiology/neuroradiology colleagues, particularly about complex cases, and the failure to explore potential additional correct differential diagnoses have likely resulted in more misdiagnosed cases and wrong operations. Finally, relying on largely templated operative notes to figure out what was actually done at surgery has proven to be increasingly misleading.

Declaration of patient consent

Patient's consent not required as there are no patients in this study.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

Commentary

Nancy, you describe WHAT is happening in neurosurgery, but not WHY it is happening. Is it the fault of EMRs? Or is there a deeper reason? It is the mindless adoption of technology. Technology has good values: it allows us to transmit information to others easily, search large databases of information about our patients, decrease repetitive tasks, and make us more time efficient. However, technology's downsides are that it is not perfect, is in fact flawed, and is not available at all sites. Further, many of these systems deliberately do not communicate with each other (i.e.,
because companies have their own systems to make money and keep them unique rather than being commonly accessible and user friendly). The technology takes doctors' personal time away from patients, and the result is that the EMR contains much useless information, which is expanded and not condensed (i.e., as was done in the past by successive physician input). Insurance systems do not reward thinking and experience. A doctor's compensation is based on his/her detailed record keeping. And now doctors have scribes who enter this information into the computer system, adding more chances for error. Further, technology is not about people; it is about people interacting with a screen, and not a real, live patient. It has depersonalized our civilization. Are we becoming robots? Is this yet another example of that?

Comments from James I Ausman, M.D., Emeritus Editor-In-Chief, Surgical Neurology International

How to cite this article: Epstein NE. Electronic medical records increasingly take thinking away from spine surgery. Surg Neurol Int 2022;13:97.
2022-03-21T15:16:26.894Z
2022-03-18T00:00:00.000
{ "year": 2022, "sha1": "806ba5c9938413556ed3090bcf0b1d4bb7ec89f3", "oa_license": "CCBYNCSA", "oa_url": "https://surgicalneurologyint.com/wp-content/uploads/2022/03/11450/SNI-13-97.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "9e90842087a99d34c7b983e7155c64e491488bf1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
202545068
pes2o/s2orc
v3-fos-license
Local-field Theory of the BCS-BEC Crossover

We develop a self-consistent theory unifying the description of a quantum Fermi gas in the presence of a Fano-Feshbach resonance in the whole phase diagram, ranging from BCS to BEC type of superfluidity and from narrow to broad resonances, including the fluctuations beyond mean field. Our theory covers a part of the phase diagram which is not easily accessible by Quantum Monte Carlo simulations and is becoming interesting for a new class of experiments in cold atoms.

Quantum gases keep building up considerable interest as combined experimental-theoretical platforms where the borders between condensed matter, fundamental physics and cosmology can be crossed, with mutual fertilization under the extremely controlled experimental settings and microscopic modeling of atomic physics [1]. Bright examples include a new class of precision measurements [2], Hamiltonian coding inspired by Feynman's idea of quantum simulators [3-5] for real-time dynamics [4,6], and the quantum phases of Bose/Fermi-Hubbard models relevant to condensed matter [7-9]. The BCS-BEC crossover, developed by Leggett [12] and by Nozières and Schmitt-Rink [13], was shown by Uemura et al. [14] to be relevant to high-temperature superconductivity (HTSC) in a celebrated universal plot, explained in terms of the correlation length [15], and has become a timely concept for the quantum chromodynamics phase diagram [16] and the equation of state in neutron stars [17,18].

The advent of Fermi gases [19-22] has turned crossover physics from a phenomenological approach to gain insight on microscopic theories into a paradigm to be explored under microscopic mechanisms. Among the latter is the Fano-Feshbach (FF) resonance concept [23,24], where the scattering length $a$ and the contact interaction strength $U = 4\pi\hbar^2 a/m$ can be varied at will. The resonance originates from the coupling between a free scattering state of two atoms (open channel) and their bound (closed-channel) state (see Fig. 1). FF resonances can be classified as narrow (broad) depending on the coupling strength being weak (strong) on the Fermi energy scale $\varepsilon_F \equiv \hbar^2 k_F^2/(2m)$. Alternatively, the energy dependence of scattering processes can be embodied in the effective range $r_0$ of the interactions, so that narrow (broad) resonances imply $k_F|r_0| \gg 1$ ($\ll 1$).

[Figure 1: Conceptual map of BEC-BCS crossover theories in the relevant parameter space defined by $-(k_F a)^{-1}$, driving the crossover between the BEC and BCS limits, and $(k_F r_0)^{-1}$, driving the resonance width from narrow to broad [38]. Sketched are the general model frameworks (left), i.e., one- or two-channel, and the theoretical or Quantum Monte Carlo (QMC) methods (right) used to explore the crossover in the narrow (red stripe), intermediate (orange), and broad (green) regions (see text): mean field [44,45], parametrized functional RG [50], QMC [30-33], $GG_0$ approximation [48], and RG [34]. This work bridges the gap of intermediate-to-large values of $(k_F r_0)^{-1}$, including fluctuations via a unifying local-field theory of the boson-fermion Hamiltonian.]

The conceptual map in Fig. 1 summarizes the theories developed so far in this scenario for cold gases.
One-channel models build on the BCS Hamiltonian using $U$ as the unique parameter, and are thus suited to describe broad resonances, where the interparticle spacing is the only relevant length scale. Broad resonances have been the norm so far in experiments, very well explored via self-consistent theories including pairing fluctuations [25-29], Quantum Monte Carlo (QMC) simulations at zero and finite temperature [30-33], and Renormalization Group methods [34]. Intermediate resonances are becoming available in quantum-gas experiments [35-37]. Besides, superfluidity in neutron stars [17] is characterized by $k_F|r_0| \gtrsim 1$. Their theoretical treatment, however, still leaves a number of open questions, stemming from the need of encapsulating the finite width as a second parameter [38-40]. Though QMC results are available [41] in a one-channel model mimicking the finite width via well-barrier potentials (Fig. 1), they are limited to $(k_F|r_0|)^{-1} \gtrsim 1$. Two-channel, boson-fermion (BF) models instead explicitly include the resonant (boson) state composed of two fermions, embodying the original FF mechanism. Introduced in the HTSC context [42,43], the BF model has been proposed for ultracold atoms in a mean-field formulation [44,45], developed within a Random-Phase Approximation (RPA) [46,47], and upgraded to different forms of self-consistent RPA [48,49]. Inclusion of particle-hole fluctuations suited to treat a wide range of FF resonance widths has been performed within the powerful Functional Renormalization Group (FRG) approach, though in a parametrized manner [50-52]. As a matter of fact, the intermediate regime bridging from narrow to broad FF resonances is devoid of simulational methods and largely unexplored by unifying theoretical methods that include fluctuations beyond mean field.
Here we contribute to filling this theoretical gap. At variance with [50], we develop a theory hinging on a single approximation. With respect to the largely explored broad limit, we predict sizeable effects on $T_c$ at intermediate resonance widths, now accessible in current experiments [35-37], both at unitarity and in the BCS limit. We take inspiration from local-field dielectric theories [53], in particular the Singwi-Tosi-Land-Sjölander (STLS) [54,55] formalism, developed in the 70s to describe the low-density normal electron liquid. In our theory, the superfluid-state symmetries are naturally built in, and the Gor'kov and Melik-Barkhudarov screening corrections [56] are recovered. We discuss applications to current experiments and potential extensions to describe exotic phases away from the superfluid phase.

The theory: We consider the boson-fermion (BF) grand-canonical Hamiltonian [42]

(1) $H = \sum_{k,\sigma} \varepsilon_k\, c^\dagger_{k,\sigma} c_{k,\sigma} + \sum_q \varepsilon^B_q\, b^\dagger_q b_q + g \sum_{k,q} \big( b^\dagger_q\, c_{-k+q/2,\downarrow}\, c_{k+q/2,\uparrow} + \mathrm{h.c.} \big) + U_{bg} \sum_{k,k',q} c^\dagger_{k+q/2,\uparrow}\, c^\dagger_{-k+q/2,\downarrow}\, c_{-k'+q/2,\downarrow}\, c_{k'+q/2,\uparrow}.$

The operator $c^\dagger_{k,\sigma}$ ($b^\dagger_k$) creates a spin-1/2 fermion (spinless boson) with momentum $k$. The first two terms represent the fermion and boson kinetic energies $\varepsilon_k = k^2/(2m) - \mu$ and $\varepsilon^B_q = q^2/(4m) - 2\mu + 2\nu$, in terms of the fermionic mass $m$, chemical potential $\mu$, and energy $2\nu$ of the resonant state. The factors $4m$ and $2\mu$ in the dispersion account for the bosons being composed of two fermions. $2\nu$ is the crucial parameter driving the system from the Fermi limit at large detunings $\nu \gg \varepsilon_F$, where bosons exist only as virtual states, to the pure Bose limit $\nu \ll -\varepsilon_F$, with a real macroscopic occupation of the resonant state. Bosons and fermions are hybridized via the coupling with strength $g$, converting two fermions into a boson and vice versa, related to the effective range $r_0$ of the scattering potential via $r_0 = -8\pi\hbar^4/(mg)^2$ [38,57]. The theory embodies two independent physical parameters, $k_F r_0$ and $k_F a$, tuned in the model via $g$ and $\nu$, to which the background scattering length $a_{bg}$ ($U_{bg} \equiv 4\pi\hbar^2 a_{bg}/m$) joins to account for scattering away from resonance.

The method: Our method hinges on the concept of local field in dielectric-function theories, introduced to study the density and spin response of the electron liquid in low-density metals, where the Coulomb interaction dominates over the kinetic energy [54,58]. While referring to the Supplemental Material (SM) [59] for details, here is the essence of the concept. The system response is determined by introducing the exchange and correlation (xc) potential in terms of the so-called local-field factor $G_L$, generated by the polarization density locally induced in the medium and describing the hole dug around a given particle by xc processes. In the STLS scheme [54], $G_L$ is determined by the xc-generalized force driven by the fluid, weighted over the static probability of finding a particle at distance $r$, measured by the pair-correlation function $g(\vec r)$ [53,59]. The set of equations is closed by relating $g(\vec r)$ to the structure factor, and the latter to the imaginary part of the response via the fluctuation-dissipation theorem. In different language, the choice of $G_L$ amounts to defining the irreducible interaction determining the vertex corrections. Inspired by these physical ideas, we now turn to implement them in the BF theory. As detailed in the SM [59], our method naturally embodies the spin-SU(2) and time-reversal symmetries dictated by the Hamiltonian (1), and those emerging from gauge transformations, like the Hugenholtz-Pines theorem [60] ensuring that the excitation spectrum is gapless when the Goldstone mode sets in.
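As a quick numerical illustration of the two driving parameters, the sketch below converts the coupling $g$ into the dimensionless width $k_F|r_0|$ via the relation $r_0 = -8\pi\hbar^4/(mg)^2$ quoted above; the unit choice ($\hbar = m = k_F = 1$) and the trial couplings are illustrative assumptions, not values used in the paper.

```python
import numpy as np

def effective_range(g, m=1.0, hbar=1.0):
    """Effective range from the boson-fermion coupling:
    r0 = -8*pi*hbar**4 / (m*g)**2 (as quoted in the text)."""
    return -8.0 * np.pi * hbar**4 / (m * g) ** 2

kF = 1.0  # units with hbar = m = kF = 1 (assumption)
for g in (1.0, 3.0, 10.0):  # trial couplings, purely illustrative
    r0 = effective_range(g)
    print(f"g = {g:5.1f}  ->  kF*|r0| = {kF * abs(r0):8.3f}")
```

With these conventions, increasing $g$ shrinks $|r_0|$, moving the resonance from narrow ($k_F|r_0| \gg 1$) to broad ($k_F|r_0| \ll 1$), consistent with the classification given above.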
The resulting complex formalism can be represented in a quite compact form but, in order to comprehensively reveal the essence of the theory, we reduce it to a minimum by first focusing on the calculation of the superfluid transition temperature $T_c$. Since the Thouless criterion states that the divergence of the pairing susceptibility $\Pi(q,\omega)$ is related to the divergence of the particle-particle scattering vertex, we start by evaluating the pair correlator

(2) $\Pi(q,\omega) = \langle\langle C_q;\, C^\dagger_q \rangle\rangle_\omega,$

where the operator $C_q \equiv \sum_k c_{-k+q/2,\downarrow}\, c_{k+q/2,\uparrow}$ annihilates a fermion pair with total momentum $q$, and averages are meant at equilibrium as in linear response. We perturb the pairing fields by acting with the source term $J_q(t)$, explicitly breaking the U(1) symmetry, i.e., adding $\sum_q J^*_q(t)\, C_q(t) + J_q(t)\, C^\dagger_q(t)$ to (1). We then compute the system's linear response by the equation-of-motion method. In fact, the generalized Wigner distribution function $f_{k,q} \equiv \langle c_{-k+q/2,\downarrow}\, c_{k+q/2,\uparrow} \rangle$ turns out to be more practical to work with than $C^{(\dagger)}_q$. After Fourier transforming to the frequency domain, we obtain the equation of motion for $f_{k,q}(\omega)$, which is driven by the bare contact and the exchange of a resonant boson through $U_{\rm eff}(q,\omega) = -U_{bg} + g^2 D_0(q,\omega)$; here we have omitted the $\omega$-dependence of $f_{k,q}$ for the sake of simplicity. The average over the unperturbed system yields $\langle D^\alpha_{k,q} \rangle = n_k\, \delta_{q,0}$, with $n_k = 2T \sum_{i\omega_n} G(k, i\omega_n)\, e^{i\omega_n 0^+}$ the momentum distribution, which can be computed from the fermionic Green's function $G(k, i\omega_n)$ once the self-energy is known. Postponing this task, we begin by approximating $G$ with its non-interacting counterpart $G_0$. The third term is more complicated, being an average of four operators. The equation of motion for it would contain higher-order terms in an infinite hierarchy [61]. We close the set of equations by generalizing the STLS idea [54] to the pairing channel. To gain physical insight, we revert to real space and approximate the connected average, for $\alpha = \pm$, by weighting it with the Cooper-pair correlation function $g_{\rm corr}(|R - x|)$. The core of our approximation is this Cooper-pair correlation function, describing the correlations occurring whenever a Cooper pair is destroyed at $R$ and a second one is created at $x$. As an equilibrium average, $g_{\rm corr}$ depends only on $|R - x|$ and not on time. We remark that this is the only approximation in our theory. At variance with other approaches [50], once it is performed, all the rest follows fully consistently.

Applying the extended STLS decoupling and transforming back to $q$ space [59], the static pairing structure factor $S(q)$ naturally appears, related to $g_{\rm corr}(r)$ by Fourier transform. Solving for $f^*_{k,q}$ and using $\langle C^\dagger_q \rangle = \sum_k f^*_{k,q} = \Pi(q,\omega)\, J^*_q$, the pairing susceptibility reads [59]

(3) $\Pi(q,\omega) = \dfrac{\Pi_0(q,\omega)}{1 - U_{\rm eff}(q,\omega)\,[1 - G(q,\omega)]\,\Pi_0(q,\omega)},$

in terms of the non-interacting susceptibility $\Pi_0(q,\omega)$ and the local-field factor

(4) $G(q,\omega) = -\sum_{q'} \dfrac{\Pi_0(q,q';\omega)}{\Pi_0(q,\omega)}\, S(q - q').$

In (4), the function $\Pi_0(q,q';\omega)$ is obtained after replacing $q \to q'$ only in the numerator of the definition of $\Pi_0(q,\omega)$. Finally, $S(q)$ is related to $\Pi(q,\omega)$ via the fluctuation-dissipation theorem (Eq. (5)). The presence of the static $S(q)$ in $G(q,\omega)$ can be alternatively derived by extending the Niklasson calculation [61] to the particle-particle channel: in (3), one can write the equations of motion for the last average and show that $S(q)$ appears as $\omega \to +\infty$.
Eqs. (3)-(5) form a closed set, extending the STLS approach to the presence of a pairing field driven by the microscopic Fano-Feshbach mechanism. Given $T$ and $\mu$, their self-consistent solution provides the pairing susceptibility beyond mean field, and therefore all the fluid properties. We now proceed to determine the evolution of $T_c$ in the crossover. The Thouless criterion amounts to requiring that the denominator in (3) vanishes:

(6) $1 = U_{\rm eff}(0,0)\,[1 - G(0,0)]\,\Pi_0(0,0).$

Notice that (6) can be viewed as the conventional RPA equation for $T_c$, with the interaction corrected by $[1 - G(0,0)]$. We will comment on the physics later on. We now need an equation for $\mu$, deriving the corresponding number equation from a diagrammatic argument. Indeed, (3) can be viewed as an RPA resummation of diagrams as in Fig. 2(a)-(b), consisting of $N$ bubbles connected by $N - 1$ interaction lines, the latter corresponding to the free boson propagator $D_0(q,\omega)$ plus the bare $U_{bg}$, corrected by $[1 - G]$. In essence, the local-field approximation amounts to estimating the particle-particle irreducible vertex in the pairing channel as $U_{\rm eff}(q,\omega)\,[1 - G(q,\omega)]$. Summing up all the closed ring diagrams as in Fig. 2(c), we get the interaction correction $\delta\Omega$ to the non-interacting grand-canonical potential $\Omega_0$ [59]. We obtain the same $\delta\Omega$ by the running-coupling-constant method, after neglecting the intrinsic dependence of $G$ on $g$ and $U_{bg}$. Thus, we expect this approximation to be quantitatively reliable for small to intermediate values of $g$ and $U_{bg}$. We then derive $\mu$ from the number equation (7), $n = -\partial(\Omega_0 + \delta\Omega)/\partial\mu$. Eqs. (4)-(7) are the closed set describing the critical behavior in the crossover for narrow-to-intermediate FF resonances. For their self-consistent solution, one iterates an initial guess for $G$ (e.g., $G = 0$) until convergence.

Having discussed the essence of the theory, we now relax the $G_0$ approximation: we relate the scattering vertex to the dressed propagators, keeping $G$ constant during the loop. Here, $Q_n = (\nu_n, q)$ ($K_n = (\omega_n, k)$) is the 4-vector with a bosonic (fermionic) Matsubara frequency. This is the analogue of a GW approximation [53].

Limiting cases: Despite the complexity of the equations, we can extract relevant analytical limits. We first need to regularize the (otherwise diverging) non-interacting susceptibilities. This requires [57,62] renormalizing $U_{bg}$, $g$ and $\nu$ into $U_R$, $g_R$ and $\nu_R$, exactly as in the two-body problem [59]. From now on we drop $U_{bg}$ for simplicity. In the BCS limit with $\nu_R \gg \varepsilon_F$, the local-field factor $G(0,0)$ evaluated from (4) is positive. Thus, at fixed $\nu_R$, the local-field correction suppresses $T_c$ with respect to its mean-field value. This result is reminiscent of the celebrated Gor'kov and Melik-Barkhudarov (GMB) correction in one-channel calculations [56], stating that in the BCS limit particle-hole processes suppress $T_c$ and the superfluid gap by a factor 2.2 [63]. In our theory, these particle-hole corrections show up in the renormalization of $g$. Indeed, evaluating in the BCS limit $\nu_R \gg \varepsilon_F$ [10] the BF vertex diagram $\Lambda^{BF}_{k,\omega}(q,\Omega)$ to lowest order as in Fig. 3, we obtain a particle-hole correction to the coupling; replacing $g$ with this renormalized coupling in (6) reproduces the GMB-type suppression. From the structure of our equations, using local-field factors amounts to neglecting the $(k,\omega)$ dependence of the full boson-fermion vertex $\Lambda^{BF}_{k,\omega}(q,\Omega)$, as in electron liquids [53]. This defines the perimeter of our theory in capturing the $T_c$ suppression effects.
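To make the procedure concrete, here is a minimal sketch of how $T_c$ would be extracted once the closed set (4)-(7) has converged: the Thouless criterion (6) locates $T_c$ as a zero of the susceptibility denominator, found by standard root finding. The `denominator` kernel below is a dummy stand-in for the actual $1 - U_{\rm eff}[1 - G]\Pi_0$ expression, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

def thouless_tc(denominator, T_lo=1e-3, T_hi=1.0):
    """Tc from the Thouless criterion (6): the temperature at which the
    denominator of the pairing susceptibility crosses zero at q = 0, w = 0.
    `denominator` is a placeholder for the converged kernel of Eqs. (4)-(7)."""
    return brentq(denominator, T_lo, T_hi)

# Dummy kernel, monotonic in T with a zero at T/TF = 0.22 (illustrative only).
Tc = thouless_tc(lambda T: np.log(T / 0.22))
print(f"Tc/TF = {Tc:.3f}")
```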
In the BEC limit with $\nu \ll -\varepsilon_F$, eq. (6) yields $\mu \simeq \nu$ and from (7) one obtains the BEC $T_c$ of $n/2$ bosons with mass $2m$:

$T_c = \dfrac{\pi\hbar^2}{m}\left[\dfrac{n}{2\,\zeta(3/2)}\right]^{2/3}.$

Implications for current experiments: Self-consistent calculations in the narrow-resonance case with $k_F|r_0| \simeq 5$ in the BCS limit yield a more limited suppression with our theory, so that $T_c$ is enhanced by up to 10% over the (perturbative) $T_{c,\mathrm{GMB}}$. The resonance turns out to be characterized by a maximum in $T_c$ [64] that reduces towards the narrow limit [65]: in the broad-resonance limit with $k_F r_0 \simeq 0.5$ we get $T_c/T_F \simeq 0.22$, comparable with the QMC value $T_c/T_F = 0.24(2)$ by Bulgac et al. [31]. At unitarity, varying the resonance width by one order of magnitude in the range $0.5 < k_F|r_0| < 5$ yields variations of the maximum $T_c$ of up to 8% [65].

Conclusions: We have developed a unifying, fully self-consistent theory of superfluidity with pairing fluctuations beyond mean field, hinging on the original Fano-Feshbach microscopic resonant mechanism. Our theory bridges the description of the BCS-BEC crossover from narrow to broad FF resonances, in a region so far devoid of simulational methods and largely unexplored by theoretical methods. We brush up the old-fashioned concept of the local field, successfully developed in electron liquids, and demonstrate its so-far unexplored methodological power to access a complex phase diagram where density, spin, and amplitude/phase fluctuations of a superfluid order parameter can be treated on an equal footing with only one physical approximation. Intermediate resonance widths are becoming accessible by a new class of experiments, like with fermionic Er atoms [66], Fermi-Hubbard simulators [36], or Fermi-Bose mixtures [37], that can provide a test-bed for our theory. A systematic study of relevant observables like $T_c$ and the superfluid gap at $T \ll T_c$ requires a full numerical solution, which is under way [65]. Effects of up to 10% found in $T_c$ with respect to the broad-resonance case open up unexplored physics. Interest is also building up on the equation of state in neutron stars, where observational data are compatible with $k_F a \simeq -13$ and $k_F|r_0| \simeq 2$, though on the fully different fm length scale [16].

SUPPLEMENTAL MATERIAL

This document contains details on local-field factor theories, and the extension of the theory developed in the main text to the case with finite background scattering length.

LOCAL-FIELD FACTOR THEORIES IN A NUTSHELL

[Figure 1: Schematic picture of the local-field factor approximation, built around the induced density $\delta n(q,\omega) = \chi(q,\omega)\, V_{\rm ext}(q,\omega)$ and the response function $\chi(q,\omega) = \chi_0(q,\omega)\,/\,\{1 - v_q\,[1 - G(q,\omega)]\,\chi_0(q,\omega)\}$. In a classical perspective, once an electric field is applied to a dielectric material, the field locally felt by a single electron is reduced by the polarization induced by the other electrons surrounding it. This is a pure correlation effect. In a quantum many-body system, the potential felt by an electron is composed by adding up the external, the Hartree, and a local correlation field. The Hartree field, as a first approximation, describes correlations accounting for the presence of the other electrons under a self-consistent mean field. The correlation field accounts for the fact that the electron itself is actively participating in the creation of the mean field. In the local-field approximation, this last term is written as a purely local one (in frequency and momentum space).]
The local-field factor theories for the dielectric function [1] were originally developed in the 70s to describe the low-density normal electron liquid. We here illustrate the basics of the theory for the simpler case of the density response in a normal liquid, taken to be the electron liquid as in the original formulation. With this strategy, we wish to make it easier for the reader to follow the formalism for the more complex case treated in the main text and detailed in the next section, which is characterized by many variables (density, spin, and amplitude and phase of the order parameter) and by broken symmetry within a boson-fermion Hamiltonian.

The essence of local-field factor theories is well understood from the analogy with the electrodynamic response of a medium in a Lorentz cavity, which we now recall. The concept is sketched in Fig. 1, which we refer to in the following. While the response of the (non-interacting) system to the external potential $V_{\rm ext}(\vec r, t)$ driven by a corresponding electric field amounts to the Lindhard approximation, the Random-Phase Approximation (RPA) represents the response to $V_{\rm ext}(\vec r, t) + V_H(\vec r, t)$, where $V_H(\vec r, t) = \int d\vec r'\, v(|\vec r - \vec r'|)\, \delta n(\vec r', t)$ is the Hartree mean-field potential sourced by the charge $\delta n(\vec r', t)$ induced at the cavity boundaries in the presence of the two-body interaction $v(|\vec r - \vec r'|)$. In fact, RPA is known to badly overestimate the effects of interactions among the particles in the medium, since they are allowed to be closer than they really are. For a beyond-mean-field treatment, one therefore has to consider also the potential sourced by the polarization charges that are locally induced in the medium. This can be expressed as $V_{\rm xc}(q,\omega) = -v_q\, G_L(q,\omega)\, \delta n(q,\omega)$, in terms of the so-called local-field factor $G_L$. In a quantum fluid, this in fact describes the effects of the hole surrounding a given particle, dug in by exchange and correlation processes. The local field $G_L$ can be related to the structure of the fluid, and the latter, via the fluctuation-dissipation theorem, to the response function expressed in terms of $G_L$. This procedure provides a closed set of equations for the response function of the system, to be solved self-consistently.

The whole point then amounts to how $G_L$ can be related to the structure of the fluid. A number of approximations have been developed and tested for the electron liquid, starting from the Hubbard approximation, where a form of $G_L$ is built up that interpolates between known infrared and ultraviolet behaviors [1]. A popular approximation has been developed by Singwi, Tosi, Land, and Sjölander (STLS) [2] and extended to the quantum regime by Hasegawa and Shimizu [3]. Improvements on the long-wavelength behavior of the STLS theory have been developed by Vashishta and Singwi [4], to build in the compressibility sum rule by including the density dependence of the pair correlation function. Notwithstanding their simplicity and differences, these models were capable of successfully describing the ground-state properties in remarkable agreement with Quantum Monte Carlo simulations, and have been extended to treat multicomponent and spin-polarized systems, and current and transverse spin response [1] in normal systems. We have taken inspiration from the STLS scheme, which is known to represent an optimal trade-off between performance and simplicity for most properties.
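The self-consistent closure just outlined can be sketched in a few lines; in the loop below, all kernels are dummy placeholders standing in for the true Lindhard function, the local-field functional of Eq. (2), and the fluctuation-dissipation integral of Eq. (3) given next — only the loop structure, starting from the RPA guess $G = 0$, is meant to be illustrative.

```python
import numpy as np

# Schematic STLS closure on a coarse q-grid; every kernel here is a DUMMY
# stand-in for the true chi0, FDT integral, and local-field functional.
q = np.linspace(0.1, 4.0, 40)
v = 4.0 * np.pi / q**2                 # bare 3D Coulomb interaction v(q)

def response(G):
    chi0 = -1.0 / (1.0 + q**2)         # dummy static chi0(q) < 0
    return chi0 / (1.0 - v * (1.0 - G) * chi0)

def structure_factor(chi):
    return 1.0 + 0.1 * chi             # dummy stand-in for the FDT integral

def local_field(S):
    return 0.5 * (1.0 - S)             # dummy stand-in for Eq. (2)

G = np.zeros_like(q)                   # initial guess G = 0 (i.e., RPA)
for it in range(200):
    G_new = local_field(structure_factor(response(G)))
    if np.max(np.abs(G_new - G)) < 1e-8:
        break
    G = G_new
print(f"converged after {it} iterations; G(q_min) = {G[0]:.5f}")
```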
In the STLS scheme, $G_L$ is determined by noting that the generalized force due to exchange and correlation in the fluid can be expressed as

(1) $\vec F_{\rm xc}(\vec r, t) = -\int d\vec r'\, [g(\vec r - \vec r') - 1]\, \nabla_{\vec r}\, v(|\vec r - \vec r'|)\, \delta n(\vec r', t),$

that is, after weighting the bare force $\nabla_{\vec r} v(|\vec r - \vec r'|)\, \delta n(\vec r', t')$ over the probability $[g(\vec r - \vec r') - 1]$ dictated by the pair-correlation function $g(\vec r)$ [1], and then summing up over all the fluid slices. Notice that the explicit choice has been made of confining all the dynamical effects into the induced density, so that the static $g(\vec r)$ is involved. In momentum space, eq. (1) can be recast in terms of the static structure factor, related to the pair correlation function $g(r)$ by Fourier transform:

(2) $G_L(\vec q) = -\frac{1}{n} \int \frac{d\vec q'}{(2\pi)^3}\; \frac{\vec q \cdot \vec q'}{q^2}\; \frac{v(\vec q')}{v(\vec q)}\; [S(\vec q - \vec q') - 1],$

with $n$ the system density and $v(\vec q)$ the Fourier transform of the bare interaction potential appearing in (1). Given the relationship between $G_L$ and the response function $\chi(\vec q, \omega)$ expressed in Fig. 1, the set of equations is then closed after relating $S(\vec q)$ to the imaginary part of the response function via the fluctuation-dissipation theorem:

(3) $S(\vec q) = -\frac{1}{\pi n} \int_0^{\infty} d\omega\; \chi''(\vec q, \omega),$

with $\chi''(q,\omega)$ the imaginary part of the response function (written here at zero temperature). If we had to reason within the self-consistent integral-equation methods for the Green's functions, the choice of $G_L$ would amount to defining the irreducible interaction which, along with the single-particle Green's function, determines the vertex-function correction. This in turn, closing the self-consistent loop, leads to the proper response, the effective potential, and the single-particle self-energy and Green's function.

CALCULATION OF THE INTERACTION CORRECTION TO THE GRAND-CANONICAL POTENTIAL

The interaction correction $\delta\Omega$ to the non-interacting grand-canonical potential is calculated via the ring diagrams in Fig. 2 of the main text. The resulting expression reads

$\delta\Omega = T \sum_{Q_n} e^{i\nu_n 0^+}\; \ln\!\big[ 1 - U_{\rm eff}(Q_n)\, [1 - G(Q_n)]\, \Pi_0(Q_n) \big],$

where $Q_n \equiv (q, i\nu_n)$ and $\nu_n = 2\pi n T$ is a bosonic Matsubara frequency. The $e^{i\nu_n 0^+}$ factor is needed for the convergence of the Matsubara sum and, as usual, $G$, $\Pi_0$ and $D_0$ are calculated on the imaginary axis. The equation for the density $n$ then follows from $n = -\partial\Omega/\partial\mu$; in the resulting expression, $n_B$ labels the Bose distribution and the factor 2 accounts for each boson being composed of two fermions.

RENORMALIZATION OF THE COUPLINGS

In order to make the non-interacting pairing susceptibility $\Pi_0$ convergent, one has to renormalize the couplings. The simplest choice reads [7,8] in terms of the quantity $\gamma \equiv \sum_k m/k^2$, which subtracts the two-body vacuum divergence. This simple choice is sufficient to cancel the divergence of the integrals defining $\Pi_0(q,\omega)$ and $\Pi_0(q,q';\omega)$. In the following, we drop $U_{bg}$ for simplicity, though it can easily be restored.

EQUATIONS IN THE SUPERFLUID STATE WITH FINITE BACKGROUND SCATTERING LENGTH

In order to focus on the essence of the theory, in the main text we have derived the equations for the critical temperature $T_c$, where the complex formalism is slightly reduced. Here, we extend the theory to the superfluid state. The corresponding equations can be obtained in a similar manner as those for $T_c$: the main difference consists in adding to Hamiltonian (1) of the main text a "mean-field" term containing the pairing gap.
We start by performing a mean-field decomposition and a Nambu transformation on the boson-fermion Hamiltonian (1) of the main text, obtaining the mean-field Hamiltonian (8). Here, we have made use of the definitions with $\tau_i$ the Pauli matrices, $h_k = \varepsilon_k \tau_3 - \Delta \tau_1$ ($\Delta$ is the pairing gap), and $U_{\rm eff} = U + g^2/(2\nu - 2\mu)$. We then define the generalized Wigner distribution functions (in the Nambu basis) and add to the Hamiltonian (8) a source term summed over $q$, $\sigma$, $\sigma'$. We then write down the equations of motion for the Wigner distribution functions by commuting them with the perturbed Hamiltonian. We linearize them at first order in the sources, Fourier transform in time, and eventually obtain the expression

(12) $\omega\, \delta f^{\sigma\sigma'}_{p,q} = h^{\sigma s}_{p+q/2}\, \delta f^{s\sigma'}_{p,q} - \delta f^{\sigma s}_{p,q}\, h^{s\sigma'}_{p-q/2} + J^{\sigma s}_q(\omega)\, n^{s\sigma'}_{p-q/2} - n^{\sigma s}_{p+q/2}\, J^{s\sigma'}_q(\omega) + \dots,$

where the dots stand for the interaction terms and a summation over repeated indices is intended. In addition, $n^{\sigma\sigma'}_k\, \delta_{q,0} = \langle \Psi^\dagger_{k+q/2,\sigma}\, \Psi_{k-q/2,\sigma'} \rangle_0$, with the average performed on the equilibrium system, and $U_{\rm eff}(q,\omega) = -U_{bg} + g^2 D_0(q,\omega)$, with $D_0(q,\omega) = [\omega - \varepsilon^B_q]^{-1}$ the non-interacting boson propagator. The last term in eq. (12) can be decomposed into a connected and an unconnected average. Neglecting the first (connected-average) term would lead to the RPA form of the pairing susceptibility, valid for small $g$ and $U$ values.

If we were in the normal state described in the main text to calculate the critical temperature $T_c$, one would get physical insight by transforming eq. (3) of the main text into the equations of motion for the $f$'s in real space. Then, one would plug the STLS approximation into the connected average, i.e., with $x_\alpha \equiv R + \alpha r/2$, and transform back to momentum space, getting the linearized and decoupled form of the equations of motion (3) of the main text (Eq. (14)). The presence of the static $S(q)$ arises from the equilibrium value of the pair average in the linearized equation of motion and clarifies the choice of $g_{\rm corr}$ for decomposing the connected average as in the STLS approximation. Neglecting the interaction terms in (14) leads to the asymptotic limits for the local-field factor, exactly as performed by Niklasson [5] and by Zhu and Overhauser [6] for the electron liquid.

Going back to the general case (12), we write the connected average in the real-space basis and approximate it with the help of generalized pair-correlation functions, where the subscript $0,C$ means that the connected average is performed over the unperturbed system. We then transform the equations of motion back to momentum space and decompose the Wigner distribution function into its Pauli components. Each Pauli component has a precise physical meaning: indeed, components 1 and 2 describe amplitude and phase fluctuations of the order parameter, respectively, while components 3 and 4 are associated with density and spin fluctuations. After performing the same transformation on the sources $J$, we notice that the response of the system to the different sources $J_i$ will be given by a $4 \times 4$ matrix response function $\Pi$.
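For concreteness, the Pauli-component decomposition invoked above is the standard expansion of a $2\times2$ Nambu matrix on the basis $\{\tau_1, \tau_2, \tau_3, \tau_0\}$; the sketch below assumes this ordering for the four channels (amplitude, phase, density, and the remaining component), which is our reading of a convention not spelled out explicitly in the text.

```python
import numpy as np

# Pauli (Nambu) basis; tau0 is the 2x2 identity.
tau0 = np.eye(2, dtype=complex)
tau1 = np.array([[0, 1], [1, 0]], dtype=complex)
tau2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
tau3 = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_components(f):
    """Expand f = sum_i f_i tau_i using f_i = Tr(tau_i f)/2,
    which follows from the orthogonality Tr(tau_i tau_j) = 2 delta_ij."""
    return [np.trace(t @ f) / 2 for t in (tau1, tau2, tau3, tau0)]

# A generic Hermitian 2x2 Wigner-type matrix and its four channel amplitudes.
f = np.array([[0.3, 0.1 + 0.2j], [0.1 - 0.2j, -0.5]])
f1, f2, f3, f4 = (c.real for c in pauli_components(f))
print(f"amplitude: {f1:.2f}, phase: {f2:.2f}, density: {f3:.2f}, fourth: {f4:.2f}")
```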
Inverting the equations of motion, we then get an expression for $\Pi$ (all multiplications and divisions must be intended in a matrix sense):

(19) $\Pi(q,\omega) = \dfrac{\Pi_0(q,\omega)}{1 - \Pi_0(q,\omega)\; \hat U_{R,\rm eff}(q,\omega)\; [W - G(q,\omega)]},$

with $W = \mathrm{diag}\{1,1,0,0\}$ and $\hat U_{R,\rm eff}(q,\omega)$ the (rotated) matrix effective interaction $\mathrm{diag}\{-U + g^2 D_0(q,\omega),\; -U + g^2 D_0(-q,-\omega),\; 0,\; 0\}$; in particular, this is rotated by a unitary matrix $R$. $\Pi_0(q,\omega)$ is the response function calculated with the mean-field Hamiltonian only, whose expressions can be found, for example, in [9]. It turns out that all the components of $\Pi_0$ connecting a spin fluctuation with another kind of excitation are zero. Indeed, by means of symmetry principles, it can be proved at all orders that spin fluctuations completely decouple from the others [10]. The local-field factor is now a matrix defined as

(20) $G(q,\omega) \equiv -\sum_{q'} \dfrac{W\, \Pi_0(q,q',\omega;\Delta=0)\, W}{W\, \Pi_0(q,\omega;\Delta=0)\, W}\; \big[ S_{11}(q - q') + S_{22}(q - q') \big],$

where $S_{11}$ and $S_{22}$ are the amplitude-amplitude and phase-phase structure factors, respectively. They are related to the 11 and 22 components of the response functions by the fluctuation-dissipation theorem. Inside the summation in eq. (20), the quantity $\Pi_0(q,q',\omega;\Delta=0)$ appears. The dependence of the latter on two momenta has the following meaning: since $\Pi_0$ can be expressed as the integral of a fraction, we assign two different momenta to the numerator and denominator, similarly to what we have done in the main text. Finally, since approximation (15) is expected to be valid in the high-frequency limit, we can neglect $\Delta$ ($\omega \gg \Delta$) in the calculation, so that $W\, \Pi_0(q,q',\omega;\Delta=0)\, W$ and $W\, \Pi_0(q,\omega;\Delta=0)\, W$ are precisely equal to their analogs in the normal state. We have constrained this approximation to provide a response function satisfying all the fundamental spin-flip and time-reversal symmetries of the Hamiltonian (8) [11]. The renormalized boson propagator is

$D(q,\omega) = \dfrac{D_0(q,\omega)}{1 - D_0(q,\omega)\, \Sigma_{\rm STLS}(q,\omega)},$

where the self-energy equals

$\Sigma_{R,\rm STLS}(q,\omega) = g_R^2\; R^{-1} W\; \dfrac{\Pi_0(q,\omega)\, [W - G(q,\omega)]}{1 + U\, \Pi_0(q,\omega)\, [W - G(q,\omega)]}\; W R.$

[Figure 2: Local-field approximation: Feynman-diagram picture. Upper row: two-particle irreducible vertex in the particle-particle channel. Dashed line: exchange of a bare boson plus the background interaction. Shaded triangle: renormalized vertex correction, depending on one frequency and one momentum. Central row: pairing susceptibility (3) as an RPA resummation of bubble diagrams connected by two-particle irreducible vertices. Lower row: RPA-like diagrams for the grand-canonical potential.]

[Figure 3: Lowest-order contribution to the irreducible boson-fermion vertex $\Lambda^{BF}_K(Q)$, showing that the coupling constant $g$ is renormalized by particle-hole processes (dashed lines are bare bosons).]
2019-08-28T11:25:43.000Z
2019-08-28T00:00:00.000
{ "year": 2019, "sha1": "3928c5ac28cff5ba1d8f2bdc8528d493ec1035bb", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3928c5ac28cff5ba1d8f2bdc8528d493ec1035bb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
56041637
pes2o/s2orc
v3-fos-license
Impurity states and Localization in Bilayer Graphene: the Low Impurity Concentration Regime

We study the problem of non-magnetic impurities adsorbed on bilayer graphene in the diluted regime. We analyze the impurity spectral densities for various concentrations and gate fields. We also analyze the effect of the adsorbate on the local density of states (LDOS) of the different C atoms in the structure and present some evidence of strong localization for the electronic states with energies close to the Dirac point.

The problem of adatoms in graphene has been the subject of an intense activity, for they could be used to modify and control the electronic properties of the material. Diluted adatoms or molecules necessarily generate disorder [1] and in some regimes may lead to strong localization of the electronic states [2-7]. Electron localization in graphene is quite peculiar: Dirac fermions tend to elude localization in systems with Anderson-type disorder. However, impurities leading to short-range disorder at the atomic scale generate inter-valley mixing and break the symplectic symmetry, opening the route to strong localization. The problem of adatoms and electron localization in bilayer graphene (BLG), although considered by several groups [11,12], has received much less attention [10,13].

In the most common structure of BLG, known as the Bernal stacking, only one of the two non-equivalent sites (A, B) of the honeycomb lattice of the top layer lies on top of a site of the bottom layer. The resulting structure, shown in Fig. 1, induces a weak coupling of the two layers. The unit cell has four carbon atoms leading to four bands; two of them, having a parabolic dispersion relation around the K and K' points of the Brillouin Zone, touch each other at the Fermi energy. In most of the experimental setups, BLG lies on top of a substrate and the impurities are adsorbed on the top layer only. Due to the difference between the A and B sites of the layer, there is a small difference in the adsorption energy on the two inequivalent sublattices. This difference favors adsorption on the B sites, and in what follows we assume that all impurities are on the B sublattice.

A very interesting aspect of BLG is its response to a gate field [14-18]. An electric field perpendicular to the layers opens a gap at the Fermi level, an effect that can be used to modify the impurity states. Here we study the problem of non-magnetic impurities adsorbed on BLG in the diluted regime. We analyze the impurity spectral densities for various concentrations and gate fields. The effect of the adsorbate on the local density of states (LDOS) of the different C atoms in the structure is analyzed, and we present some evidence of strong localization for the electronic states with energies close to the Dirac point.

The Hamiltonian of the system includes the bilayer Hamiltonian $H_{BLG}$, the impurity contribution $H_{imp}$ and the hybridization term $H_{hyb}$. In what follows, as there are no spin effects, we ignore the spin index. Here $a_{jk}$ and $b_{jk}$ destroy electrons with wavevector $k$ in sublattices A and B, respectively; the subindex $j = 1$ ($j = 2$) refers to the top (bottom) plane. $V$ is the potential induced by the gate voltage, $t$ and $t_\perp$ are the intra- and inter-plane hoppings, respectively, and $\phi(k) = \sum_\delta e^{i k \cdot \delta}$, where $\{\delta\}$ are the three vectors connecting one site with its neighbors in the same plane.
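Before turning to results, the sketch below illustrates the Chebyshev (kernel polynomial) strategy used in the next paragraphs to obtain spectral densities: a toy 1D tight-binding chain with a stochastic trace and a Jackson kernel stands in for the actual BLG-plus-adsorbate Hamiltonian, and all sizes and moment counts are arbitrary choices.

```python
import numpy as np

# Minimal kernel-polynomial (Chebyshev) estimate of the average DOS for a
# 1D tight-binding chain -- a toy stand-in, not the BLG calculation.
L, t, N_mom, R = 512, 1.0, 256, 20
H = np.zeros((L, L))
for i in range(L - 1):
    H[i, i + 1] = H[i + 1, i] = -t
a = 2.2 * t                                # rescale spectrum into (-1, 1)
Ht = H / a

rng = np.random.default_rng(0)
mu = np.zeros(N_mom)
for _ in range(R):                         # stochastic trace over +/-1 vectors
    v0 = rng.choice([-1.0, 1.0], size=L)
    v1 = Ht @ v0
    mu[0] += v0 @ v0
    mu[1] += v0 @ v1
    vm, vn = v0, v1
    for n in range(2, N_mom):
        vnext = 2 * Ht @ vn - vm           # Chebyshev recursion T_{n+1}
        mu[n] += v0 @ vnext
        vm, vn = vn, vnext
mu /= (R * L)

n = np.arange(N_mom)                       # Jackson damping kernel
jackson = ((N_mom - n + 1) * np.cos(np.pi * n / (N_mom + 1))
           + np.sin(np.pi * n / (N_mom + 1)) / np.tan(np.pi / (N_mom + 1))) / (N_mom + 1)
E = np.linspace(-0.99, 0.99, 400)
rho = mu[0] * jackson[0] + 2 * np.sum(
    (jackson[1:] * mu[1:])[:, None] * np.cos(n[1:, None] * np.arccos(E)), axis=0)
rho /= (np.pi * np.sqrt(1 - E**2) * a)     # back to physical energy units
print("DOS near band center:", rho[len(E) // 2])
```

For the infinite chain the exact DOS at the band center is $1/(2\pi t) \approx 0.159$, which the sketch should reproduce to within the kernel broadening.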
We consider adatoms that are bound to a single C atom, $H_{imp} = \sum_l \varepsilon_0 f^\dagger_l f_l$, with $f_l$ the destruction operator of an electron on the impurity orbital of the adatom at site $l$; $\varepsilon_0$ is the energy of the orbital and the sum runs over the sites of the carbon lattice having an impurity on top. Finally, the hybridization term couples each impurity orbital to the C atom underneath it with amplitude $\gamma$. The parameters used to describe the BLG are $t = 0.25$ eV and $t_\perp = t/9$, and we take $\gamma = 2t$ [19,20]. Three values of the bias voltage $V$ are taken: $V = -0.02t$, $V = 0$, and $V = +0.02t$, and without any loss of generality we take $\varepsilon_0 \geq 0$.

We first present results for the impurity contribution $\rho_{imp}(\omega)$ to the total density of states (DOS). We use the Chebyshev polynomials method, which has proven to be very efficient to deal with realistic impurity concentrations [10,21,22]. The average impurity spectral density is then given by $\rho_{imp}(\omega) = -\frac{1}{\pi} \langle \mathrm{Im}\, G_{ll} \rangle_{avg}$, where $G_{ll}$ is the retarded impurity propagator and $\langle \dots \rangle_{avg}$ indicates the configurational average over the impurities. The results for three different concentrations are shown in Fig. 1. There, the three columns correspond to different values of the gate voltage $V$, the rows to different impurity concentrations. Interestingly, for $V = 0$ the impurity spectral density shows a gap close to the Dirac point that increases with increasing impurity concentration. This effect is reminiscent of the gap induced in monolayer graphene when impurities are adsorbed on a single sublattice [23]. For a non-zero gate voltage $V$, the pristine BLG develops a gap at the Dirac point (indicated by vertical lines in the figure). For positive $V$ the gap is partially filled by impurity states. For large impurity concentrations the gap closes, while for small concentrations a reduced gap remains. In the thermodynamic limit, and for all cases discussed above, we expect the gaps to be just pseudogaps with exponentially small DOS [24]. The case of negative $V$ is completely different: the gap of the pristine BLG remains unaltered for small or moderate values of $|V|$. This effect can be understood by looking at the response of a single impurity to the gate field [25]. In the single-impurity case, and for $\varepsilon_0 = 0$ and small values of $V$, a bound state appears in the gap only for one polarity of the field.

To better understand the nature of the electronic states and the effect of the gates, we calculate the LDOS at the different C atoms, in the upper and lower layer. Results for different concentrations and polarities of the bias field are shown in Fig. 2. In all cases the impurity contribution to the LDOS for small energies is large at the A1 sublattice, an effect that is also characteristic of impurities adsorbed on top of a single B1 carbon atom in monolayers [20]. For low concentrations the LDOS of the other sublattices, namely B1, A2 and B2, present only small modifications. In particular, for $V > 0$ the valence band remains essentially unaltered, with its 1D-like van Hove singularity in the A2 sublattice. This strongly suggests that, at least for this polarity, there is no strong localization of the electronic states in the valence band. We could draw similar conclusions from the structure of the LDOS of the A2 sublattice for $V < 0$, as illustrated in the upper panels of Fig. 2. As the concentration increases (lower panel of the figure), the modifications of the LDOS become more important and it is necessary to look for a better and more quantitative criterion for localization.
To this end we evaluate the ratio $R(\omega) = \rho_{typ}(\omega)/\rho(\omega)$, where [26] $\rho_{typ}(\omega) = \exp\big[\frac{1}{N}\sum_l \ln \rho_l(\omega)\big]$ is the typical (geometric) average of the local spectral densities and $\rho(\omega)$ their arithmetic average. Here $N$ is the number of impurities and $\rho_l(\omega) = -\frac{1}{\pi}\, \mathrm{Im}\, G_{ll}$ is the spectral density of the $l$-th impurity. When, for a given energy $\omega$, the states are extended, $\rho_l(\omega)$ has small fluctuations and $\rho_{typ}(\omega) \approx \rho(\omega)$, resulting in $R(\omega) \approx 1$. In contrast, if states are localized, $\rho_{typ}(\omega)$ is dominated by small values of $\rho_l(\omega)$ and $R(\omega) \ll 1$, with $R(\omega) \to 0$ as $N \to \infty$. We stress that our analysis is based only on the properties of the impurity spectral densities and gives a very qualitative estimation of the energy range where we may expect strong localization effects. While, as mentioned above, the Chebyshev polynomials method is very efficient to evaluate the average spectral densities, it becomes numerically costly to evaluate a large number of $\rho_l(\omega)$. To estimate $R(\omega)$ we then resort to the method described in Ref. [6]. The impurity propagator matrix $G$, with matrix elements $G_{ij}(\omega)$, satisfies a Dyson equation in which $I$ is the unit matrix and $\tilde g$ is a matrix whose elements are the propagators of pristine biased BLG, $g_{B1i,B1j}(\omega)$, between impurity sites adsorbed on the B1 sublattice. For large distances and low frequencies we evaluate $g_{B1i,B1j}(\omega)$ in the continuous limit [27]. The required spectral densities are the imaginary part of the diagonal terms of the matrix $G$.

Results for $R(\omega)$ are presented in Fig. 1. They show that strong localization effects are to be expected for energies very close to the Dirac point. In the case of positive bias, when the gate-induced gap is filled by impurity states, these states in the gap are strongly localized. Our results suggest that, independently of the bias and for large impurity concentration, localization effects are also to be expected in the energy window close to the maximum of the impurity LDOS. This last effect is also observed in monolayer graphene, although the localization length may be quite different in the two systems. Away from these energies, only weak localization effects are likely. A more quantitative estimation of the localization phenomena requires evaluation of the localization length.

As a final remark, we mention that for the parameters used in the present work, which are suitable to describe fluorine on graphene, and for the small bias voltages of the figures, a single impurity on an A1 site does not generate a bound state inside the gap for positive $V$; for negative $V$ there is a bound state exponentially close to the gap edge. As a consequence, a small amount of impurities added to the A1 sublattice would not change the results obtained for small energies (within the field-induced gap).

In summary, we have presented results for diluted impurities on BLG with and without gate voltages that open a gap in the pristine sample. We have shown the existence of drastic effects of the polarity of the electric field. For impurity parameters appropriate to describe fluorine on BLG, the gap induced in the pristine sample for positive polarity is filled with strongly localized states. Conversely, for negative polarity the gap remains. In all cases, strong localization occurs only at low energies.
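As a concrete reading of the $R(\omega)$ criterion used above, the sketch below computes the typical-to-average ratio from a set of local spectral densities, taking $\rho_{typ}$ as the geometric mean (our assumption for the definition of Ref. [26]); the synthetic log-normal samples merely illustrate the two regimes.

```python
import numpy as np

def localization_ratio(rho_l, eps=1e-300):
    """R(w) = rho_typ / rho_avg from local spectral densities rho_l[l, w]
    (impurity index l, frequency index w). The typical value is taken as
    the geometric mean -- an assumed reading of the definition in [26]."""
    rho_avg = rho_l.mean(axis=0)
    rho_typ = np.exp(np.log(rho_l + eps).mean(axis=0))
    return rho_typ / rho_avg

# Toy data: narrow fluctuations (extended-like) vs broad ones (localized-like).
rng = np.random.default_rng(1)
narrow = rng.lognormal(0.0, 0.1, size=(500, 1))
broad = rng.lognormal(0.0, 3.0, size=(500, 1))
print("extended-like  R ~", localization_ratio(narrow)[0])   # close to 1
print("localized-like R ~", localization_ratio(broad)[0])    # much below 1
```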
2018-12-07T11:49:14.970Z
2014-12-08T00:00:00.000
{ "year": 2014, "sha1": "720133b0953060ab07aeeaa672bfaefb88246e4c", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/568/5/052003", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "720133b0953060ab07aeeaa672bfaefb88246e4c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
8633491
pes2o/s2orc
v3-fos-license
Bone-marrow mesenchymal stem cells reduce rat intestinal ischemia-reperfusion injury, ZO-1 downregulation and tight junction disruption via a TNF-α-regulated mechanism

AIM: To investigate the effect of bone-marrow mesenchymal stem cells (BM MSCs) on the intestinal mucosa barrier in ischemia/reperfusion (I/R) injury.

METHODS: BM MSCs were isolated from male Sprague-Dawley rats by density gradient centrifugation, cultured, and analyzed by flow cytometry. I/R injury was induced by occlusion of the superior mesenteric artery for 30 min. Rats were treated with saline, BM MSCs (via intramucosal injection) or tumor necrosis factor (TNF)-α blocking antibodies (via the tail vein). I/R injury was assessed using transmission electron microscopy, hematoxylin and eosin (HE) staining, immunohistochemistry, western blotting and enzyme-linked immunosorbent assay.

RESULTS: Intestinal permeability increased, tight junctions (TJs) were disrupted, and zona occludens 1 (ZO-1) was downregulated after I/R injury. BM MSCs reduced intestinal mucosal barrier destruction, ZO-1 downregulation, and TJ disruption. The morphological abnormalities after intestinal I/R injury positively correlated with serum TNF-α levels. Administration of anti-TNF-α IgG or anti-TNF-α receptor 1 antibodies attenuated the intestinal ultrastructural changes, ZO-1 downregulation, and TJ disruption.

CONCLUSION: Altered serum TNF-α levels play an important role in the ability of BM MSCs to protect against intestinal I/R injury.

INTRODUCTION

Digestive organ transplantation and other abdominal surgical procedures can result in different degrees of intestinal ischemia/reperfusion (I/R) injury, which can delay patient recovery and lead to systemic organ failure. Therefore, intestinal I/R injury is an important clinical issue. The small intestine is composed of labile cells that are easily injured by I/R; however, the mechanisms responsible for intestinal I/R injury are unclear. Previous studies have reported that the serum level of tumor necrosis factor (TNF)-α is elevated in patients with severe intestinal I/R injury [1]. TNF-α is a cytokine with broad-spectrum physiological and pathological responsiveness, which is primarily secreted by monocytes and macrophages. In addition to participating in the humoral and cellular immune responses, TNF-α also plays an important role in diseases such as severe hepatitis, septic shock and inflammatory bowel disease [2-6]; however, it is not known whether TNF-α affects the intestinal barrier function during I/R injury.

Bone-marrow mesenchymal stem cells (BM MSCs) are fibroblast-like, pluripotent adult stem cells. BM MSCs can adhere to plastic and grow readily in the laboratory. BM MSCs give rise to mesoderm cells [7,8], and have been reported to differentiate into cells of all three germ layers [9], as well as liver and neural cells [10,11], which gives them potential for the treatment of various diseases. Allogeneic MSCs transplanted into primates via an intravenous route distributed to the gastrointestinal tract, where they proliferated [12]. MSCs have also been shown to have immunomodulatory capabilities due to the secretion of several growth factors [13,14]. BM MSCs reduce intestinal I/R injury in rats [15]. Studies in I/R rodent models have demonstrated that MSCs can beneficially produce paracrine growth factors and anti-inflammatory cytokines [16]. It should be noted that MSCs respond to TNF-α, but do not produce TNF-α [17].
The intestinal mucosa is the physical and metabolic barrier against toxins and pathogens in the gut lumen. Tight junctions (TJs) are the main structures responsible for restricting the paracellular movement of compounds across the intestinal mucosa. Structurally, TJs are composed of cytoplasmic proteins, including the zona occludens proteins ZO-1-3 [18,19], and two distinct transmembrane proteins, occludin and claudin [20,21], which are linked to the actin-based cytoskeleton [22]. TJs function as occlusion barriers by maintaining cellular polarity and homeostasis, and by regulating the permeability of paracellular spaces in the epithelium [23]. ZO-1, a member of the membrane-associated guanylate kinase family of proteins, acts as a scaffold for the organization of transmembrane TJ proteins, and also recruits various signaling molecules and the actin cytoskeleton to TJs [24]. Although previous studies have provided an insight into the molecular structure of TJs, much less is known about TJ functionality under physiological or pathophysiological conditions. Few studies have described the intestinal mucosa ultrastructure or changes in TJs during I/R injury. In this study, we used a rat model of intestinal I/R injury to investigate the effect of BM MSCs on the intestinal mucosa ultrastructure, with an emphasis on the mechanisms of intestinal barrier dysfunction.

MATERIALS AND METHODS

Animals and I/R injury model

Male Sprague-Dawley rats (180-200 g) were obtained from the Military Medical Science Academy of the China People's Liberation Army (PLA; Beijing, China), housed at a constant temperature and humidity, and provided with food and water ad libitum. All animal experimental procedures were approved by the Ethics Committee of the Military Medical Science Academy of the PLA before commencement of the study. One hundred and eight male rats were fasted for 12 h with free access to water before surgery and randomly assigned to five experimental groups. The operative procedures were performed using standard sterile technique under general anesthesia using 5% chloral hydrate (10 mL/kg). All rats were subjected to laparotomy using a midline incision of approximately 3 cm, and the principal branches of the superior mesenteric artery (SMA) were identified. In the Sham group, the SMA was isolated using blunt dissection, without clamping the vessel. In the BM MSCs + I/R injury group, the SMA was occluded for 30 min using an atraumatic microvascular clamp. Immediately after the clamp was released, 1 × 10⁷ male rat BM MSCs suspended in 0.5 mL serum-free Dulbecco's Modified Eagle's Medium (DMEM) were injected into the intestinal submucosa at five different locations. Animals in the normal saline (NS) + I/R injury group underwent I/R followed by the injection of 0.5 mL normal saline into the intestinal submucosa at 10 different locations. The anti-TNF-α + I/R injury group and the anti-TNF-αR1-IgG + I/R injury group were administered anti-TNF-α IgG (1000 µg per rat; United States Biological, Swampscott, MA, United States) or anti-TNF-α R1 antibody (1000 µg per rat; R and D Systems, Minneapolis, MN, United States), respectively. Injections were given via the tail vein after induction of I/R injury. The abdomen was closed and the animals were allowed to recover with free access to tap water and standard pellet rat chow. Rats in the I/R injury, BM MSCs + I/R injury and Sham groups were euthanized at 2, 6, 24, 72 and 144 h after I/R injury (n = 6 at each time point).
Rats in the anti-TNF-α IgG + I/R and anti-TNF-α R1 antibody + I/R injury groups were euthanized at 6 h after I/R injury (n = 6 each). Blood samples and approximately 5 cm of the ileum were collected from each rat. The plasma was separated by centrifugation and stored at -80 ℃ until analysis. The intestinal samples were fixed for histopathological analysis and transmission electron microscopy. Isolation and characterization of BM MSCs BM MSCs were isolated from the femur and tibia of male Sprague-Dawley rats (100-120 g). Red blood cells were lysed using 0.1 mol/L NH4Cl, and the remaining cells were washed, resuspended, and cultured for 4 wk in DMEM/F12 (Gibco, Carlsbad, CA, United States) containing 100 U/mL penicillin, 100 mg/mL streptomycin, and 15% fetal bovine serum. BM MSCs were cultured in an incubator at 37 ℃, 5% CO2 with saturated humidity. The medium was changed every 72 h. Histological measurement of intestinal mucosal injury Serial 2-cm samples were taken from the terminal ileum and fixed with 10% neutral formalin. Tissues were processed, embedded, and stained with hematoxylin and eosin. Three paraffin sections were prepared from each tissue sample. Two pathologists who were blinded to the ated goat anti-rabbit IgG (1:300 in PBS; Histostain-Plus kit, Zymed Laboratories, South San Francisco, CA, United States) for 2 h at room temperature, rinsed in PBS, rinsed in distilled water, then the staining was developed using 3,3'-diaminobenzidine and the sections were counterstained using hematoxylin. Statistical analysis SPSS version 10.0 (SPSS, Chicago, IL, United States) was used for the statistical analysis. Normally distributed data were shown as the mean ± SD. Different groups of data were compared by analysis of variance (ANOVA). The degree of relationship between TNF-α and the Chiu risk score was evaluated by a bivariate correlation. The results was statistically significant when P < 0.05, and was highly significant when P < 0.01. Culture of BM MSCs The cells were confirmed as BM MSCs based on their spindle-shaped morphology, adherence to plastic ( Figure source of the slides analyzed. The degree of histopathological changes was graded semiquantitatively using the histological injury scale previously described by Chiu et al [25] , as follows: 0, normal mucosal villi; 1, development of a subepithelial space, usually at the apex of the villi with capillary congestion; 2, extension of the subepithelial space with moderate lifting of the epithelial layer from the lamina propria; 3, massive epithelial lifting down the sides of the villi and ulceration at the villous tips; 4, denuded villi with dilated capillaries and increased cellularity of the lamina propria; and 5, degradation and disintegration of the lamina propria, hemorrhage, and ulceration. A minimum of six randomly chosen fields from each rat were evaluated and averaged to determine the degree of mucosal damage. Serum D-lactate, diamine oxidase and TNF-α assay The serum levels of TNF-α, D-lactate and diamine oxidase (DAO) were determined using enzyme linked immunosorbent assay kits (R and D Systems) according to the manufacturer's protocol. Detection and observation of intestinal mucosal ultrastructure Ultrathin (70-nm) intestinal sections were prepared using standard techniques and examined using a transmission electron microscope (Hitachi H-600, Tokyo, Japan). 
Immunohistochemical detection of ZO-1 in frozen tissue sections Frozen intestinal tissue sections (5 µm) were fixed on glass slides by incubation in acetone for 10 min at 4 ℃, and then incubated with 3% H2O2 for 20 min at room temperature, blocked in goat serum for 30 min at 37 ℃, and then indirectly immunolabeled with a rabbit anti-mouse polyclonal ZO-1 antibody (1:50; Santa Cruz Biotechnology) using an ABC kit at 4 ℃ overnight (Takara, Dalian, China), according to the manufacturer's instructions. For the negative controls, the primary antibody was replaced with PBS. The sections were then incubated in biotinyl- C B A 1A and 1B), ability to differentiate hepatocytes in vitro (data not shown), and flow cytometry results ( Figure 2). Most of the third-passage adherent cells were positive for CD90, CD29 and RT1A, and negative for the MSC markers, CD45, CD34 and RT1B. Furthermore, over the first three passages, the percentage of CD90 + and CD45cells rapidly increased from 80% to > 98% (Figure 2), which was in agreement with a previous study [26] . Confirmation of donor-derived BM MSCs Labeled BM MSCs homing to the intestine were visible 2, 6, 24, 72 and 144 h after transplantation ( Figure 1C). After the intestine was washed repeatedly with PBS, the labeled BM MSCs were still visible ( Figure 1D), which indicated that the transplanted BM MSCs could home to the intestine and survive long term. Histopathological examination The histopathological findings showed intact villi with no epithelial disruption in the Sham groups. In the NS + I/R injury group, massive destruction of the villi and inflammatory cell infiltration into the lamina propria were evident. In contrast, intestinal samples in the BM MSCs + I/R injury group (BM MSCs group) had significantly less damage in the small intestine. Major pathological changes observed were slight hyperemia, edema, and inflammatory cell infiltration in the mucosa and submucosa, with most of the intestinal villi intact (Figures 3 and 4). Chiu's grade scores of the three groups are shown in Table 1. Serum D-lactate and DAO The levels of D-lactate and DAO significantly increased, reaching a peak at 6 h in the NS + I/R injury and BM MSCs + I/R injury groups, compared to the Sham group. This confirmed that I/R injury increased intestinal permeability. The serum D-lactate and DAO levels in the NS + I/R injury group increased more than twofold compared to the Sham group at 2, 6 and 24 h after I/R (P < 0.01). However, the serum DAO levels in the BM MSCs + I/R injury group were significantly lower than in the NS + I/R group at 2, 6 and 24 h, and the serum D-lactate levels in the BM MSCs + I/R injury group were significantly lower than in the NS + I/R group at 6 and 24 h. At 6 h, the serum D-lactate and DAO levels in the anti-TNF-α + I/R injury and anti-TNF-αR-IgG + I/R injury groups were lower than in the NS + I/R injury group (P < 0.01; Table 2). At 72 and 144 h, the serum DAO levels in the NS + I/R and BM MSCs + I/R injury groups had reduced, but remained higher than in the Sham group, whereas D-lactate levels were not significantly different in the NS + I/R, BM MSCs + I/R and Sham groups at 144 h. These data indicate that serum DAO is a more sensitive marker of intestinal permeability than D-lactate, and also that the administration BM MSCs or TNF-α block- Figure 4 Histopathology of ileum sections of different groups at 6 h after ischemia/reperfusion injury (hematoxylin and eosin, × 100). 
Figure 4 Histopathology of ileum sections of different groups at 6 h after ischemia/reperfusion injury (hematoxylin and eosin, × 100). A: In the ischemia/reperfusion (I/R) injury group there was marked intestinal mucosal injury at 6 h, with degradation of the intestinal mucosa, disintegration of the lamina propria, hemorrhage, and ulceration; B: In the bone-marrow mesenchymal stem cells + I/R injury group at 6 h, the damaged mucosa had partially recovered, with extension of the subepithelial space, moderate lifting of the epithelial layer from the lamina propria, massive epithelial lifting down the sides of the villi, and ulceration at the villous tips; C and D: In the anti-tumor necrosis factor (TNF)-α + I/R injury group and the anti-TNF-αR1-IgG + I/R injury group at 6 h, the damaged mucosa had almost recovered to resemble that in the Sham control group. All values are mean ± SD (n = 6; three paraffin sections were prepared from each tissue sample, and two pathologists who were blinded to the source of the slides analyzed each slide). bP < 0.01 vs the Sham group; dP < 0.01 vs the saline (NS) + I/R injury group; fP < 0.01 vs the bone-marrow mesenchymal stem cells (BM MSCs) + I/R injury group. TNF-α: tumor necrosis factor-α.

Ultrastructural characteristics of the intestinal mucosa
Compared to the Sham group, we observed obvious ultrastructural changes in the intestinal mucosa after I/R injury in the rats from the NS + I/R group. Epithelial cell microvilli were sparsely distributed, disarranged and distorted, and the epithelial cells were swollen or shrunken. The mitochondrial matrices were swollen, cristae were broken, and numerous TJs were disrupted. There was no disruption of TJs in the BM MSCs + I/R injury group, and only swelling of the epithelial cells was observed. The ultrastructural pathological changes in the groups treated with anti-TNF-α and anti-TNF-αR-IgG were also less severe than in the NS + I/R injury group (Figure 5).

Figure 5 Bone-marrow mesenchymal stem cells and tumor necrosis factor-α blockade prevent ultrastructural pathological damage after intestinal ischemia/reperfusion injury. Transmission electron microscopy of the rat intestine after ischemia/reperfusion (I/R) injury. A: Epithelial cells and tight junctions (TJs) (arrows) were intact in the Sham group, × 30000; B: At 2 h after I/R injury in the saline (NS) + I/R injury group, epithelial cells were swollen and shrunken, microvilli and organelles were normal, and TJs (arrows) were disrupted, × 25000; C: At 6 h after I/R injury in the NS + I/R injury group, some microvilli were loose, TJs (arrows) were disrupted, and organelles were swollen with reduced electron density, × 30000; D: At 6 h after I/R injury and administration of bone-marrow mesenchymal stem cells, the microvilli and mitochondria of the epithelial cells were almost normal and TJs (arrows) were not disrupted, × 30000; E and F: TJs (arrows) between epithelial cells were intact 6 h after I/R injury in rats that received anti-tumor necrosis factor (TNF)-α IgG antibody (E, × 25000) or anti-TNF-α R1 antibody (F, × 30000) before I/R injury.

Expression of ZO-1 protein
Immunohistochemical analysis revealed strong ZO-1 expression in the intestinal tissue of the Sham group. In the intestinal tissue of the NS + I/R injury group, ZO-1 was expressed at low levels 2 h after injury, increased slightly at 6 and 24 h, and by 72 h ZO-1-positive signals were detected throughout the entire intestine (Figure 6).
Western blot analysis confirmed that ZO-1 expression decreased more significantly in the NS + I/R group than in the BM MSCs + I/R injury group, particularly at 6 h (Figure 7). Consistent with the immunohistochemical results, western blotting indicated that ZO-1 expression was significantly higher in the BM MSCs + I/R injury group and the two antibody-treated groups at 6 h than in the NS + I/R injury group (Figure 8).

Effect of I/R injury on serum TNF-α levels
Compared to the Sham group, serum TNF-α levels increased significantly in the NS + I/R injury group, peaking at 6 h. The serum level of TNF-α was significantly lower in the BM MSCs + I/R injury group at 6 and 24 h than in the NS + I/R injury group (P < 0.05, Table 3). The morphological abnormalities after intestinal I/R injury were positively correlated with serum TNF-α levels (Table 4).

DISCUSSION
I/R injury to the gut is a common event in a variety of clinical conditions, such as trauma, burn injuries, septic shock, heart and aortic surgery, and liver and small bowel transplantation [27,28]. Intestinal I/R results in edema, apoptosis, necrosis of epithelial cells, and disruption of mucosal integrity and small intestine function, which in turn increase mucosal and vascular permeability, bacterial translocation, and the risk of systemic inflammatory response syndrome, multiple organ dysfunction and death [29,30]. Until recently, no effective treatments existed for intestinal I/R injury. Research has suggested that BM MSCs could play a role in the treatment of I/R injury in the heart, kidney and brain [31-33]; however, studies of the effects of BM MSCs in intestinal disorders are scarce. In the current study, the therapeutic potential of BM MSCs was evaluated in an experimental rat model of I/R injury, which led to disruption of intestinal mechanical barrier function. The results of this study suggest that BM MSCs can effectively reduce both the intestinal permeability and the pathological damage associated with I/R injury.

BM MSCs have the potential for multidirectional differentiation and participate in colonic mucosal regeneration [34]. In this study, intestinal I/R injury led to necrosis and the loss of a large number of intestinal epithelial cells, and BM MSCs reduced I/R injury and protected the intestine. Stem cell homing processes are thought to play a crucial role in the success of cell therapy for organ function disorders. Intravenous or intra-arterial infusions of BM MSCs often result in the entrapment of the administered cells in organ capillary beds, especially in the lung and the liver [35]. Transplantation of BM MSCs by intravenous or intra-arterial routes usually results in a low engraftment rate; therefore, increasing the number of MSCs within the injured area would improve the efficacy of cell therapy. Zhang et al [36] used gene-modified MSCs to enhance the homing rate of BM MSCs to the irradiated intestine by 20% using an intravenous delivery route. However, using viral vectors to transfect MSCs may decrease their viability. In this study, we directly injected MSCs into the wall of the intestine after I/R injury, which significantly increased the homing of MSCs into the I/R-damaged intestinal mucosa. This indicated that direct injection of BM MSCs into the intestine may provide a better method to enhance the homing rate.
The intestinal mucosal barrier is composed of mucosal fluid, microvilli, epithelial mucosal cell TJs, and other special structures; TJs are the most important structures in the mucosal barrier. The mechanisms responsible for intestinal I/R injury include cytotoxic effects and alterations in the structure of the intestinal mucosa [15]. However, few studies have examined the intestinal mucosa and TJ ultrastructure during I/R injury, and the role and mechanism of action of BM MSCs in intestinal I/R injury are unclear. In the present study, we found that severe intestinal mucosal damage occurred 2, 6 and 24 h after I/R injury. The morphological alterations to the intestinal mucosa included the shedding of epithelial cells, fracturing of villi, fusion of adjacent villi, mucosal atrophy and edema. Disruption of TJs between enterocytes, and damage to the mitochondria and endoplasmic reticulum, were also observed. Although damage to the intestinal mucosa plays a significant role in the permeability of the intestine, the mechanisms that cause this damage are poorly characterized. Moreover, we observed that intestinal permeability increased 2, 6 and 24 h after I/R injury, with simultaneous disruption of TJ integrity. Additionally, the administration of BM MSCs significantly attenuated the histological damage due to I/R injury (Figure 4) and reduced intestinal permeability (Table 2), compared with the NS + I/R injury group. Therefore, we hypothesized that changes in intestinal permeability may occur due to the disruption of TJs between intestinal mucosal epithelial cells.

To understand the mechanism of TJ disruption, we investigated the expression of ZO-1. ZO-1 was the first TJ-related protein to be identified [37], and it connects the actin cytoskeleton to the transmembrane occludin proteins [38]. ZO-1 plays a vital role in the maintenance of intestinal mucosal barrier integrity and TJs during pathological insults [39]. In this study, ZO-1 expression in the intestinal mucosa significantly decreased after I/R injury; thus, we concluded that decreased ZO-1 expression led to TJ disruption and possibly increased gut permeability.

Next, we examined the mechanism of TJ disruption and reduced ZO-1 protein expression during I/R injury. We observed that TNF-α increased at 2, 6 and 24 h after I/R injury, and correlated with ZO-1 downregulation and TJ disruption. The pathophysiological processes of I/R injury in vivo are complex, and it is thought that TNF-α may play an important role. Inflammation involves the sequential activation of signaling pathways that result in the production of pro- and anti-inflammatory mediators during I/R injury. Among the proinflammatory mediators, the TNF-α and TNF-αR1 systems play central roles in the physiological regulation of intestinal barrier function [40,41], and both TNF-α and interferon (IFN)-γ can induce intestinal epithelial barrier dysfunction [42]. Some cytokines can induce endocytosis [43] and internalization of epithelial TJ proteins [44]. In mice with fulminant hepatic failure, reduced expression of occludin in intestinal epithelial cells was linked to increased TNF-α production [4]. TNF-α can also induce an increase in Caco-2 cell TJ permeability via nuclear factor-κB activation, leading to downregulation of ZO-1 protein expression and altered junctional localization [38,45].
We hypothesize that TNF-α acts as an initiator, which can induce the expression of other cytokines such as IL-6 and IFN-γ, which then initiate and aggravate the development of I/R injury and disrupt intestinal TJs. After the transplantation of BM MSCs, the serum TNF-α level significantly decreased, the damaged mucosa recovered, ZO-1 expression increased and intestinal permeability significantly improved. TNF-α is known to inhibit the expression of ZO-1 [44], and if the TJs are damaged, intestinal barrier dysfunction will occur. Research has confirmed that BM MSCs can inhibit the generation of TNF-α in dendritic cells in vitro [46,47], and therefore we hypothesized that BM MSCs could repair intestinal I/R injury by inhibiting the release of TNF-α. In order to study the role of TNF-α further, we used anti-TNF-α and anti-TNFR antibodies: the TNF-α antibody neutralizes TNF-α, whereas the anti-TNFR antibody blocks the binding of TNF-α to the TNF-α receptor. TNF-α blockade significantly decreased the severity of I/R injury, which indicates that TNF-α is an important mediator of intestinal mucosal damage during I/R injury. These findings suggest that I/R injury increases TNF-α, leading to downregulation of ZO-1 protein expression, whereas BM MSCs can inhibit production of TNF-α, leading to increased expression of ZO-1 and reduced intestinal mucosal damage. These effects were observed over a relatively short observation period, and long-term studies are required to elucidate whether TNF-α exerts long-lasting effects during I/R injury.

In summary, this study demonstrates that the submucosal infusion of BM MSCs decreased intestinal permeability and preserved intestinal mechanical barrier function after I/R injury in rats, through a mechanism linked to reduced serum TNF-α levels and increased expression of the intestinal TJ protein ZO-1. Future studies using exogenous or autologous BM MSCs to prevent or modulate intestinal I/R injuries are required to assess the clinical potential of BM MSCs. The mechanisms by which BM MSCs and TNF-α blockade protect against I/R-induced disruption of intestinal barrier function remain to be further investigated.

Disruption of the intestinal mucosa and the consequent increase in permeability after I/R injury may be due to reduced levels of the TJ-associated protein ZO-1. BM MSCs restored the epithelial structure, promoted the recovery of intestinal permeability, increased ZO-1 protein expression and protected against intestinal I/R injury. TNF-α plays an important role in the ability of BM MSCs to protect against intestinal I/R injury, as the epithelial structure remained normal, and changes in intestinal permeability and ZO-1 protein expression were reduced, when rats were treated with anti-TNF-α IgG antibody or anti-TNF-α R1 antibodies before I/R injury. This study confirms that high levels of TNF-α damage TJs and downregulate ZO-1 protein expression in vivo. The mechanism of TNF-α-induced change during I/R injury is complex and requires further study.
Background Digestive organ transplantation and other abdominal surgical procedures can result in different degrees of intestinal ischemia/reperfusion (I/R) injury, which can delay patient recovery and lead to systemic organ failure. Therefore, intestinal I/R injury is an important clinical issue. Bone-marrow mesenchymal stem cells (BM MSCs) can protect against I/R injury; however, the mechanism is unclear. Although previous studies have provided an insight into the molecular structure of tight junctions (TJs), much less is known about TJ functionality under physiological or pathophysiological conditions. Few studies have described the intestinal mucosa ultrastructure or changes in TJs during intestinal I/R injury. In this study, the authors used a rat model of intestinal I/R injury to investigate the effect of BM MSCs on the intestinal mucosa ultrastructure, with an emphasis on the mechanisms of intestinal barrier dysfunction. Research frontiers In this study, the authors demonstrated that the submucosal infusion of BM MSCs decreased intestinal permeability and preserved intestinal mechanical barrier function after I/R injury in rats, in a mechanism linked to reduced serum tumor necrosis factor (TNF)-α levels and the increased expression of the intestinal TJ protein zona occludens (ZO)-1. Altered serum TNF-α levels play an important role in the ability of BM MSCs to protect against intestinal I/R injury. Innovations and breakthroughs Recent reports have highlighted the importance of BM MSCs reducing intestinal I/R injury in rats. Although previous studies have provided an insight into the molecular structure of TJs, much less is known about TJ functionality under physiological or pathophysiological conditions. Few studies have described the intestinal mucosa ultrastructure or changes in TJs during intestinal I/R injury. This is believed to be the first study to report that BM MSCs reduce rat intestinal I/R injury, ZO-1 downregulation, and TJ disruption via a TNF-α-regulated mechanism. Applications By understanding how BM MSCs reduce rat intestinal I/R injury, this study may represent a future strategy for therapeutic intervention in the treatment of patients with digestive organ transplantation and other abdominal surgical procedures that result in different degrees of intestinal I/R injury, which can delay patient recovery and lead to systemic organ failure. Terminology TJs are the main structures responsible for restricting the paracellular movement of compounds across the intestinal mucosa. Structurally, TJs are composed of cytoplasmic proteins, including ZO-1-3 and two distinct transmembrane proteins, occludin and claudin, which are linked to the actin-based cytoskeleton. ZO-1, as a scaffold for the organization of transmembrane TJ proteins, also recruits various signaling molecules and the actin cytoskeleton to TJs. Peer review This paper shows the impact of BM MSCs on rat intestinal I/R injury. This study will be of interest and the paper is clearly written.
The Effect of Social Context and Social Scale on the Perception of Relationships in Monk Parakeets

Social relationships formed within a network of interacting group members can have a profound impact on an individual's behavior and fitness. However, we have little understanding of how individuals perceive their relationships and how this perception relates to our external measures of interactions. We investigated the perception of affiliative and agonistic relationships at both the dyadic and emergent social levels in two captive groups of monk parakeets (Myiopsitta monachus, n = 21 and 19) using social network analysis and playback experiments. At the dyadic social scale, individuals directed less aggression towards their strong affiliative partners and more aggression towards non-partner neighbors. At the emergent social scale, there was no association between relationships in different social contexts, and an individual's dominance rank did not correlate with its popularity rank. Playback response patterns were mainly driven by relationships in affiliative social contexts at the dyadic scale. In both groups, individual responses to playback experiments were significantly affected by strong affiliative relationships at the dyadic social scale, albeit in different directions in the two groups. Response patterns were also affected by affiliative relationships at the emergent social scale, but only in one of the two groups. Within affiliative relationships, those at the dyadic social scale were perceived by individuals in both groups, but those at the emergent social scale only affected responses in one group. These results provide preliminary evidence that relationships in affiliative social contexts may be perceived as more important than agonistic relationships in captive monk parakeet groups. Our approach could be used in a wide range of social species, and comparative analyses could provide important insight into how individuals perceive relationships across social contexts and social scales.

The presence, type, and strength of an individual's social relationships can have profound effects on its behavior and fitness. Social relationships can form and operate within different social contexts and on different social scales. Relationships in different social contexts can form as some individuals interact in an affiliative context, such as grooming each other or sharing food, while others interact in an agonistic context, such as fighting with each other. Relationships within affiliative and agonistic social contexts can also form and operate on different social scales.
Dyadic social relationships are those built from direct pairwise interactions or associations between two specific individuals (Hinde, 1976a; Hinde, 1976b), such as the affiliative relationships between females seen in many primate species (Seyfarth, 1977; Silk et al., 2003; Silk et al., 2009). Emergent social properties are also derived from interactions among individuals, but develop at a more global level through all the direct and indirect interactions among individuals in the entire group, such as when many pairwise aggression events contribute to the formation of a group-level dominance hierarchy within which each individual holds a dominance rank (Sawyer, 2005; Bradbury and Vehrencamp, 2014). This rank becomes an emergent social attribute of the individual, and even individuals that did not interact can be compared in terms of their difference in rank.

The social context and social scale in which relationships form and operate can affect the types of benefits an individual gains from its social network. For example, stable affiliative relationships improve infant survival in female baboons (Papio cynocephalus; Silk et al., 2009), and associations with group members increase access to essential resources in herds of Grevy's zebra (Equus grevyi; Sundaresan et al., 2007). Female baboons and zebras each form social relationships within their groups based on direct interactions at a dyadic scale, and it is through the strength of these affiliative dyadic relationships that participants benefit. In many primate groups, winning agonistic interactions and gaining dominance in a group allows males to monopolize mating opportunities and increase their long-term reproductive success (Kutsukake and Nunn, 2006). In this case, male aggression at the dyadic scale contributes to dominance status, which emerges from the entirety of the interaction history within the whole group; males are able to monopolize access to mating opportunities through the emergent social property of dominance. In social birds such as manakins (Chiroxiphia linearis and Pipra filicauda), affiliative relationships among males that are formed at the dyadic social scale contribute to each individual's centrality in its social network at the emergent scale, and males that achieve higher centrality have higher success (McDonald, 2007; Ryder et al., 2008).

Despite these recent insights into the benefits of relationships in different social contexts and social scales, we have a limited understanding of how individuals perceive their relationships (Barrett and Henzi, 2002). Evaluating relationship perception critically depends on the underlying information used to quantify the dyadic and emergent social relationships. Network analysis is a tool that allows quantification and comparison of relationships across social contexts, such as affiliative and agonistic relationships, and across social scales, from dyadic relationships to emergent social properties (de Silva et al., 2011; Brush et al., 2013; Hobson et al., 2013; Pinter-Wollman et al., 2014; Bradbury and Vehrencamp, 2014). A researcher equipped with sophisticated analytical tools can use observed interactions to quantify the presence and strength of relationships among individuals across different social contexts or social scales.
However, if the quantification of the relationship is not well correlated with the animal's perception of the presence or importance of its ties, dyadic network metrics and emergent social properties may fail to accurately predict individual behavior, social investment patterns, and the role that relationships play in fitness outcomes.

Audio playback is one potential method for evaluating how individuals perceive ties. In playback experiments, test subjects are presented with acoustic communication signals from other individuals, and aspects of the response, such as response speed or strength, are then quantified. These responses can then be examined for associations with different types of relationships, allowing researchers to infer how an individual perceives the relative importance of different types of relationships with particular individuals. Playback experiments are widely used in animal behavior studies to determine whether individuals can discriminate among categories of calls. For example, playbacks have helped establish that animals preferentially respond to categories of individuals, and are able to discriminate between kin and non-kin, same-dialect and foreign-dialect callers, associates and strangers, and mates and non-mates (Wanker et al., 1998; Wright and Dorin, 2001; Buhrman-Deever et al., 2008; Berg et al., 2011). Playback experiments have also established that emergent social relationships, such as dominance rank, are recognized by individuals in several primate species (Silk, 1999; Bergman et al., 2003; Kitchen et al., 2005; Schino et al., 2006). However, to our knowledge, playbacks have not been used to determine whether response patterns differ across affiliative and agonistic social contexts and dyadic and emergent social scales, or to evaluate how individuals perceive the relative importance of different types of social relationships.

We used a combination of network analysis, network visualization, and playback experiments to assess the perception of social relationships in the monk parakeet Myiopsitta monachus across social contexts and social scales. The monk parakeet nests colonially, often in communal structures; flocks undergo frequent fissions and fusions; and groups exhibit complex social structure (Eberhard, 1998; Spreyer and Bucher, 1998; Hobson et al., 2013; Hobson et al., 2014). Previous work has demonstrated that monk parakeets form and maintain social relationships at the dyadic scale, across both affiliative and agonistic social contexts (Hobson et al., 2013), and that individuals attain a dominance rank at the emergent social scale. Here, we expand on our previous research to understand how individuals differentially respond to dyadic and emergent relationships across affiliative and agonistic social contexts.

For this study, we define the 'social context' of relationships as affiliative (based on peaceful proximity) and agonistic (based on aggressive events). We describe each individual's most preferred affiliative associate(s) as 'partners' rather than 'mates' because some of the strongest associations occurred outside of a breeding or pair-bond context (i.e., between two males that were affiliative but did not exhibit courtship behaviors toward one another) and a few individuals had strong partnerships with more than one individual (Group 2 contained two strongly interrelated triads). We describe weaker associations as 'non-partner' relationships.
We define the 'social scale' of relationships as dyadic (pairwise relationships between two individuals) and emergent (social attributes that summarize each individual's societal position within the group). We define rank as an individual emergent attribute that reflects that individual's direct and indirect interactions within the group in agonistic and affiliative contexts, with dominance rank based on patterns of aggression and popularity rank based on patterns of peaceful proximity.

The goals of this study were to (1) understand the association between affiliative and agonistic social relationships at both dyadic and emergent social scales, (2) develop a network visualization method that integrates across dyadic and emergent social scales to facilitate comparison between social contexts, (3) test whether responses to playback stimulus calls could be predicted by the social context or social scale of relationships, and (4) use playback response patterns to infer how individuals perceived different types of social relationships.

Study site & population
This study was conducted with a population of captive monk parakeets housed at the Florida Field Station of the USDA National Wildlife Research Center in Gainesville, Florida, from June through August 2008. Individuals were given unique facial marks using permanent nontoxic pens (Sharpie, Inc.®) to facilitate individual identification and then randomly allocated to two replicate social groups (Group 1: n = 21; Group 2: n = 19; marks did not measurably affect interactions, unpub. data). Each group was introduced sequentially into a large 2,025 m² outdoor semi-natural flight pen that was visibly delineated into approximately 10 m² quadrats to facilitate collection of spatial location data. Each group occupied the flight pen for 24 days (Group 1: 14 June-07 July; Group 2: 08-31 July). All activities conducted during this study were approved by New Mexico State University Animal Care and Use Committee protocol #2006-027 (additional details available in Hobson et al., 2013; Hobson et al., 2014).

Observation methods & data restrictions
Observations of social behavior were made from blinds by 1 to 4 observers between 07:00 and 19:00 using a mix of scan and all-occurrence sampling methods (Whitehead, 2008; see also Hobson et al., 2013; Hobson et al., 2014). In this study, we focused on observations of directed aggression and affiliative nearest neighbor identities. Using all-occurrence sampling, we recorded the identities of individuals involved in unidirectional dyadic aggression events, in which one individual physically supplanted or displaced another individual, to determine the winner (aggressor) and the loser (target of aggression) for each interaction (as in Hobson et al., 2014). Using scan sampling, we completed a scan at least every 10 min that identified the location of each individual within the flight pen, and recorded the identities of each individual's nearest neighbor within a single quadrat (individuals alone in a quadrat had no nearest neighbors). For this study, we restricted the affiliation and aggression data to periods following relationship stabilization. For aggressive events, we restricted our aggression data to include only the final 3 weeks of observations for each replicate group, because previous results showed that aggression patterns stabilized in both groups following the first week of interactions (Hobson et al., 2013; unpublished data).
For affiliative nearest neighbor observations, temporal data restrictions were not necessary because nearest neighbor dyadic tie strength stabilized quickly within the first days of group occupancy in the flight pen (Hobson et al., 2013).

Quantification of dyadic social relationships
We quantified dyadic social relationship strength in affiliative and agonistic social contexts using our observations of aggression and nearest neighbors. Both aggression events and nearest neighbor observations could only occur when individuals were in spatial proximity. For aggression networks, we used observations of aggressive events to determine the proportion of total aggression each individual directed towards each other individual. We used observations of nearest neighbors to determine affiliative tie strength. Grooming and proximity are often used as proxies for affiliative relationship strength (Von Rohr et al., 2012); because monk parakeets are highly selective in their allopreening, and generally groom only their partners, we focused on close spatial proximity between neighbors to estimate affiliative association strength. We determined which individual was nearest to each focal individual within the same quadrat. These observations resulted in directional measures of nearest neighbors, because individuals were not always nearest to each other (individual A could be nearest B from the perspective of B, even though individual B is nearest C from the perspective of C). We used only nonaggressive observations of nearest neighbors (peaceful proximity) to determine the proportion of observations for which each individual was nearest to each of its potential social associates. We constructed an aggression network and an affiliation network for each of the two replicate groups. These networks were weighted, directed, and asymmetric, and relationship strength between any two individuals reflected the proportion of an individual's total affiliative or agonistic effort directed at each other individual in the group. To determine how relationships at the dyadic social scale were correlated across social contexts, we correlated aggression proportion and neighbor proportion using the Quadratic Assignment Procedure (QAP) in the program UCInet 6.519 (Borgatti et al., 2002; 10,000 replicates).
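As a minimal sketch of this dyadic quantification and QAP test (the analysis itself was run in UCInet 6.519; the sna package in R offers an equivalent routine), with hypothetical count matrices standing in for the observed data:

```r
# Sketch, not the authors' code: build weighted directed proportion
# networks from raw counts and run a QAP correlation between them.
library(sna)

set.seed(42)
n <- 21                                       # e.g., Group 1
agg_counts <- matrix(rpois(n * n, 2), n, n)   # hypothetical aggression counts
nbr_counts <- matrix(rpois(n * n, 5), n, n)   # hypothetical neighbor counts
diag(agg_counts) <- diag(nbr_counts) <- 0

# Each cell becomes the proportion of the row individual's total
# aggressive or affiliative effort directed at the column individual
agg_prop <- agg_counts / rowSums(agg_counts)
nbr_prop <- nbr_counts / rowSums(nbr_counts)

# QAP correlation between the two networks (10,000 permutations, as above)
qt <- qaptest(list(agg_prop, nbr_prop), gcor, g1 = 1, g2 = 2, reps = 10000)
summary(qt)
```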
Quantification of emergent social properties
We quantified two emergent social properties, dominance and popularity, by measuring each individual's centrality within agonistic and affiliative networks. Here, we define 'dominance' and 'popularity' as emergent social properties based on an individual's centrality in agonistic networks and affiliative networks, respectively. We quantified dominance and popularity using eigenvector centrality, which determines an individual's position within a social group through a recursive process that uses both direct and indirect dyadic interactions (Bonacich, 1987; Newman, 2001; Newman, 2004; Bonacich, 2007). Eigenvector centrality is one of the primary algorithms for determining consensus beliefs, such as rank, within a network (Flack and Krakauer, 2006; Brush et al., 2013). We used the matrices of counts of observations of aggression and nearest neighbors for all individuals in each of the two groups. We restricted the neighbor data to exclude observations where an individual was nearest neighbors with its primary partner (or partners, in the case of two closely associated triads in Group 2), because previous results showed that the pair is the fundamental unit of social structure. Including only observations of non-partner neighbors allowed us to focus on popularity among non-partnered individuals, which better reflected an individual's emergent popularity. None of these matrices contained completely isolated individuals. We normalized the count matrices to reflect probabilities of interactions and added a very small regularizing term (10⁻¹²) to ensure that all individuals had a nonzero probability of both acting and receiving an aggression or neighbor observation. We used these transition matrices to calculate eigenvector centrality in the R package igraph for directed and weighted ties (Csardi and Nepusz, 2006). This analysis provided a continuous measure of dominance and popularity centrality and allowed us to differentiate between adjacently-ranked individuals that had similar levels of dominance or popularity centrality and those which exhibited larger differences in centrality measures. We used these centrality measures to determine the rank order of individuals for both dominance and popularity. For dominance, centrality measures were lowest for the highest-ranked individuals: an individual with a high dominance centrality was considered a low-ranked subordinate, while an individual with a low dominance centrality was a high-ranked, dominant individual. For popularity, centrality measures were highest for the highest-ranked individuals, as these were often the nearest neighbor for many other individuals.

Within Groups 1 and 2, we investigated the association between these emergent social properties by testing the correlation between an individual's dominance rank and popularity rank (Spearman rank correlation test, R 3.1.1, R Core Team 2014). We expected that if dominance rank was positively associated with popularity rank, individuals that attained high dominance would also be most popular.
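A minimal sketch of the centrality calculation described above, assuming hypothetical count matrices in place of the observed data (the study's own code is not reproduced here):

```r
# Sketch: dominance and popularity as eigenvector centralities of the
# directed, weighted aggression and non-partner neighbor networks.
library(igraph)

set.seed(7)
n <- 19                                                 # e.g., Group 2
agg <- matrix(rpois(n * n, 2), n, n); diag(agg) <- 0    # hypothetical counts
nbr <- matrix(rpois(n * n, 5), n, n); diag(nbr) <- 0

centrality <- function(counts) {
  # Normalize counts to interaction probabilities, add the small
  # regularizing term (1e-12) so every individual can act and receive,
  # then compute eigenvector centrality on the directed weighted graph
  P <- counts / sum(counts) + 1e-12
  g <- graph_from_adjacency_matrix(P, mode = "directed",
                                   weighted = TRUE, diag = FALSE)
  eigen_centrality(g, directed = TRUE, weights = E(g)$weight)$vector
}

# Dominance: lowest centrality = highest rank (rank 1 = most dominant);
# popularity: highest centrality = highest rank
dominance_rank  <- rank(centrality(agg))
popularity_rank <- rank(-centrality(nbr))

# Spearman correlation between the two emergent ranks (as tested above)
cor.test(dominance_rank, popularity_rank, method = "spearman")
```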
Attribute-ordered network visualization
Patterns among different types of networks can be visually compared by plotting connections among individuals in network graphs. Network graph layout is a multiobjective optimization problem, where many methods optimize for aesthetic graphs that minimize edge crossings and maximize symmetry (Coleman and Parker, 1996; Purchase, 2000). However, these methods are often inherently unpredictable and inconsistent, lack perceptual uniformity, and result in graphs that resemble "hairballs" that are difficult to interpret or compare (Krzywinski et al., 2012). Many popular layout methods are especially ineffective at plotting dense, highly-connected networks with many bidirectional weighted ties. Emergent social properties may be included in network diagrams by varying node size with an individual attribute, but this method cannot effectively depict ordered attributes in a way that is easily comparable across graphs. Here, we develop a new network visualization method, "attribute-ordered networks", inspired by the hive plot (Krzywinski et al., 2012) and arc diagram (Wattenberg, 2002) layout methods. We designed our attribute-ordered network layout with the goal of plotting weighted, bidirectional, asymmetric association networks along with rank-ordered individual attributes in a manner that facilitated comparison of the same set of individuals across different social contexts. We plotted three attribute-ordered networks for each of our two replicate groups: aggression-dominance (Fig. 1A, 2A), affiliation-popularity (Fig. 1B, 2B), and response-response strength (Fig. 1C, 2C).

(Figure 1 caption, fragment: ties to the right side of the networks, in blue, show how higher-ranked individuals interacted with lower-ranked individuals, while ties to the left side of the networks, in red, show how lower-ranked individuals interacted with higher-ranked individuals. Attribute-ordered networks were drawn with igraph.)
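The published figures were drawn with igraph, but the authors' plotting code is not reproduced here ("R code available upon request"); the following is only a rough sketch of the layout idea under stated assumptions: nodes stacked in rank order, with ties arcing to one side (blue) when directed down the rank order and to the other side (red) when directed up it. All data below are placeholders.

```r
# Rough sketch of an attribute-ordered layout (not the authors' code)
library(igraph)

set.seed(1)
n <- 10
W <- matrix(runif(n * n) * (runif(n * n) < 0.2), n, n); diag(W) <- 0
g  <- graph_from_adjacency_matrix(W, mode = "directed", weighted = TRUE)
rk <- sample(n)                        # hypothetical rank order (1 = top)

lay  <- cbind(0, -rk)                  # one vertical column, ordered by rank
el   <- as_edgelist(g, names = FALSE)
down <- rk[el[, 1]] < rk[el[, 2]]      # tie directed from higher to lower rank

plot(g, layout = lay,
     edge.color  = ifelse(down, "blue", "red"),
     edge.curved = ifelse(down, 0.6, -0.6),  # blue arcs right, red arcs left
     edge.width  = 4 * E(g)$weight,
     vertex.size = 8, vertex.label = rk,
     edge.arrow.size = 0.3)
```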
Call recording and processing
We recorded contact calls from all individuals to use as the auditory stimulus during playback trials. We focused on these calls because parrots are thought to use contact calls to maintain or regain contact with group members (Vehrencamp et al., 2003; Balsby and Bradbury, 2009; Scarl and Bradbury, 2009; Balsby and Adams, 2011; Balsby et al., 2012). Although we do not currently have data on whether monk parakeets can recognize individuals by contact call, previous work in other parrot species has shown that contact calls are individually recognizable and that individuals respond preferentially to the calls of specific associates (Brown et al., 1988; Wanker et al., 1998; Buhrman-Deever et al., 2008; Balsby and Adams, 2011; Berg et al., 2011). We recorded calls from all individuals after completion of social observation in the flight pen: individuals in Group 1 were recorded on 08-09 July 2008 and Group 2 during 03-06 August 2008. Individuals were isolated in small groups in an open-walled building, visually separated from the rest of the flock, for vocal recording. Vocalizations were recorded with a Sennheiser ME66 short shotgun microphone to a Marantz PMD660 solid-state sound recorder at a sampling rate of 44.1 kHz and saved as .wav files. Only high-quality contact calls with little background noise were candidates for selection for playback trials. All high-quality calls were batch processed with the sound analysis program Raven 1.3 (Bioacoustics Research Program, 2008) with a bandpass filter of 500-14,000 Hz and amplified to 10,000 to standardize playback stimuli. We selected 5 calls from each individual and randomly chose 3 to construct a stimulus call series for each playback trial. One individual in Group 2 (RNR) provided only 2 usable contact calls; we repeated the first stimulus call at the end to form a three-call series for this individual.

Playback design and presentation
We constructed unique playback trials for each test subject that contained calls from each of the test subject's social group members. We randomized both the order of presentation of stimulus individuals to each test subject and the order of testing of subjects. We used the program Audacity® 1.3.10 (http://audacity.sourceforge.net) to construct unique playback sound tracks for each test subject. For each track, we used three contact calls from each stimulus individual, spaced 2 seconds apart to mimic natural call spacing patterns (E. Hobson, unpublished data). Call series from each stimulus individual were spaced 1 minute apart (or longer due to breaks, see below). Once constructed, the playback tracks allowed for the controlled presentation of stimulus calls in a manner that mimicked natural calling patterns but was standardized across playback trials and avoided potential sources of researcher bias in playback delivery, as researchers were blind to the identity of stimulus individuals and did not control the rate of call delivery. Playback trials were conducted in an open-walled roofed building during August 07-11, 2008. All test subjects were habituated to playback test conditions prior to the experiment. We visually isolated individuals from the rest of the group during playback trials to reduce the chances of social calling and promote contact calling in response to playback stimuli. Each test subject received stimulus call series from all of its group members: Group 1 individuals (n = 21) were presented with stimulus series from 20 group members and Group 2 individuals (n = 19) were presented with stimuli from 18 group members. To reduce the chances of habituation to multiple stimuli, we divided playbacks into two parts, presented on two different days. In Part 1, test subjects were presented with calls from one quarter of potential social associates (Part 1A), followed by a 3-minute break of silence during which the speaker position was changed from one randomly selected side of the test room to the other. After the break we presented the second quarter of social associates (Part 1B). On the second day of testing, we presented the third (Trial Part 2A) and fourth (Trial Part 2B) quarters of social associates in the same manner. Trials were recorded with the same audio recording system as for the stimuli.

Measuring playback responses
We quantified response strength using on-screen analysis of playback trial recordings with Raven 1.3. During analysis we were blind to the identity of stimulus individuals within playback tracks. We defined a 3-second response window within which we considered vocalizations to be responses to stimulus calls. Calls from test subjects were scored as responses if they occurred a maximum of 3 seconds after any of the three stimulus calls within each call series. We also counted the number of calls given during playback trials to determine whether Groups 1 and 2 differed in their overall responsiveness (calls given in response to playback stimuli) and/or overall vocalness (calls given during trials but outside of the allotted response window). If the subject responded with a contact call during the response window, we measured the speed of the response as the amount of time from the start of the stimulus call to the start of the response call. We quantified response strength as the difference between the response lag time and the allowed response window (3 s). Quantifying response strength in this way allowed us to include 'no response' as a response strength of 0, which was a more appropriate format for use in our statistical tests. We also quantified mean response strength for each focal individual, which indicated the mean strength with which tested individuals responded to stimulus calls from that focal individual, and ranked individuals from strongest to weakest mean elicited response strength.
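The scoring rule above reduces to simple arithmetic; a small sketch, with hypothetical lag values, makes the encoding explicit:

```r
# Response strength = (3 s window) - (lag to response); no response = 0.
# 'lag_s' is a hypothetical vector of response lags in seconds, with NA
# marking trials where no call fell within the response window.
response_strength <- function(lag_s, window = 3) {
  ifelse(is.na(lag_s) | lag_s > window, 0, window - lag_s)
}

lag_s <- c(0.4, 2.9, NA, 1.5)
response_strength(lag_s)    # 2.6 0.1 0.0 1.5
```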
Testing perceptions via response patterns
We tested whether dyadic or emergent social relationships predicted playback response strengths using a network-based, permutation-driven regression test, the Multiple Regression Quadratic Assignment Procedure (MRQAP; Dekker et al., 2003; Dekker et al., 2007). MRQAP allows simultaneous testing of multiple explanatory variables on a single response variable in a single model while controlling for the potential effects of stimulus habituation (Wey and Blumstein, 2010; Croft et al., 2011; Mann et al., 2012; Pinter-Wollman et al., 2014). We used the "Double Dekker Semi-Partialling MRQAP" approach, which is robust against multicollinearity among the explanatory variables (Dekker et al., 2003; Dekker et al., 2007). We chose to use MRQAP over other methods such as exponential random graph modeling (ERGM, as in Dey et al., 2015) and joint network modeling (as in Beisner et al., 2015) because both our predictor networks and our response network were continuous and weighted. ERGM is currently under development to expand its use to continuous data (Desmarais and Cranmer, 2012), but current routines can only handle a response network of binary or count data. The recently developed joint network modeling method (Chan et al., 2013; Fushing et al., 2014) is also currently only available for binary network ties. We chose to use the weighted data because dichotomization of weighted ties can result in the loss of important socially-relevant detail (Croft et al., 2011; Farine, 2014).

We constructed our MRQAP model with three dyadic social factors (Affiliation (all), Affiliation (non-partner), and Aggression), four emergent factors (Dominance difference, Dominance rank difference, Popularity difference, and Popularity rank difference), and two controls for habituation (Trial part and Call order), with response strength as the dependent variable. Dyadic affiliative matrices contained the proportion of peaceful-proximity neighbor observations; one matrix (all) included partner observations while the other (non-partner) excluded them. Dyadic aggression matrices contained the proportion of aggression directed at each potential target. For emergent social factors, we transformed individual attributes into dyadic difference matrices for all potential dyads in each of our two groups. We quantified the difference in centrality and rank between all individuals to obtain dyadic differences in dominance centrality, dominance rank, popularity centrality, and popularity rank; a positive value indicates that individual A had higher centrality or rank than individual B. We also constructed matrices with information on playback trial part and call order to control for the potential effects of habituation to the playback stimuli. Trial part matrices contained '1' for stimulus calls presented to an individual in part 1 of the trial, and '2' for stimulus calls presented in trial part 2. Call order matrices were based on the order in which stimulus calls from each individual were presented to each focal individual within trial parts, and were indicated as 1-10 for Group 1 and 1-9 for Group 2. Finally, we compiled a matrix of response strengths for all dyadic combinations, where rows indicated the response strength of tested birds to the stimulus individuals in columns. We conducted our MRQAP tests using the program UCInet 6.519 with 10,000 replicates.
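The MRQAP itself was run in UCInet; as an assumed-equivalent sketch, sna::netlm in R implements the Dekker et al. semi-partialling null hypothesis (nullhyp = "qapspp"). All matrices below are random placeholders with the structure described above, not the study data.

```r
# Sketch of the MRQAP model (the study used UCInet's Double Dekker
# Semi-Partialling routine; sna::netlm's "qapspp" null follows the same
# Dekker et al. approach). All matrices are hypothetical placeholders.
library(sna)

set.seed(99)
n <- 21
rand_net <- function() { m <- matrix(runif(n * n), n, n); diag(m) <- 0; m }

response <- rand_net()            # rows: tested bird; columns: stimulus bird
predictors <- list(
  affil_all        = rand_net(),  # dyadic affiliation incl. partners
  affil_nonpartner = rand_net(),  # dyadic affiliation excl. partners
  aggression       = rand_net(),  # dyadic aggression proportions
  dominance_diff   = rand_net(),  # emergent difference matrices
  popularity_diff  = rand_net(),
  trial_part       = rand_net(),  # habituation controls
  call_order       = rand_net()
)

# reps kept small here for speed; the study used 10,000 replicates
fit <- netlm(response, predictors, nullhyp = "qapspp", reps = 1000)
summary(fit)
```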
Relationship structure across social context and social scale
We collected data on aggressive events and nearest neighbor occurrences for the two monk parakeet groups during > 323 hours of observer effort. We used these data to quantify dyadic relationship strengths for affiliative and agonistic social contexts, as well as emergent dominance and popularity for each individual. We plotted these as attribute-ordered networks for agonistic and affiliative social contexts for both groups (Fig. 1, 2).

We collected 1,013 observations of aggressive events in Group 1 and 1,360 in Group 2. Aggression networks were highly but not perfectly connected (Fig. 1A, 2A). Although a small percentage of total dyads did not interact (non-interacting dyads: Group 1 = 11%, Group 2 = 8%), no individual was completely isolated. Most observations of aggression involved higher-ranked individuals aggressing against lower-ranked individuals (Fig. 1A, 2A, blue ties), but rank opportunism was observed in both groups, as lower-ranked individuals occasionally aggressed against higher-ranked birds (Fig. 1A, 2A, red ties).

We collected a total of 17,890 nearest neighbor observations in Group 1 and 28,875 in Group 2. Full affiliation networks including the most preferred associates (partners) were perfectly connected in both Group 1 and Group 2, with all individuals observed as neighbors of all other individuals at least once. Within the full affiliation networks, focal birds were nearest an individual other than their partner(s) in 8,674 (48.4%) observations in Group 1 and 13,747 (47.6%) observations in Group 2 (Fig. 1B, 2B). Most non-partner affiliative network ties involved less popular individuals in proximity to more popular individuals (Fig. 1B, 2B, red ties), but more popular individuals were also frequently neighbors of less popular individuals (Fig. 1B, 2B, blue ties).

At the dyadic scale, we found a significant negative correlation between aggression and affiliation (including partner observations) in Groups 1 and 2 (QAP correlation test, Group 1: R = -0.0632, P = 0.0475; Group 2: R = -0.0885, P = 0.0060). This effect reversed when we excluded the partner observations, and the amount and direction of aggression and non-partner neighbor affiliation were significantly positively correlated (Fig. 1A vs. 1B, Fig. 2A vs. 2B; QAP correlation test, Group 1: R = 0.1649, P = 0.0032; Group 2: R = 0.1384, P = 0.0205). These results indicate that individuals directed less aggression towards those with which they had strong affiliative relationships (their partners), but more aggression towards frequent non-partner neighbors.

At the emergent social scale, the relationship between dominance and popularity was variable across individuals. We did not find a significant correlation between dominance rank and popularity rank in either group (Spearman rank correlation, Group 1: rho = -0.3857, P = 0.0851; Group 2: rho = -0.0632, P = 0.7979). These results indicate that an individual's emergent rank within one social context (i.e., agonistic; Fig. 1A, 2A) did not affect the rank it attained within the other context (i.e., affiliative; Fig. 1B, 2B).

(Figure 2 caption: Group 2 attribute-ordered networks depict the flow of network ties based on individual rank order for (A) aggression-dominance, (B) affiliation-popularity, and (C) response-response strength networks. Network structures are consistent with the description in Fig. 1.)

Playback response patterns
We found wide variation in the number of stimulus individuals that tested birds responded to (Fig. 3). In Group 1, 1 individual (5% of total individuals) responded to > 75% of stimulus individuals, while in Group 2, 6 individuals (32% of total individuals) responded to > 75% of stimulus individuals, including 3 birds that responded to 100% of stimulus individuals. However, some tested individuals were completely unresponsive during playback trials: in Group 1, 8 individuals (38% of total individuals) did not respond to calls from any stimulus individuals, while in Group 2 only 1 bird (5% of total individuals) was unresponsive. In both groups, all stimulus individuals elicited a response from at least one tested individual during playback trials, but none of the stimulus individuals elicited responses from more than 75% of tested individuals. The response networks (Fig. 1C, 2C) show individuals ranked by mean elicited response strength and depict how individuals responded to stimulus calls from specific individuals.
We found no evidence that Groups 1 and 2 differed in overall vocalness during playback trials: individuals in both groups gave a similar number of calls between stimulus call series (P > 0.05). However, the two groups did differ in their responsiveness to playback stimuli; response rates were significantly higher in Group 2 than in Group 1 (P = 0.0071).

(Figure 3 caption: Playback responsiveness differed across individuals. Graphs display the percent of binary responses for individuals responding to any call in a stimulus series for (A) Group 1 and (B) Group 2. Individuals labeled on the y-axis are "focal individuals". Light grey bars show the percent of stimulus individuals that each focal individual responded to during playback trials; dark grey bars show the percent of tested individuals that responded to stimuli from each focal individual. Stars indicate individuals that did not respond to any stimulus calls during playback trials. Mean percent responses given by and received by focal individuals in each group are indicated at the top of each graph.)

(Table 1 notes: Response strength is based on the strongest response to any of the three stimulus calls in the stimulus series. Coefficients are standardized regression coefficients. The model fit coefficient is the adjusted R², corrected for multiple factors. Significant results (α < 0.05) are indicated in bold.)

Our analysis of factors predicting the strength of responses during playback trials indicated that the full models significantly predicted response patterns in both replicate groups (Table 1). Habituation to the playback stimuli was present in both groups: response strength was negatively affected by call order in Group 1, and by both call order and trial part in Group 2. Because we were able to control for the effect of habituation in the MRQAP, we could detect response strengths that were driven by social factors above and beyond this habituation effect.

Response strength was driven by a mix of dyadic and emergent social factors in Group 1, but only dyadic factors in Group 2. At the dyadic social scale, affiliative neighbor networks significantly predicted response strength in both groups, but only for the full neighbor networks that included observations of partners. The direction of this effect differed between the two groups: in Group 1, neighbor effort was negatively associated with response strength, while in Group 2 it was positively associated with response strength. Aggression and non-partner affiliation networks did not predict response patterns in either group. At the emergent social scale, we found mixed results for the effect of emergent social properties on response patterns. In Group 1, popularity difference and popularity rank difference each predicted response strengths; however, neither measure predicted response strengths in Group 2. We were unable to test for interaction effects in our models because the development of network statistics is still underway, and there is not currently a statistical procedure that allows for the examination of interactions among factors using MRQAP (Mann et al., 2012).

Discussion
We investigated how different types of social relationships affected individual responses during playback experiments with two groups of captive monk parakeets.
We found that social context affected patterns of dyadic relationships but did not affect patterns of emergent relationships. We also found that affiliative relationships at the dyadic scale and, to a lesser degree, at the emergent scale affected playback response patterns, but the ways in which monk parakeets responded to these relationships differed across our two replicate groups. We discuss the extent to which these results allow us to draw inferences about how individuals perceive the importance of different relationships.

Relationship structure across social context and social scale
At the dyadic level, monk parakeets formed agonistic relationships with some individuals and affiliative relationships with others. We found that strong affiliative partners were not strongly agonistic with one another. These results indicate a separation between strong agonistic relationships and strong affiliative relationships. However, weaker affiliative relationships were positively associated with aggression, indicating that individuals that were often neighbors were more often aggressive with one another than with individuals with which they were rarely neighbors. Because individuals must be in close spatial proximity for aggression to occur, a moderately strong neighbor relationship, even one based on peaceful proximity observations, may provide individuals with greater opportunities for aggression against these frequent neighbors. At the emergent level, we found no association between dominance rank and popularity rank: dominant individuals were no more or less likely to be popular, and popular individuals were no more or less likely to be dominant.

We developed a network layout that more effectively visualizes the structure of directed dyadic relationship networks and individual rank attributes. Our attribute-ordered network layout allows dyadic and emergent social information to be presented in a combined manner that reduces the cognitive load of interpreting and comparing these graphs across social contexts. Because this method is flexible and can be used to display and compare different types of social information, we expect it to be useful in a wide range of applications (R code available upon request).

Playback response patterns by social context and scale
Our overall model of social factors significantly predicted response strengths in both social groups. Habituation to playback stimuli affected response patterns, but this effect was controlled in the full statistical model. The regression coefficients from our model, although statistically significant, were relatively small, indicating that some amount of additional variation was unaccounted for in our model. The coefficient sizes can be partially attributed to the statistical approach we used (MRQAP), which is known to produce lower regression coefficients than ordinary least squares regression (Krackhardt, 1988; Mann et al., 2012). Within the full model, monk parakeet responses during playback trials were predicted by dyadic affiliative relationships, but only when observations of partners were included. The direction of this association between response and affiliative association strength differed between replicate social groups: Group 1 individuals were less likely to respond to stimulus calls from their strongest affiliative associates (partners), while in Group 2, playback responses were positively associated with affiliative relationship strength.
Dyadic affiliative relationship strength did not predict playback responses when observations of affiliative partners were excluded, indicating that individuals were no more or less likely to respond to calls from a stimulus individual regardless of the amount of time it spent in proximity with non-partner neighbors. Agonistic relationships at the dyadic scale did not predict playback response patterns in either of the two groups; playback subjects were no more likely to respond to calls from a stimulus individual with which they had a strong agonistic relationship than to calls from one with which they had a weak agonistic relationship. At the emergent level, monk parakeet playback responses were significantly predicted by emergent popularity, but only in Group 1. Responses were not predicted by difference in dominance centrality or difference in dominance rank in either Group 1 or Group 2.

Inferring individual perception of relationships

Overall, the lack of general consistency in our results limited our ability to conclusively assess how individuals perceived their social relationships. However, we can use the playback response patterns to draw preliminary inferences about the perceived importance of social relationships. Response patterns in both Groups 1 and 2 were significantly driven by strong affiliative dyadic relationships when partner observations were included. If we define perception of the importance of relationships based on significant predictors of playback responses, our results indicate that strong relationships in affiliative social contexts at dyadic social scales were important in driving response patterns, although the direction in which responses were driven differed between our two social groups. Individuals in Group 2 appeared to perceive strong affiliative relationships as more important than weaker relationships, while this effect was reversed in Group 1, where individuals responded less strongly to those with which affiliation was stronger. Individual responses were also predicted by relationships within an affiliative social context at the emergent social scale, but only in one of the two replicate social groups: Group 1 playback responses were significantly associated with popularity difference. These results indicate that parakeets may be able to perceive emergent affiliative rank, but it was not universally an important driver of playback responses. We found no evidence that individual response patterns were driven by the strength of relationships within an agonistic social context, regardless of the social scale of those relationships. Neither dyadic aggression nor differences in emergent dominance affected playback response patterns. With our definition, these results indicate that both dyadic aggression and emergent dominance relationships may be perceived as less important than affiliative relationships. Interestingly, previous research in other species has demonstrated that individuals can recognize an individual's emergent social attributes, especially within agonistic social contexts, where individuals recognize and respond to relative differences in dominance rank and rank reversal events (Cheney et al., 1995; Bergman et al., 2003; Massen et al., 2014a). In the monk parakeets, the apparent perceived importance of dyadic affiliative relationships occurs despite the parallel formation of moderately linear dominance hierarchies in the same social groups.
Traditionally, studies of social structure within animal groups, particularly in birds, have focused primarily on the influence of aggression and dominance on groups (Schjelderup-Ebbe, 1922; Chase, 1974; Banks and Allee, 1975; Ketterson, 1979; Chase, 1982; Lamprecht, 1986; Bond et al., 2004; Schubert et al., 2007; Chiarati et al., 2010; Sheppard et al., 2013; Dey and Quinn, 2014; Massen et al., 2014a). Much less work has focused on the quality or benefits of affiliative relationships at both the dyadic and emergent scales, even though dyadic affiliative relationships outside of pair bonds are present and likely important in a wide range of taxa (Seyfarth and Cheney, 2012) and affiliative relationships can have large impacts on fitness (Silk et al., 2003; Silk et al., 2006a; Silk et al., 2006b; McDonald, 2007; Ryder et al., 2008; Silk et al., 2009). Many birds show a strong pair-based social structure (Emery, 2006; Emery et al., 2007) and the quality of social relationships has been shown to be important in ravens (Fraser and Bugnyar, 2010), suggesting that a mix of affiliative and agonistic relationships is likely an important structural feature in social avian species. However, our results could also suggest that other processes or mechanisms may be driving response patterns, rather than perception of the importance of different types of relationships. In particular, we were unable to determine whether call recognition processes may have affected response patterns. We focused on contact calls as stimuli during playback experiments, but we did not directly evaluate whether individuals were able to recognize others solely by contact call. Based on previous results in other parrot species, it is likely that monk parakeets can recognize non-pair individuals by contact calls: brown-throated conures Aratinga pertinax, green-rumped parrotlets Forpus passerinus, spectacled parrotlets Forpus conspicillatus, and budgerigars Melopsittacus undulatus have all shown evidence for individual recognition by contact call (Brown et al., 1988; Wanker et al., 1998; Buhrman-Deever et al., 2008; Berg et al., 2011). Our results suggest that individuals can recognize their partners by call alone. However, it is unknown whether less closely associated individuals can also be recognized solely by vocal structure. Even if monk parakeets recognize all social associates by contact call, the timing of our playback experiment could have contributed to the variability in response patterns. While Group 2 individuals were recorded and then tested in playback trials within the same week, Group 1 had a longer lag between recordings and trials (recordings: Group 1: 08-09 July; Group 2: 03-06 August; playback trials: both groups: 07-11 August). If monk parakeets alter their contact calls over time, this lag of about 1 month for Group 1 may have been enough time for individuals to alter their own calls and to learn the new calls of their social associates. If this was the case, the playback stimuli would represent 'outdated' contact calls, which may be a reason that they did not elicit strong responses. Further study is currently underway to determine if contact call structure changes over time in monk parakeets, as is commonly found in budgerigars (Brown et al., 1988; Farabaugh et al., 1994; Hile et al., 2000). In addition to call recognition effects, several social factors could also have contributed to the variable response patterns.
Memory and forgetfulness cause human perception of social relationships to vary from measures based on observational methods (Brewer, 2000; Bell et al., 2007), and social context and individual personality can also affect a person's level of accuracy in recalling social associates (Casciaro, 1998). A similar mismatch between interaction events and perception of relationships may have contributed to the variable responses we observed during playback experiments with our parakeets. Differences in response rates and general association patterns between the two groups may also help explain the inconsistent response patterns between groups. Group 2 had higher response rates than Group 1, and also had significantly higher association strengths than Group 1. If the function of the contact call is to regain contact with group members, there may have been little incentive or biological reason for individuals to respond preferentially to only their closest associates. Instead, Group 2 individuals may have benefitted equally from contacting any member of their group because most individuals in Group 2 had moderately strong association strengths. Additional measures, such as physiological responses, may provide further insight into the perception of relationships when used in conjunction with vocal response playbacks. Finally, our statistical approach was designed to detect consistency in overall response patterns within groups. However, if individuals within groups differ in which relationships they perceive as important, their response patterns may also differ, causing inconsistencies at the group level that would be difficult to detect with our current methods.

Importance of understanding relationships across contexts and scales

Many species form dyadic and emergent social relationships across both affiliative and agonistic social contexts, and individuals may gain fitness benefits from a combination of different types of relationships. In primates, individuals may invest in dyadic relationships with specific individuals in one context to gain a benefit from their relationship with that individual in a different social context. For example, female baboons with young infants form dyadic affiliative relationships with males, and then benefit from reduced aggression as the males then defend the females and their offspring against aggression from other males in the population (Nguyen et al., 2009). In this case, stronger dyadic affiliative relationships serve as a buffer against the formation of dyadic agonistic relationships. In another example, subordinate females in several primate species preferentially groom higher-ranked females (thus investing in dyadic affiliative relationships) and are then more likely to receive benefits from those individuals such as support during agonistic encounters (thus receiving a benefit in dyadic agonistic relationships; Seyfarth, 1977; Schino, 2001). Recent work with ravens has shown that individuals may strategically intervene in affiliative interactions among others, possibly to prevent individuals from forming alliances and becoming stronger competitors (Massen et al., 2014b). In this case, individuals use dyadic agonistic relationships to disrupt the dyadic affiliative relationships that the target of aggression can form with others. Because the benefits from relationships can differ depending on the social context and social scale, individuals may be able to employ different social strategies in order to gain access to similar benefits.
Understanding how dyadic relationships and emergent social properties form across affiliative and agonistic social contexts, and how individuals perceive their social landscape, is crucial to understanding selection pressures on sociality and the evolution of complex sociality across a broader range of taxa. The analysis and visualization methods developed here could be used in a wide range of social species, and comparative analyses among diverse taxa could provide important insight into the perception of social relationships across context and scale.
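Since MRQAP carries much of the statistical weight in the analyses above but may be unfamiliar, the following is a minimal sketch of a node-permutation MRQAP in Python. It is not the authors' implementation (they cite Krackhardt, 1988 and Mann et al., 2012, and provide R code on request); the function names, the simple permutation scheme, and the toy data are our own assumptions.

```python
import numpy as np

def offdiag(M):
    """Flatten a square dyadic matrix, dropping the (undefined) self-ties."""
    return M[~np.eye(M.shape[0], dtype=bool)]

def ols_slopes(X, y):
    """OLS slope coefficients, fitted with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

def mrqap(y_mat, x_mats, n_perm=2000, seed=0):
    """Regress a dyadic response matrix on predictor matrices; assess
    significance by permuting the node labels of the response matrix."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([offdiag(m) for m in x_mats])
    obs = ols_slopes(X, offdiag(y_mat))
    n, exceed = y_mat.shape[0], np.zeros(len(obs))
    for _ in range(n_perm):
        perm = rng.permutation(n)
        b = ols_slopes(X, offdiag(y_mat[np.ix_(perm, perm)]))  # relabel nodes
        exceed += np.abs(b) >= np.abs(obs)
    return obs, (exceed + 1) / (n_perm + 1)  # two-tailed permutation p-values

# Toy usage: does an affiliation network predict a response-strength network?
rng = np.random.default_rng(1)
affil = rng.random((10, 10))
resp = 0.5 * affil + rng.normal(0, 0.1, (10, 10))
coefs, pvals = mrqap(resp, [affil])
```

Permuting rows and columns of the response matrix together preserves its dyadic dependence structure, which is why this permutation test is preferred over ordinary regression inference for network data.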
A generalization of Newton's quadrilateral theorem and an elementary proof of Minthorn's quadrilateral theorem

Newton's quadrilateral theorem can be phrased as follows. If H is a circle that is tangent to the four extended sides of a non-parallelogram quadrilateral Q, the center of H lies on the Newton line of Q. We prove that the theorem remains true if H is an arbitrary hyperbola or ellipse. A quadrilateral can have at most one circle tangent to it but infinitely many ellipses and hyperbolas. We also prove a converse of Newton's theorem, namely that every point on the Newton line, excepting three singular points, is the center of some ellipse or hyperbola tangent to the four extended sides of Q. Using the same proof techniques we give an elementary proof of the (lesser known) Minthorn's quadrilateral theorem, which concerns conics passing through the four vertices of Q. Our proofs are analytic; they rely on linear algebra and affine transformations.

Figure 1 (caption). Centers of tangent conics are marked in magenta, centers of passing conics are marked in blue.

We fixed a quadrilateral in the plane, randomly generated 3000 tangent conics and 3000 passing conics, and marked the centers of the conics in appropriate colors. Figure 1, as well as the other figures in this paper, was produced in Wolfram Mathematica. We make this claim precise, make it stronger, and prove it as Theorem 4.4. Conjecture 1.2 was proved in 1912 by Maud Minthorn [10], but we give in this paper a much shorter and more elementary proof. Conjectures 1.1 and 1.2 are illustrated in Figure 1. Conjectures 1.1 and 1.2 might be dual to each other, but the exact nature of this duality (if it exists) remains a mystery. In this paper, we do not use complex numbers or projective geometry; instead, we use real numbers and linear algebra. Our proofs are analytic. Despite the elementary nature of Conjectures 1.1 and 1.2, the proofs of Theorems 4.3 and 4.4 are surprisingly unenlightening: consisting of dry formal manipulations, they offer very little insight as to why Theorems 4.3 and 4.4 are true.

We identify the Euclidean plane with R^2; for instance, (4, √7/2) is a point in the plane. Geometric figures are viewed as subsets of R^2. Since R^2 is a vector space, we in some sense treat points as vectors. For instance, if x and y are points in the plane, (1/2)x + (1/2)y is their midpoint. The word "collinear" is therefore ambiguous: (1, −1), (1, 0), and (1, 1) are collinear when viewed as points in the plane but not when viewed as elements of a vector space; so we avoid the word "collinear." Given x, y, z ∈ R^2, we say that x, y, z lie on a line if there exists a line L that contains x, y, z. Given x, y ∈ R^2, we say that x and y are multiples of each other if one vector can be obtained from the other by multiplying it by a real number. Given x ∈ R^2, we let x_1 be the first component of x and x_2 the second component, so that x = (x_1, x_2). We denote by |x| the absolute value of x, which is equal to √(x_1^2 + x_2^2). If A is a matrix, we denote the i-th entry of the j-th column of A as A_{ij}, where indexing starts from 1, so that A_{11} is the top left entry of A. To denote the vector whose components are 2 and −5, we will either write (2, −5) (note the comma) or display it as a column vector. Though row vectors are used in this paper, we do not write row vectors explicitly and will instead express them as transposes of column vectors. If f : R^2 → R is a differentiable function, we let ∇f be the gradient of f, which we treat as a function from R^2 to R^2.
One might argue that ∇f(x) should be a row vector instead of a column vector, but we say it is a column vector for simplicity. Affine transformations are going to be extremely useful to us in this paper, since they allow us to "bend" the plane to our convenience. They preserve the "essence" of geometric figures while letting us vary the details. Under affine transformations, lines map to lines, conics map to conics, and the topology is preserved. It is known that conic sections can be described in algebraic terms by quadratic polynomials [1]. It is also known that certain conics, including ellipses and hyperbolas, have a "center." We proceed to give a formal definition of a center of a geometric figure and prove a few lemmas about how a center of a conic relates to the conic's algebraic description.

Definition 2.2. Let U be any subset of R^2, and let c be a point in R^2. Say that c is a center of U if and only if U is reflectionally symmetric with respect to c. That is, if and only if whenever x lies in U, 2c − x also lies in U.

An ellipse or hyperbola has exactly one center. A parabola has zero centers. Certain degenerate conics, such as the line {x ∈ R^2 : x_1^2 = 0}, have infinitely many centers, while others, like the "cross" figure {x ∈ R^2 : x_1 x_2 = 0}, have exactly one center. Despite the seeming complexity, Lemma 2.4 gives a simple algebraic description of centers of conic sections. Lemma 2.3 will be used in the proof.

Lemma 2.3. Let A be a symmetric two-by-two matrix, and let u, v, w be three vectors no two of which are multiples of each other. If u^T A u = 0, v^T A v = 0, and w^T A w = 0, then A is the zero matrix.

Proof. None of u, v, w is zero: if u were zero, it could be written as 0v. Therefore, we can write w in the basis {u, v} as w = αu + βv for some real numbers α, β. Since w is not a multiple of u or v, α ≠ 0 and β ≠ 0. Then 0 = w^T A w = α^2 u^T A u + 2αβ u^T A v + β^2 v^T A v = 2αβ u^T A v, and therefore u^T A v = 0. Now, let x be any vector in R^2. Write x in the basis {u, v} as x = su + tv for some real numbers s, t. Then x^T A x = s^2 u^T A u + 2st u^T A v + t^2 v^T A v = 0. Since A is symmetric, it is diagonalizable and has real eigenvalues. If A were nonzero, it would have a nonzero eigenvalue λ; let q be the corresponding eigenvector, and assume without loss of generality that q is real (e.g., q_1 ∈ R and q_2 ∈ R). Then q^T A q = λ|q|^2 ≠ 0, a contradiction. So A = 0.

Lemma 2.4. Suppose H is a conic section described by the equation f(x) = 0, where f : R^2 → R is a polynomial of degree 2. A point c ∈ R^2 is a center of H if and only if the gradient of f evaluates to zero at c.

Proof. Write f(x) = x^T A x + v^T x + s for some symmetric matrix A, vector v, and real number s. The gradient of f at c is ∇f(c) = 2Ac + v. It is easy to verify that f(2c − x) = f(x) + 2∇f(c)^T (c − x) for all x ∈ R^2. In particular, if ∇f(c) = 0, then x ∈ H implies 2c − x ∈ H, so c is a center of H. Suppose ∇f(c) ≠ 0, and suppose for the sake of contradiction that c is nevertheless a center of H. Then for every x ∈ H, the point 2c − x also lies in H, so f(2c − x) = f(x) = 0, and the identity above forces ∇f(c)^T (c − x) = 0; that is, H is contained in the line L := {x ∈ R^2 : ∇f(c)^T (x − c) = 0}. Qualitatively, L is the line passing through c whose normal vector is ∇f(c). Let y be a point in H, and let y′ = 2c − y be the mirror image of y with respect to c (in this paper, we do not consider the empty set a conic, even though it is described by the equation x^T x + 1 = 0; so H is guaranteed to have at least one point). Since ∇f(c) is by assumption nonzero and ∇f(y) + ∇f(y′) = 2∇f(c), it has to be that at least one of ∇f(y) and ∇f(y′) is nonzero. Assume without loss of generality that ∇f(y) ≠ 0. Both curves H and L pass through the point y, and H is contained in L. The normal vectors to H and L at y must be multiples of each other: if that were not the case, H would "go at an angle" relative to L. Since ∇f(y) ≠ 0 and H is the set of points x where f(x) = 0, ∇f(y) is a normal vector to H at the point y.
By definition, ∇f(c) is a normal vector to L. Therefore, ∇f(y) = α∇f(c) for some nonzero real number α. Denoting δ := ∇f(y), we can therefore describe L as the line passing through y whose normal vector is δ:

L = {x ∈ R^2 : δ^T (x − y) = 0}.   (3)

Let δ′ be the result of rotating δ by π/3 radians counterclockwise, and let δ′′ be the result of rotating δ by π/3 radians clockwise. Consider the set {δ, δ′, δ′′}. Since A is a symmetric two-by-two matrix, by Lemma 2.3, w^T A w being zero for every w ∈ {δ, δ′, δ′′} would imply A = 0, which is a contradiction because f(x) is a polynomial of degree 2. Let w ∈ {δ, δ′, δ′′} be such that w^T A w ≠ 0. Define the function F : R → R as F(t) := f(y + tw). We can write F(t) explicitly as F(t) = t δ^T w + t^2 w^T A w. Here, we use that f(y) = 0 and that ∇f(y) = 2Ay + v. Because of how δ′ and δ′′ were defined, we are guaranteed to have δ^T w > 0. One can check that t* = −δ^T w / (w^T A w) ≠ 0 is a root of F, e.g., F(t*) = 0. This implies that the point y + t*w lies on H. But since δ^T ((y + t*w) − y) = t* · δ^T w ≠ 0, equation 3 dictates that y + t*w does not lie on L. This is a contradiction because H ⊆ L.

Corollary 2.5. Suppose H is a conic section described by the equation f(x) = 0, where f : R^2 → R is a polynomial of degree 2 whose Hessian is nonsingular. Then H has exactly one center.

Proof. The Hessian of f is 2A. Since the Hessian is nonsingular, A is nonsingular. The gradient of f, which is equal to ∇f(x) = 2Ax + v, evaluates to zero at exactly one point, namely −(1/2)A^{-1}v.

Corollary 2.6. If H is an ellipse or hyperbola and c is its center, H can be described by an equation of the form (x − c)^T B (x − c) = 1, where B is a nonsingular symmetric matrix.

Proof. Since ellipses and hyperbolas do not contain their center, f(c) ≠ 0. Let B := −(1/f(c)) A. Since H is an ellipse or hyperbola, A is nonsingular, and therefore B is nonsingular. One can check that f(x) = (x − c)^T A (x − c) + f(c), where we use that ∇f(c) = 0 because c is the center of H. Then F(x) := (x − c)^T B (x − c) − 1 = −f(x)/f(c), and it follows that F(x) = 0 if and only if f(x) = 0.

Tangency of geometric figures is a complicated notion. For the sake of brevity we choose in this paper to use a "makeshift" definition of tangency that only applies to lines and conics. We do so with the hope that every systematic notion of tangency would reduce to Definition 2.8 in the special case of lines and conics. Definition 2.7 provides background for Definition 2.8.

Definition 2.7. Let L be a line, and let H be a conic. Say that L is directed along H if H is a hyperbola and L is parallel to either of the two asymptotes of H, or if H is a parabola and L is parallel to the axis of symmetry of H.

Definition 2.8. Let L be a line, and let H be a conic. Say that L is tangent to H if either of the following holds: 1. L intersects H at exactly one point and is not directed along H, 2. H is a hyperbola and L is one of the two asymptotes of H.

Case 1 of Definition 2.8 corresponds to the "commonsense" definition of tangency, when L "touches" H but does not cross it. Case 2 of Definition 2.8 declares that the asymptotes of a hyperbola are tangent to it. Even though asymptotes never reach the hyperbola they belong to, they come "infinitely close" to it: no straight line can "fit between" a hyperbola and either of its two asymptotes [11, p. 124]. Lemma 2.9 gives an algebraic condition for whether or not a line is directed along an ellipse or hyperbola.

Lemma 2.9. Let H be an ellipse or hyperbola described by the equation (x − c)^T A (x − c) = 1, and let L = {u + vt : t ∈ R} be a line, where v ≠ 0. Then L is directed along H if and only if v^T A v = 0.

Proof. If H is an ellipse, both sides of the biconditional "L is directed along H if and only if v^T A v = 0" are false. Indeed, the condition for L being directed along H fails automatically because H is not a hyperbola or parabola, and v^T A v ≠ 0 because A is positive-definite and v is nonzero. Suppose H is a hyperbola. Then det A < 0, so A has one positive eigenvalue and one negative eigenvalue. By diagonalizing A, one can produce two vectors v^(1) and v^(2), neither of which is a multiple of the other, such that if x is a multiple of v^(1) or v^(2), then x^T A x = 0.
Since A is a nonzero two-by-two symmetric matrix, by Lemma 2.3, the converse is also true: if x^T A x = 0, then x is a multiple of v^(1) or v^(2). Define L^(1) and L^(2) to be lines consisting of multiples of v^(1) and v^(2), plus c:

L^(1) := {c + v^(1) t : t ∈ R},  L^(2) := {c + v^(2) t : t ∈ R}.

We claim that L^(1) and L^(2) are the two asymptotes of H. Writing x = c + α v^(1) + β v^(2), and rescaling v^(1) so that 2 (v^(1))^T A v^(2) = 1, we get (x − c)^T A (x − c) = 2αβ (v^(1))^T A v^(2) = αβ. So x ∈ H if and only if αβ = 1. This allows us to parametrize H as x(t) = c + t v^(1) + (1/t) v^(2) for t ≠ 0. As t → 0, x(t) gets closer and closer to L^(2), and as t → ±∞, x(t) gets closer and closer to L^(1). So L^(1) and L^(2) are the two asymptotes of H. The line L = {u + vt : t ∈ R} is directed along H if and only if it is parallel to L^(1) or L^(2), which is if and only if v is a multiple of v^(1) or v^(2), which is if and only if v^T A v = 0.

Finally, we are ready to give an algebraic description of tangency, Lemma 2.10.

Lemma 2.10. Let H be an ellipse or hyperbola described by the equation (x − c)^T A (x − c) = 1, and let L = {x ∈ R^2 : n^T x = b} be a line with normal vector n ≠ 0. Then L is tangent to H if and only if n^T A^{-1} n = (b − n^T c)^2.

We initially discovered Lemma 2.10 in the special case of ellipses using Lagrange multipliers: if n is a normal vector to L, we computed the point on H that had the greatest dot product with n; if this dot product was equal to b, we reasoned that H and L must be tangent. That was not a complete proof, but it was simple and clear. Below we give a formal proof of Lemma 2.10.

Proof. Let σ be the counterclockwise 90-degree rotation matrix, so that the cross product x × y := x_1 y_2 − x_2 y_1 of two vectors x and y could be expressed as x × y = y^T σ x. Since n is a normal vector to L, the vector v := σn points in the direction of L. Observe that, for any x, y ∈ R^2,

(x^T A y)^2 − (x^T A x)(y^T A y) = −(det A)(x × y)^2.   (4)

Also, note the identity

v^T A v = (σn)^T A (σn) = (det A) n^T A^{-1} n,   (5)

where A_{12} = A_{21} because A is symmetric. To prove that L is tangent to H if and only if n^T A^{-1} n = (b − n^T c)^2, note that either L is directed along H or it is not. We first prove the biconditional in the case when L is not directed along H and then prove it in the case when L is directed along H.

When L is not directed along H, by Definition 2.8, L is tangent to H if and only if L intersects H at exactly one point. We can parametrize L as x = u + tv, where u := c + ((b − n^T c)/|n|^2) n, and set f(t) := (u + tv − c)^T A (u + tv − c) − 1. By Lemma 2.9, the fact that L is not directed along H implies that v^T A v ≠ 0; therefore, f(t) is a polynomial of degree 2. The quadratic equation f(t) = 0 has exactly one solution if and only if its discriminant D is zero. Using equations 4 and 5 together with v = σn (and the formula for u above), we arrive at the following relation:

D/4 = (det A)(n^T A^{-1} n − (b − n^T c)^2).

Since det A ≠ 0, D = 0 if and only if n^T A^{-1} n = (b − n^T c)^2. This completes the proof in the case when L is not directed along H.

Now suppose L is directed along H. By Lemma 2.9, v^T A v = 0, so by equation 5, n^T A^{-1} n = 0. Now, H is a hyperbola and L is parallel to one of its two asymptotes; let L* be the asymptote of H to which L is parallel. Since c is the center of H, L* passes through c. The line L may or may not pass through c. Since L is parallel to L*, it follows that L equals L* if and only if c ∈ L, which is if and only if b − n^T c = 0. By Definition 2.8, L is tangent to H if and only if L = L*, which is if and only if (b − n^T c)^2 = 0 = n^T A^{-1} n.

Corollary 2.11. Let p, q be distinct points in the plane. The line passing through p and q is tangent to the unit circle if and only if (p × q)^2 = |p − q|^2.

Proof. The unit circle is the set of points x satisfying x^T I x = 1, where I is the identity matrix. The line passing through p and q is the set of points x satisfying (q − p) × x = q × p; by the identity (q − p) × x = x^T σ(q − p), we see that σ(q − p) is a normal vector to this line. By Lemma 2.10, the line and the circle are tangent if and only if |q − p|^2 = (q × p)^2 = (p × q)^2. This completes the proof.

The proof of the following proposition is left to the reader.

Proposition 2.12. Let φ be an affine transformation of the plane. Then φ maps lines to lines and conics to conics, preserves tangency, maps the center of a conic to the center of the image conic, and maps the midpoint of a line segment to the midpoint of the image segment.
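Lemma 2.10 is easy to sanity-check numerically. The following sketch is ours, not from the paper (Python and NumPy are an arbitrary choice); it tests the criterion on the unit circle, where elementary geometry tells us what to expect.

```python
import numpy as np

def is_tangent(A, c, n, b, tol=1e-9):
    """Lemma 2.10: the line {x : n.x = b} is tangent to the conic
    {x : (x - c)^T A (x - c) = 1} iff n^T A^{-1} n = (b - n^T c)^2."""
    return abs(n @ np.linalg.solve(A, n) - (b - n @ c) ** 2) < tol

A, c = np.eye(2), np.zeros(2)                        # the unit circle
print(is_tangent(A, c, np.array([1.0, 0.0]), 1.0))   # True: x1 = 1 touches at (1, 0)
print(is_tangent(A, c, np.array([1.0, 0.0]), 0.5))   # False: x1 = 0.5 is a secant
print(is_tangent(A, c, np.array([3.0, 4.0]), 5.0))   # True: 3x1 + 4x2 = 5 has distance 1
```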
Where the magic happens

In this section, we will prove Conjectures 1.1 and 1.2 in the special case of quadrilaterals three of whose vertices are (0, 0), (1, 0), and (0, 1); the general case can be reduced to this special case via an affine transformation.

Theorem 3.1. Suppose Q is a quadrilateral (not necessarily simple or convex) whose vertices, listed in order, are the points (0, 0), (1, 0), p, (0, 1), where p ∈ R^2. Suppose also that Q is not a trapezoid and that no three vertices of Q lie on a line. Let µ = (1/2, 1/2) be the midpoint of the diagonal connecting (1, 0) and (0, 1), and let ν = (p_1/2, p_2/2) be the midpoint of the diagonal connecting (0, 0) and p. Let τ be the midpoint of the line segment connecting the points of intersection of the opposite sides of Q (see Figure 2). There exists a line L with the following properties: 1. The points µ, ν, τ lie on L. 2. If c is the center of some ellipse or hyperbola tangent to the four extended sides of Q, then c lies on L. 3. Every point of L, except the three points µ, ν, τ, is the center of some ellipse or hyperbola tangent to the four extended sides of Q.

Proof. Note that since no three points of Q lie on a line, p_1 ≠ 0 and p_2 ≠ 0. Since Q is not a trapezoid, p_1 ≠ 1 and p_2 ≠ 1. Since Q is not a trapezoid, it is not a parallelogram, so the midpoints µ, ν of its diagonals are distinct. Let L be the line passing through µ and ν:

L = {x ∈ R^2 : (x − µ) × (ν − µ) = 0}.   (6)

Here, x × y is the cross product of two vectors x, y ∈ R^2, defined as x × y := x_1 y_2 − x_2 y_1. The fact that τ lies on L can be verified by substituting τ into equation 6 and referencing the definitions of µ, ν, τ. An equation of the form n^T x = b gives the line {x ∈ R^2 : n^T x = b}. The four extended sides of Q are given by the following four equations: x_2 = 0 (through (0, 0) and (1, 0)), x_1 = 0 (through (0, 0) and (0, 1)), p_2 x_1 + (1 − p_1) x_2 = p_2 (through (1, 0) and p), and (1 − p_2) x_1 + p_1 x_2 = p_1 (through (0, 1) and p).

Suppose K is an ellipse or hyperbola that is tangent to the four extended sides of Q. By Corollary 2.6, K can be written as {x ∈ R^2 : (x − c)^T A (x − c) = 1} for some nonsingular symmetric matrix A and vector c. Clearly, c is the center of K. We wish to show that c lies on L. For two distinct points x, y in the plane, denote by x#y the line that passes through x and y. By Lemma 2.10, the condition that K is tangent to lines (0, 0)#(0, 1) and (0, 0)#(1, 0) is

A^{-1}_{11} = c_1^2 and A^{-1}_{22} = c_2^2.   (9)

The condition that K is tangent to lines (0, 1)#p and (1, 0)#p can be written as n^T A^{-1} n = (b − n^T c)^2 with (n, b) = ((1 − p_2, p_1), p_1) and (n, b) = ((p_2, 1 − p_1), p_2), respectively.   (10)

Combining equations 9 and 10 and eliminating the remaining unknown entry A^{-1}_{12}, we obtain a relation between c_1 and c_2 in which all quadratic terms cancel (equation 11). Look how wonderful! We have obtained a linear equation in c. By this point, we are basically done. One can verify, using the definitions of µ, ν, τ, that this linear equation is, up to a nonzero factor, the equation (c − µ) × (ν − µ) = 0. (Here, p_1 + p_2 − 1 ≠ 0 because p does not lie on the line connecting (0, 1) and (1, 0).) It follows that (c − µ) × (ν − µ) = 0, and therefore c lies on L. Note that c cannot be equal to µ. Indeed, if c = µ, equations 9 and 11 dictate that det A^{-1} = 0. But then A^{-1} fails to be invertible, a contradiction because A is the inverse of A^{-1}. A similar argument shows that c cannot be equal to ν or τ.

Suppose d is a point on L that is not one of µ, ν, and τ. Since L is the line passing through µ and ν and d is a point on L, we can write d as

d = µ + t(ν − µ)   (14)

for some real number t. Let B be the symmetric matrix with diagonal entries d_1^2 and d_2^2 and off-diagonal entries equal to a quantity k determined by the tangency conditions (equation 15; the definition of B was inspired by equations 9 and 11). If B happens to be invertible, we make the following definition:

K := {x ∈ R^2 : (x − d)^T B^{-1} (x − d) = 1}.   (16)

One can verify using Definition 2.2 and Lemma 2.10 that K is an ellipse or hyperbola centered at d that is tangent to the four extended sides of Q. Whether B is invertible or not is determined by d, which is in turn described by t. We therefore wish to express det B = d_1^2 d_2^2 − k^2 in terms of t. First, consider k.
Using equations 14 and 15 and expressing µ, ν, τ in terms of p, we arrive at a surprisingly simple formula for k. With some further algebraic manipulations, one can verify that det B factors, and that the determinant of B is zero if and only if t = 0, t = 1, or t = −(p_1 + p_2 − 1)/((p_1 − 1)(p_2 − 1)). (Here, p_1, p_2 ≠ 1 because Q is not a trapezoid.) By equation 14, the values of d corresponding to these three cases are µ, ν, and τ. Since d is not equal to any of µ, ν, and τ, the determinant of B is nonzero, and the conic K as defined in equation 16 is an ellipse or hyperbola centered at d and tangent to the four extended sides of Q.

Why is it that µ, ν, and τ are the only three points of L where no ellipse or hyperbola tangent to the four extended sides of Q can be centered? A somewhat informal explanation that CodeParade alluded to in their YouTube video [4] is that these three points correspond to centers of "infinitely thin ellipses": that is, ellipses that have "infinitely small minor axis." These "ellipses" are not formally considered conic sections and we avoid them in our proof, though one might imagine an alternative definition of conics where "infinitely thin ellipses" are considered degenerate conics. We leave this topic and proceed to prove Theorem 3.2, which can be thought of as extending Theorem 3.1 to the case when Q is a trapezoid and τ is a "point at infinity."

Theorem 3.2. Suppose Q is a quadrilateral (not necessarily simple) whose vertices, listed in order, are the points (0, 0), (1, 0), (1, s), (0, 1), where s ≠ 0 is a real number. Suppose also that s ≠ 1, so that Q is not a parallelogram. Let L := {(1/2, t) : t ∈ R} be the line passing through the midpoints (1/2, 1/2) and (1/2, s/2) of the two diagonals of Q. 1. If c is the center of some ellipse or hyperbola tangent to the four extended sides of Q, then c lies on L. 2. Every point of L, except the two midpoints of the diagonals, is the center of some ellipse or hyperbola tangent to the four extended sides of Q.

Proof. An equation of the form n^T x = b gives the line {x ∈ R^2 : n^T x = b}. The four sides of Q are given by x_1 = 0, x_2 = 0, (s − 1)x_1 − x_2 = −1 (the side through (0, 1) and (1, s)), and x_1 = 1. Suppose K is an ellipse or hyperbola that is tangent to the four extended sides of Q. Write K as {x ∈ R^2 : (x − c)^T A (x − c) = 1} for some nonsingular symmetric matrix A and vector c. By Lemma 2.10, the tangency conditions include, in particular, c_1^2 = A^{-1}_{11} (from the side x_1 = 0) and A^{-1}_{11} = (1 − c_1)^2 (from the side x_1 = 1). From the first and fourth equations we have c_1^2 = A^{-1}_{11} = (1 − c_1)^2, and therefore c_1 = 1/2. This shows that c lies on L. Conversely, given a point d on L other than the two midpoints, define a symmetric matrix B as in the proof of Theorem 3.1, where m := d_1 d_2. The determinant of B factors so that it vanishes only when t = 0 or t = 1; since t ≠ 0 and t ≠ 1, det B ≠ 0. One can verify using Definition 2.2 and Lemma 2.10 that the resulting conic is an ellipse or hyperbola that is centered at d and tangent to the four extended sides of Q.

We will now shift discussion from conics that are tangent to the four extended sides of a quadrilateral to conics that pass through the four vertices of a quadrilateral. Theorem 3.3 has been proven by Minthorn [10], but we give a proof that is shorter and does not employ the advanced machinery of projective geometry.

Theorem 3.3. Suppose Q is a simple quadrilateral whose vertices, listed in order, are the points (0, 0), (1, 0), p, (0, 1), where p ∈ R^2. Suppose also that Q is not a trapezoid, that the diagonals of Q are not parallel, and that no three vertices of Q lie on a line. There exists an ellipse or hyperbola H that satisfies the following properties: 1. Every point of H is the unique center of some conic that passes through the four vertices of Q. 2. If c is a center of some conic that passes through the four vertices of Q, then c lies on H. 3. The center of H is ((1 + p_1)/4, (1 + p_2)/4), which is the arithmetic mean (1/4)((0, 0) + (1, 0) + (0, 1) + p) of the four vertices of Q.
Proof. Define the function Γ : R^2 → R. Though it may look complicated, Γ(x) is actually just a quadratic polynomial in x_1 and x_2. One can check through trivial (albeit laborious) algebra that Γ evaluates to zero at the nine points (1/2, 0), (0, 1/2), (p_1/2, p_2/2), (1/2, 1/2), ((1 + p_1)/2, p_2/2), (p_1/2, (1 + p_2)/2), (p_1/(1 − p_2), 0), (0, p_2/(1 − p_1)), and (p_1/(p_1 + p_2), p_2/(p_1 + p_2)). Hence, we shall call the curve {x ∈ R^2 : Γ(x) = 0} the nine-point conic. To determine what kind of shape the nine-point conic is, let us compute the determinant of the Hessian matrix of the function Γ. Since p_1, p_2 ≠ 0 (equation 22) and p_1 + p_2 ≠ 1 (equation 23), det H_Γ is guaranteed to be nonzero, and by Corollary 2.5 of Lemma 2.4, the nine-point conic has a unique center. A simple calculation shows that the gradient of Γ evaluates to zero at ((1 + p_1)/4, (1 + p_2)/4). It follows that ((1 + p_1)/4, (1 + p_2)/4), which is the arithmetic mean of the four vertices of Q, is the unique center of the nine-point conic. One can verify that Γ evaluated at this point is, by equation 23, nonzero. Since the nine-point conic does not contain its unique center, it is an ellipse or hyperbola.

Having sufficiently investigated the shape of the nine-point conic, we will now proceed to prove how it relates to centers of conics passing through the four vertices of Q. We prove part 2 of the current theorem. Suppose c is a center of some conic K that passes through the four vertices of Q. Write K as {x ∈ R^2 : x^T A x + v^T x + s = 0}, where A is a nonsingular symmetric matrix, v is a vector, and s is a real number. We wish to show that c lies on the nine-point conic; that is, Γ(c) = 0. We don't know if c is the unique center of K or if K has many centers; the algebra works out to eventually yield Γ(c) = 0, so presumably the restrictions we have imposed on Q stipulate that every conic passing through its four vertices either has no center (e.g., is a parabola) or has a unique center. Let f(x) := x^T A x + v^T x + s. The condition that K passes through the four vertices of Q then becomes f((0, 0)) = 0, f((1, 0)) = 0, f(p) = 0, f((0, 1)) = 0. From the three equations f((0, 0)) = 0, f((1, 0)) = 0, f((0, 1)) = 0 we obtain the following relations: s = 0, v_1 = −A_{11}, v_2 = −A_{22}. This allows us to write f(x) as

f(x) = A_{11} x_1 (x_1 − 1) + A_{22} x_2 (x_2 − 1) + 2α x_1 x_2.

Here, α denotes A_{12} (which is also equal to A_{21} because A is symmetric). An additional constraint on f comes from the fact that K must be centered at c. The equation produced by Lemma 2.4 is ∇f(c) = 0; written out,

A_{11}(2c_1 − 1) + 2αc_2 = 0 and A_{22}(2c_2 − 1) + 2αc_1 = 0.

There is one constraint on f that we haven't used yet, namely that f(p) = 0. To prove that Γ(c) = 0, we proceed in cases. First, suppose α is nonzero. The condition f(p) = 0 of course implies that ((c_1 − 1/2)(c_2 − 1/2)/α) f(p) = 0. Substituting the expression for f and the centering relations above, the left-hand side simplifies to Γ(c). We have obtained the desired equality Γ(c) = 0. (The definition of Γ was originally inspired by the above equation.) Now suppose α = 0, so that A is a diagonal matrix. Since A is nonsingular, A_{11} ≠ 0 and A_{22} ≠ 0, and the centering equations give 2c_1 − 1 = 0 and 2c_2 − 1 = 0; it follows that c_1 = c_2 = 1/2, so c = (1/2, 1/2). Direct calculation shows that Γ((1/2, 1/2)) = 0. This completes the proof of part 2.

To prove part 1 of this theorem, which states that every point on the nine-point conic is the unique center of some conic that passes through the four vertices of Q, let d be a point with Γ(d) = 0. We now proceed to list the "leftover cases," e.g., when d does not satisfy the assumptions d_1, d_2 ∉ {0, 1/2} (together with the analogous genericity condition on d_1 + d_2). Define the quadratic functions g_c, g_m, and g_i. Let J_c be the set of points x where g_c(x) = 0; define J_m and J_i in terms of g_m and g_i similarly. It can be checked through trivial (though laborious) algebra that g_c, g_m, and g_i evaluate to zero at the four vertices of Q; so J_c, J_m, and J_i each pass through the four vertices of Q.
Similarly one can verify that the gradients of g_c, g_m, and g_i evaluate to zero at (1/2, 1/2), (0, 1/2), and (p_1/(1 − p_2), 0), respectively, so by Lemma 2.4, (1/2, 1/2) is a center of J_c, (0, 1/2) is a center of J_m, and (p_1/(1 − p_2), 0) is a center of J_i. We proceed to use Corollary 2.5 to show that each of J_c, J_m, and J_i has exactly one center. The determinants of the Hessians of g_c, g_m, and g_i factor into expressions that are nonzero by equations 22 and 23. The conic J_i is degenerate, but it nevertheless has a unique center. For each of the points (1/2, 1/2), (0, 1/2), and (p_1/(1 − p_2), 0), we have exhibited a conic that passes through the four vertices of Q and has that point as the unique center. Similarly one can exhibit two conics such that one is uniquely centered at (1/2, 0) and the other at (0, p_2/(1 − p_1)).

Stating the general theorems

In this section we extend Theorems 3.1 and 3.3 to generic quadrilaterals. This is made possible by Lemma 4.1, which is trivial but critically important.

Lemma 4.1. Let Q be a quadrilateral such that no three points of Q lie on a line. There exists an affine transformation φ such that φ(Q) is a quadrilateral three of whose vertices are (0, 0), (1, 0), (0, 1).

A complete quadrilateral is the figure determined by four lines, no three of which are concurrent, and their six points of intersection [14] [9, pp. 61-62]. We do not use the notion of a complete quadrilateral directly, but it is related to Definition 4.2.

Definition 4.2. Let Q be a quadrilateral that is not a trapezoid. Let L_1, L_2, L_3, L_4 be the four extended sides of Q, in this particular order. If L_1 intersects L_3 at p, and if L_2 intersects L_4 at q, we call p and q the two hidden vertices of Q and (p + q)/2 the third diagonal midpoint of Q.

Theorem 4.3 (The locus of the center of a tangent conic is the Newton line). Suppose Q is a quadrilateral that is not a parallelogram. Suppose also that no three points of Q lie on a line. There exists a line L with the following properties: 1. The midpoints of the two diagonals of Q lie on L. If Q is not a trapezoid, the third diagonal midpoint of Q lies on L. 2. If c is the center of some ellipse or hyperbola tangent to the four extended sides of Q, then c lies on L. 3. Every point of L, except the midpoints of the two diagonals of Q and the third diagonal midpoint of Q, is the center of some ellipse or hyperbola tangent to the four extended sides of Q.

Proof. By Lemma 4.1, there exists an affine transformation φ such that φ(Q) is a quadrilateral three of whose vertices are (0, 0), (1, 0), (0, 1). Since affine transformations preserve parallelism and Q is not a parallelogram, φ(Q) is not a parallelogram. If φ(Q) is not a trapezoid, by Theorem 3.1 there exists a line L′ satisfying the desired properties for φ(Q). An application of Proposition 2.12 completes the proof. We provide the details here but omit them in similar future arguments. Let S := {center of K : K is an ellipse or hyperbola tangent to Q}. Let µ′, ν′ be the midpoints of the two diagonals of φ(Q), and let τ′ be the third diagonal midpoint of φ(Q). By Proposition 2.12, φ^{-1}(µ′), φ^{-1}(ν′) are the midpoints of the two diagonals of Q, and φ^{-1}(τ′) is the third diagonal midpoint of Q. By Theorem 3.1, L′ ⊇ {µ′, ν′, τ′} and φ(S) = L′ \ {µ′, ν′, τ′}, and it of course follows that S = φ^{-1}(L′) \ {φ^{-1}(µ′), φ^{-1}(ν′), φ^{-1}(τ′)}. Finally, note that since L′ is a line, φ^{-1}(L′) is a line. If φ(Q) is a trapezoid but not a parallelogram, by Theorem 3.2 there exists a line L′ satisfying the desired properties for φ(Q). An application of Proposition 2.12, similar to what is presented above, completes the proof. We now proceed to prove Theorem 4.4.
A complete quadrangle is a set of four points, no three lying on a line, and the six lines which join them [13]. We do not use the notion of a complete quadrangle directly, but it is related to Theorem 4.4.

Theorem 4.4 (The locus of the center of a passing conic is the nine-point conic). Let Q be a quadrilateral such that Q is not a trapezoid, the diagonals of Q are not parallel, and no three vertices of Q lie on a line. There exists a conic section H that satisfies the following properties: 1. If c is a center of some conic that passes through the four vertices of Q, then c lies on H. 2. Every point of H is the unique center of some conic that passes through the four vertices of Q. 3. H contains the midpoints of the four sides of Q, the midpoints of the two diagonals of Q, the two hidden vertices of Q, and the intersection of the two diagonals of Q. 4. The center of H is the arithmetic mean of the four vertices of Q. 5. H is a hyperbola if and only if Q is strictly convex.

Proof. By Lemma 4.1, there exists an affine transformation φ such that φ(Q) is a quadrilateral three of whose vertices are (0, 0), (1, 0), (0, 1). Since affine transformations preserve parallelism, and Q is not a trapezoid and has non-parallel diagonals, the same is true of φ(Q). By Theorem 3.3, there exists a conic section H′ satisfying the desired properties for φ(Q). An application of Proposition 2.12, similar to what was given in the proof of Theorem 4.3, completes the proof.

Conclusion

We have stated and proved in this paper two theorems regarding the locus of the center of a conic that is (a) tangent to the four extended sides of a quadrilateral and (b) passing through the four vertices of a quadrilateral. Theorems 4.3 and 4.4 are surprising and elegant. Take Theorem 4.3, for example. Why should it be that the locus of the center of a conic tangent to a quadrilateral is the Newton line of that quadrilateral? By default we would expect the locus to be a quadratic curve. Surely there must be some very deep and profound explanation why the locus is actually a line. Is Theorem 4.3 a special case of some greater theorem that we were not able to see in this paper? Is there some general principle that makes Theorem 4.3 hold? Our bland proof, unfortunately, answers neither of those questions; but it is valuable in that it rigorously establishes the truth of Theorem 4.3, a theorem whose beauty is hard to deny. The value of Theorems 4.3 and 4.4 is mostly aesthetic; however, Theorems 4.3 and 4.4 do have applications in practical problems, particularly those involving conics and quadrilaterals. Consider Theorem 4.3, for example. Since a conic has five degrees of freedom and the tangency condition takes away four, there is one degree of freedom remaining. Theorem 4.3 provides an easy parameterization of this degree of freedom, and from the proof of Theorem 3.1 one can extract the exact formula for the conic. The problem of inscribing the biggest possible ellipse inside a quadrilateral can thus easily be solved. Theorem 4.3 can be used in mechanical engineering in determining how far a cone can be inserted into a quadrilateral-shaped opening.

Acknowledgements

I would first and foremost like to thank Williams College and its generous financial aid program for having me as a student, which is what made the production of this paper possible. I would like to thank Professor Cesar Silva, who introduced me to and showed me the charm of pure mathematics and helped me tremendously in pursuing mathematical research.
I would like to thank the Wolfram Fundamental Physics Project, which provided me with invaluable guidance in my first steps as a researcher. I would like to thank Professor Ralph Morrison, Professor Cesar Silva, and Professor Steven Miller of the Williams Math Department for advising me on the formatting and publication of this paper. I am grateful to have access to Wolfram Mathematica, where I performed visualizations and checked algebraic results, which was instrumental in completing this paper. The YouTube content creator CodeParade has played a huge part in the production of this paper by proposing Conjecture 1.1. Last but not least, I would like to thank my family, my previous olympiad physics teachers Darkhan Shadykul and Margulan Tursynkhan, the National School of Physics and Mathematics in Astana, "Daryn" Center of the Ministry of Education and Science of Kazakhstan, MIT OpenCourseWare, educational content creators on YouTube, and all the other wonderful people and organizations, for giving me the background that I am so lucky to have.
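In the spirit of the Figure 1 experiment described in the introduction, Theorem 4.3 can be verified numerically. The sketch below is not the Mathematica code used for the figure and does not follow the construction in the proof of Theorem 3.1; instead it samples tangent conics through the dual quadratic form implicit in Lemma 2.10. The sampling trick, the variable names, and the choice p = (1.7, 1.3) are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([1.7, 1.3])                 # free vertex: Q = (0,0), (1,0), p, (0,1)
lines = [((0.0, 1.0), 0.0),              # side through (0,0) and (1,0): x2 = 0
         ((1.0, 0.0), 0.0),              # side through (0,0) and (0,1): x1 = 0
         ((p[1], 1 - p[0]), p[1]),       # side through (1,0) and p
         ((1 - p[1], p[0]), p[0])]       # side through (0,1) and p

# By Lemma 2.10, tangency of {x : n.x = b} to (x-c)^T A (x-c) = 1 means the
# quadratic form n^T A^{-1} n - (b - n^T c)^2 in (n1, n2, b) vanishes at (n, b).
# Conics tangent to all four sides form a 2-parameter family of such forms.
rows = [[n1*n1, 2*n1*n2, 2*n1*b, n2*n2, 2*n2*b, b*b] for (n1, n2), b in lines]
_, _, vt = np.linalg.svd(np.array(rows))
s0, s1 = vt[-2], vt[-1]                  # null-space basis spanning the family

mu, nu = np.array([0.5, 0.5]), p / 2     # midpoints of the two diagonals
worst = 0.0
for _ in range(3000):
    s = s0 + rng.normal() * s1
    S = np.array([[s[0], s[1], s[2]],
                  [s[1], s[3], s[4]],
                  [s[2], s[4], s[5]]])
    if abs(S[2, 2]) < 1e-8:
        continue                         # skip near-degenerate normalizations
    c = -S[:2, 2] / S[2, 2]              # the center encoded in the dual form
    dev = (c - mu)[0] * (nu - mu)[1] - (c - mu)[1] * (nu - mu)[0]  # cross product
    worst = max(worst, abs(dev))
print("max |(c - mu) x (nu - mu)| over samples:", worst)  # numerically zero
```

The key decoding step uses that the dual form expands to n^T (M − c c^T) n + 2b c^T n − b^2 with M = A^{-1}, so after normalizing the (b, b) entry to −1, the center c can be read off the mixed entries of the 3-by-3 matrix.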
Substrate Sequences Tell Similar Stories as Binding Cavities: Commentary

Similarities in binding cavities attract attention for the prediction and optimization of ligand selectivity. Glinca and Klebe propose a clustering based on physicochemical properties of the binding site analyzed with Cavbase and conclude that their novel cavity-based method tells more than sequences [5]. We agree that protein structures are key to the understanding of ligand recognition. Still, we think that sequences can tell a lot, if the focus is shifted away from protein sequences toward substrate sequences. We show that an analysis of protease substrates, inherently containing valuable information about binding site characteristics, can be directly utilized to predict potential off-targets. Selectivity is a central issue in drug design, as drugs frequently hit more than a single target [1]. Therefore, molecular modeling aims at the prediction of polypharmacology with different approaches followed. Applied methods include ligand-based and structure-based methods as well as network analyses [2−4]. Glinca and Klebe demonstrated recently that similarities in physicochemical characteristics of the binding cavity directly relate to overlapping substrate readout [5]. By application to protease test sets they show that their cavity-based approach yields similar results as analysis of ligand data from ChEMBL [6], thereby outperforming a similarity analysis of protease sequences. Hence, they conclude that "cavities tell more than sequences". We definitely agree that structural information on the binding site is crucial in the rationalization of substrate recognition. Still, we think that sequence information can contribute significantly to an understanding of substrate specificity, when the focus is shifted from protease sequences toward substrate sequences. A plethora of protease substrate sequences has been deposited in the MEROPS database in recent years [7]. They are frequently depicted as sequence logos [8] to visualize substrate preferences of proteases. Recently, we showed how these sequence logos can be utilized to yield a quantitative metric for protease specificity [9]. Thereby, we also showed that information on protein sequences only is insufficient to predict protease specificity. Furthermore, similarities in protease substrate recognition can be directly deduced via analysis of sequence logos [10]. We expect this approach to complement structure-based comparisons, as substrate sequences inherently contain information on binding site characteristics. Substrate peptides probe protease cavities via similar features as Cavbase [11] by binding of hydrophobic and hydrophilic, positively and negatively charged, and aromatic amino acids. We performed a substrate sequence-based similarity analysis of the serine protease test set of Glinca and Klebe. Substrate data was downloaded from MEROPS, normalized to the respective natural abundance of amino acids [12], and converted to vectors containing 20 amino acid probabilities at the 8 substrate positions P4 to P4′. After normalization, scalar products of these substrate vectors yield pairwise protease similarities ranging from 0 to 1 [10]. A comparison of all eleven serine proteases in the set yields a heat map depicting similarities in protease substrate recognition (see Figure 1). Furthermore, a hierarchical clustering based on complete-linkage yielding six clusters was performed as suggested by Glinca and Klebe.
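To make the workflow concrete, the following is a minimal Python sketch of the similarity and clustering computation just described. It is not the authors' code; the function names and the random stand-in profiles are our own assumptions, and real analyses would start from MEROPS substrate data normalized as described above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def similarity(f1, f2):
    """Scalar product of unit-normalized substrate vectors, in [0, 1]."""
    v1, v2 = f1.ravel(), f2.ravel()
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def cluster_proteases(profiles, n_clusters):
    """Pairwise similarities plus complete-linkage clustering of proteases.
    Each profile is an (8 positions P4..P4') x (20 amino acids) matrix."""
    names = list(profiles)
    sim = np.array([[similarity(profiles[a], profiles[b]) for b in names]
                    for a in names])
    dist = squareform(1.0 - sim, checks=False)   # condensed distance matrix
    tree = linkage(dist, method="complete")      # complete-linkage clustering
    return sim, dict(zip(names, fcluster(tree, n_clusters, criterion="maxclust")))

# Toy usage with random stand-in profiles for three proteases:
rng = np.random.default_rng(0)
profiles = {name: rng.random((8, 20)) for name in ["trypsin", "thrombin", "uPA"]}
sim, clusters = cluster_proteases(profiles, n_clusters=2)
print(np.round(sim, 2)); print(clusters)
```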
Figure 1. Heatmap obtained for clustering of proteases based on similarities in peptide substrates. Deep blue color depicts maximum similarity, whereas red regions show dissimilarity in substrate recognition. Six resulting protease clusters are separated with horizontal ...

The resulting protease similarity map and clustering shows pronounced overlap with the cavity-based analysis of Glinca and Klebe. Thus, substrate sequence analysis shows similar discriminative power as an analysis of binding pockets. Urokinase-type (uPA) and tissue-type plasminogen activator (tPA) form a consistent cluster as in the study of Glinca and Klebe. Furthermore, our clustering nicely groups trypsin, thrombin, and factor Xa (FXa), known to show pronounced overlap in substrate recognition of small molecules [13]. In conclusion we show that sequences can tell a lot on substrate recognition of proteases, if substrate sequences are considered. We are sure that peptide substrates comprise valuable information on protease recognition and propose their usage for the prediction of off-target effects, thereby complementing structure-based approaches.
Phage Display Informatics

Phage display is an efficient laboratory technique that can be used to screen for specific peptides and proteins displayed on the surface of bacteriophage. Since Professor George Smith of the University of Missouri pioneered the powerful and flexible method in the 1980s [1], it has been adapted and improved by many scientists from various fields. For example, the sequence displayed on the coat proteins of phage has been extended from random peptides to protein fragments, enzymes, antibodies, and even the whole peptidome of a given species [2]; the way of panning has been expanded from in vitro to in vivo [3]; the platform for screening has been extended from plates and beads to microfluidic devices [4]. In addition to the development of the "hardwares" of phage display, researchers in closely relevant fields have also witnessed the birth and burst of "softwares" for managing enormous amounts of data on phage display and for making biological discoveries or predictions [5, 6]. With the spread of the phage display technique and the progress of its "hardwares" and "softwares," it has made a great impact on modern medicine. For instance, phage display has been widely used for epitope mapping, analysis of protein-protein interactions, prediction of drug targets, and identification of enzyme substrates and inhibitors. Some antibodies and peptides derived from phage display technology have been developed into new drugs approved by the FDA; others have shown promise for the development of diagnostics, vaccines, and the targeted delivery of therapeutics. In these achievements, informatics methods play an increasingly important role. In this special issue, we take an interest in the investigation of computational and mathematical methods and their applications in all fields using phage display. For both experimental biologists and computational biologists, mapping conformational B-cell epitopes is a very challenging task. The paper "Bioinformatics resources and tools for conformational B-cell epitope prediction" contributed by P. Sun et al. summarized the recent advances in bioinformatics resources and tools for the prediction of conformational B-cell epitopes. According to their review, prediction methods based on the experimental results of phage display have become one major category of all algorithms. B. He et al. panned the Ph.D.-12 phage display peptide library against metuximab, a new drug for radioimmunotherapy of hepatocellular carcinoma approved by the State Food and Drug Administration of China in 2005, in the paper "Epitope mapping of metuximab on CD147 using phage display and molecular docking." After cleaning their phage display data computationally, they predicted for the first time the complete epitope recognized by metuximab based on the analyses of mimotopes. Very interestingly, the prediction based on phage display largely overlapped with their docking result and the CD147-CD147 interfaces in the CD147 crystal structure. Consequently, they proposed that blocking the formation of the CD147 dimer might be an important mechanism of metuximab function. The study by B. He et al. demonstrates that the prediction of conformational B-cell epitopes based on phage display is a cheap and quick strategy with an acceptable accuracy. Though phage display was born for biomedicine studies, it has already gone beyond this field. For example, it has shown its power in the research for new materials, new energy, environmental protection, and agriculture. R. Kushwaha et al.
reviewed discoveries via phage display that have impacted the use of agricultural products in "Uses of phage display in agriculture: a review of food-related protein-protein interactions discovered by biopanning over diverse baits." Some parts of this review are relevant to medicine and new energy. For instance, the application of phage display in studies of food allergy and biofuel production was highlighted. Moreover, the utilization of phage display in the defense of plants against herbivores and microbes was discussed. It was expected that phage display and relevant computational methods would become more popular in agricultural research. Indeed, in another paper, "Uses of phage display in agriculture: sequence analysis and comparative modeling of late embryogenesis abundant client proteins suggest protein-nucleic acid binding functionality" by R. Kushwaha et al., sequence analysis and homology modeling were used to study 21 client proteins identified by phage display. The results from this initial computational study will guide their future efforts to uncover the mechanisms by which these proteins protect plant seeds during heat stress. As we mentioned previously, the blueprint of phage display proposed by Professor George Smith has inspired many scientists to adapt and improve this technique. Different phages and various coat proteins have been tested to construct new phage display systems. As the genomes of hundreds of phages have been sequenced, identification of their virion proteins will be helpful for the development of new phage display systems. P.-M. Feng et al. presented a Naive Bayes-based method that can predict phage virion proteins using amino acid composition and dipeptide composition in "Naive Bayes classifier with feature selection to identify phage virion proteins" (a minimal sketch of such feature extraction follows this editorial). In their jackknife test, the classifier achieved an accuracy of 79.15% in separating phage virion and nonvirion proteins, which was superior to other state-of-the-art methods. Using next-generation sequencing techniques to enable cost-effective high-throughput analysis is a new trend in phage display technology. However, this trend suffers from errors in deep sequencing data, which may exceed 1%. W. Matochko et al. proposed a linear algebra framework for analyzing errors in a medium-scale 7-mer peptide library sequenced by the Illumina method in "Error analysis of deep sequencing of phage libraries: peptides censored in sequencing." As technical capabilities and the depth of sequencing increase, the method will be applicable to larger libraries as well. In summary, the six papers in this volume cover various aspects of informatics tools and their applications in several fields using the phage display technique. As a snapshot of phage display in the information age, this issue demonstrates that phage display in the 21st century is being transformed from a purely lab-based science into an information science as well, which can make it even more powerful. With the rapid development of the "hardware" and "software" of phage display and of information technology, we can even expect an in silico phage display system in the future. Jian Huang Yanxin Huang Ratmir Derda
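The feature sets used by Feng et al. — amino acid composition (AAC, 20 residue frequencies) and dipeptide composition (DPC, 400 pair frequencies) — are straightforward to compute. Below is a minimal, hypothetical Python sketch of this feature extraction feeding a Naive Bayes classifier; the toy sequences are invented for illustration, and the feature selection step of the original method is omitted, so this is not the authors' implementation.

```python
# Illustrative sketch only: AAC + DPC features for a Naive Bayes classifier.
# Toy sequences and labels are hypothetical; Feng et al. additionally applied
# feature selection, which is omitted here.
from itertools import product
from sklearn.naive_bayes import GaussianNB

AA = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = ["".join(p) for p in product(AA, repeat=2)]

def aac(seq):
    """Amino acid composition: 20 residue frequencies."""
    return [seq.count(a) / len(seq) for a in AA]

def dpc(seq):
    """Dipeptide composition: 400 overlapping-pair frequencies."""
    pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
    return [pairs.count(d) / len(pairs) for d in DIPEPTIDES]

def features(seq):
    return aac(seq) + dpc(seq)  # 420-dimensional feature vector

# Hypothetical training data: 1 = virion protein, 0 = non-virion protein.
train = [("MKKLLFAIPLVVPFYSHS", 1), ("MSIQHFRVALIPFFAAFC", 0),
         ("MKQSTIALALLPLLFTPV", 1), ("MNKTELIQVIAEKAELSK", 0)]
X = [features(s) for s, _ in train]
y = [label for _, label in train]

clf = GaussianNB().fit(X, y)
print(clf.predict([features("MKYLLPTAAAGLLLLAAQ")]))
```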
2016-05-12T22:15:10.714Z
2013-12-19T00:00:00.000
{ "year": 2013, "sha1": "b997c62e4645c88d6140dc8982078efdb79095c8", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/cmmm/2013/698395.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5c50788a2246f3cd16a6ba812aba20787f740e7e", "s2fieldsofstudy": [ "Computer Science", "Biology" ], "extfieldsofstudy": [ "Medicine", "Computer Science", "Biology" ] }
76661042
pes2o/s2orc
v3-fos-license
Transposable elements contribute to fungal genes and impact fungal lifestyle The last decade has brought still-growing experimental evidence of the mobilome's impact on host gene expression. We systematically analysed the genomic location of transposable elements (TEs) in 625 publicly available fungal genomes from the NCBI database in order to explore their potential roles in genome evolution and correlation with species' lifestyle. We found that non-autonomous TEs and remnant copies are evenly distributed across genomes. In consequence, they also massively overlap with regions annotated as genes, which suggests a great contribution of TE-derived sequences to the host's coding genome. Younger and potentially active TEs cluster with one another away from genic regions. This non-randomness is a sign of either selection against insertion of TEs in gene proximity or target site preference among some types of TEs. Proteins encoded by genes with old transposable element insertions have significantly fewer repeat and protein-protein interaction motifs but are richer in enzymatic domains. However, genes merely proximal to TEs do not display any functional enrichment. Our findings show that adaptive cases of TE insertion remain a marginal phenomenon, and the overwhelming majority of TEs are evolving neutrally. Finally, animal-related and pathogenic fungi have more TEs inserted into genes than fungi with other lifestyles. This is the first systematic, kingdom-wide study concerning mobile elements and their genomic neighbourhood. The obtained results should inspire further research concerning the roles TEs have played in evolution and how they shape the life we know today. Transposable elements (TEs) constitute a significant but understudied fraction of eukaryotic genomes. They are mobile genetic units that proliferate and expand to distant genomic regions. TEs are classified into two classes based on their transposition mechanism. Class I groups elements that transpose using an RNA intermediate, whereas Class II members skip the RNA intermediate and transpose directly from DNA to DNA 1 . The TE landscape of most eukaryotic genomes consists of Class I representatives, including retrotransposons with Long Terminal Repeats (LTR retrotransposons), Long Interspersed Nuclear Elements (LINEs) and Short Interspersed Nuclear Elements (SINEs), as well as of Class II DNA transposons that encode a classic DDE transposase ("cut and paste" DNA TEs, TIRs) or follow a yet unknown mechanism of transposition, e.g. Helitrons and Polintons/Mavericks. For a long time, transposable elements were considered just another species of "junk DNA", and the hypothesis on their regulatory roles raised by Barbara McClintock 2 remained ignored. Their impact on eukaryotic evolution and genome function is still a matter of vigorous debate between two extremes: TEs as passive genetic material for selection on one side, and powerful factors that immediately impact cell and organism fate on the other 3,4 . Nonetheless, TEs can be considered molecular parasites, which introduce mutations and eventually contribute significantly to genome size inflation [5][6][7] . Like other parasites, they take part in an arms race against the host's defence mechanisms, and organisms have developed multiple complex mechanisms to keep their genomes clear of foreign DNA. The most common are DNA methylation 8 , targeting by tRNA-derived small RNAs 9-11 , RNAi-mediated silencing 12 and repeat-induced point mutations 13 .
TE insertion breaks continuity of co-selected traits, alters gene transcription, leads to chromosomal rearrangements by promoting recombination 14,15 and promotes insertional mutations, which can impose deleterious consequences on target loci 4 . In the last decade, remarkable examples of TE functional impact on hosts, mostly animal, have been described, including organ development 16 , karyotype changes 17 , cell fate regulation 18 and stress response modulation 19 . TE-derived genes play crucial roles in all living organisms and massively alter expression of proximal genes 20 . A TE can modify host transcripts via exonisation of itself, induction of original exon skipping, which leads to alternative transcripts, insertion into an ORF (into an existing frame) creating a new fusion protein, and insertion of alternative polyadenylation signals. It can also interfere with gene regulation by delivering novel, illegitimate promoter sequences. For example, a single transposon-derived protein, CSB-PGBD3 (a domesticated transposase), can interact with as many as 900 remnant TE sequences and plays roles in gene regulation upon DNA damage 21 . Also, the host's protein-coding mRNAs can occasionally be retrotransposed by retrotransposon-related machinery, which can result in the formation of novel pseudogenes and genes 22 . The latter might eventually donate polyadenylation sites to neighbouring genes and further expand transcript diversity 22 . Phenomena resulting from gene-transposable element proximity have been thoroughly studied mainly for model animals 20,23,24 and plants 3 , and only a few studies included fungal genomes despite the abundance of genomic resources [25][26][27] . For instance, a remnant LTR retrotransposon insertion into the promoter region of a gene coding for the MFS1 transporter was found to induce overexpression of this gene and to enhance fungicide resistance 28 . Also, gene clusters can be regulated by neighbouring TEs, e.g. the penicillin cluster in Aspergillus nidulans has lower expression in the absence of the Pbla element 29 . In Schizosaccharomyces pombe, the Tf1 element has a preference for promoters of stress-related genes, which eventually enhances their expression and promotes survival of the fungus 30 . TE neighbourhood within a window of 1 kb has a repressive effect on neighbouring genes in fungi equipped with functional methylation machinery, but casts no such effect in Saccharomyces cerevisiae, which lacks methylation 25 . Genes within 1 kb of a Gypsy or hAT transposon have lower expression in Coccidioides immitis 31 . In this organism, TEs are often inserted in proximity of phosphorylation-related genes. Castanera and colleagues also showed that the presence of TE clusters has more pronounced regulatory effects on gene expression as compared to a single TE upstream or downstream 25 . Some fungal pathogens of plants have genomes with a clearly dualistic architecture described by the two-speed model of evolution. The core genome is densely packed with housekeeping genes, while a lifestyle-adapting part contains effector genes and TEs 7,32 . This genome architecture was reported for versatile fungal pathogens, among them Fusarium 33 , Leptosphaeria 34 and Verticillium 35 .
The lifestyle-specific genome is expected to be enriched in TEs, as they may play roles in host switching and adaptation to new ecological niches 36 , which can be observed in Magnaporthe oryzae, where genes associated with TEs are involved in host specialization 37 . In consequence, even closely related fungal taxa may differ significantly in transposable element content, e.g. Amanita species with saprophytic and mycorrhizal lifestyles 38 . Encouraged by the aforementioned experimental screenings demonstrating the impact of TEs on gene expression, we performed a systematic analysis of their genomic context in publicly available fungal genomes. Here, we investigate the immediate neighbourhood of transposable elements, with special focus on co-localizing genes. Moreover, we interpret our results from a lifestyle perspective. Methods Genomes and transposable elements. Fungal proteomes were downloaded from NCBI on 17th August 2016 39 and genomic sequences were downloaded from the NCBI genome portal on 18th August 2016. The 625 genomic assemblies with corresponding proteomes analysed in this study are listed in Supplementary Table S1. Genome sequences deposited at the NCBI were obtained using diverse sequencing techniques, with different sequencing depths, and were assembled and annotated using a plethora of approaches. In consequence, gene-calling inconsistencies and missing genome fragments are to be expected; to deal with this, our study focuses on general trends instead of singularities. Genomic coordinates of TEs were inferred in the course of de novo and homology-based TE annotation. irf (inverted repeat finder) 40 (parameters used: matching weight 2, mismatching weight 3, indel penalty 5, match probability 80, indel probability 10, minimum alignment score to report 20, maximum stem length to report 500000, MaxLoop 10000, additional options: -a3 -t4 1000 -t5 5000) and RepeatModeler 41 were used to detect TE candidates de novo. irf hits were classified using the RepeatModeler annotating script. Multiple overlapping hits were removed by clustering with RepBase database entries 42 using CD-HIT 43 , and the resulting dataset of TE consensus sequences was used as a library for a RepeatMasker homology search 44 (RepeatMasker was invoked with options: -gccalc -no_is; TEs with scores above 200 were taken). All the resulting sequences were scanned with a manually curated list of reference Pfam HMM profiles (using pfam_scan.pl with E-value threshold 0.01) 45 and CDD profiles (RPS-BLAST with E-value threshold 0.001) 46 listed in Supplementary Table S2. This TE annotation pipeline has been successfully employed previously in a study of DNA TEs 47 as well as in a growing number of genome annotation studies [48][49][50] . The chosen protein domains are either associated with TE activity or related to TEs and were collected based on TE architectures known from RepBase and the literature. The elements containing sequences similar to known TE-related domains are labelled throughout the manuscript as "with domain" transposable elements. Sequences without detectable similarity to known TE domains were considered fragments and remnants of old TEs. A schematic workflow of the analyses is shown as Supplementary Fig. S1. Neighbourhood classification. Three classes of TE neighbours were defined: (i) nothing, (ii) other TE and (iii) gene. In order to provide a robust and consistent neighbourhood classification, we defined the following set of rules.
First of all, to accommodate varying genome architectures, for each species an adaptive scanning window size was estimated as the median of gene distances in the whole assembly (Supplementary Table S1), capped at 1 kb. The minimal median gene distance was 71 bp for Enterocytozoon bieneusi and the maximal 8,997 bp for Edhazardia aedis. In total, 12 analysed assemblies had a window narrower than 100 bp, while 79 had median gene distances greater than 1 kb (and thus used the capped window). All protein sequences encoded by genes partially overlapping with TE coordinates were scanned against a list of TE-related protein domains using the pfam_scan.pl tool. If the gene had no detectable TE-related domains, the TE borders were shortened and the gene became the TE's immediate neighbour; otherwise, the gene was included within the TE's borders and the neighbourhood was determined against the expanded TE coordinates. If a TE fully covered a gene, it replaced this gene in further neighbourhood assessment. Moreover, if a neighbouring gene contained an inner TE, which was also located within the window distance to the analysed TE, this inner element was annotated as a neighbour (Supplementary Fig. S2). When two or more TEs overlapped, they were merged together and tagged with the most specific annotation common to all participating TEs. When the merged TEs were of entirely distinct types, the newly defined TE was tagged as 'composite'. A TE inserted into a gene can reside within a 3′ UTR, 5′ UTR, intron or exon. Unfortunately, the majority of analysed assemblies lacked gene inner structures, and even fewer included UTRs at all. In consequence, we were not able to study the detailed location of TEs at a sub-genic level. The encoded proteins were scanned for secretion signals using TargetP 51 and were assigned to GO categories using the pfam2GO table 52 . Data analysis. All genomes with incomplete annotation, for instance without gene predictions, were excluded from the analysis, as mentioned above. Genome statistics (size, density, introns per gene) were computed based on the assembly sequences and gff annotation files downloaded from the NCBI database. Since gene calling strategies vary in reliability between genomes and initial data quality directly impacts our neighbourhood analyses, we have selected only highly significant patterns emerging from the analyses described in this manuscript. Information on fungal lifestyles, as in our previous study on DNA transposons 47 , was derived from the available literature. Categories including host type (plant, animal, fungus), main habitat (soil/dung, water) and lifestyle (pathogenic, symbiotic and saprotrophic) were assigned to every species in the dataset. Notably, a single fungus could represent multiple categories, if applicable, e.g. a species functioning both as a plant symbiont and an animal pathogen (see Supplementary Table 1). Taxonomic annotation was derived from the NCBI taxonomy database, with manual curation when needed (see Supplementary Table 1). TE types were described using a 2-level hierarchy comprising Wicker's orders/Repbase classes (e.g. LINEs, SINEs, LTRs) and superfamilies (e.g. Copia, hAT). Exploratory analysis and basic statistics for the dataset were carried out using the pandas and seaborn Python packages. Statistical tests were performed in Python with the scipy package. Distributions of distances between TEs and genes for fungi with different lifestyles were compared with the Mann-Whitney U test.
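To make the window rule above concrete, the following is a minimal Python sketch of the adaptive scanning window (per-assembly median of gene distances, capped at 1 kb) together with a simplified gene-neighbourhood test. It is an assumed reconstruction for illustration only, not the actual pipeline used in the study.

```python
# Minimal sketch (assumed reconstruction, not the study's pipeline) of the
# adaptive scanning window and a simplified TE-gene neighbourhood test.
import numpy as np

def scan_window(gene_starts, gene_ends, cap=1000):
    """Median gap between consecutive genes in an assembly, capped at `cap` bp."""
    order = np.argsort(gene_starts)
    starts = np.asarray(gene_starts)[order]
    ends = np.asarray(gene_ends)[order]
    gaps = np.maximum(starts[1:] - ends[:-1], 0)  # overlapping genes -> gap 0
    return min(float(np.median(gaps)), cap)

def is_gene_neighbour(te, genes, window):
    """True if the TE interval overlaps a gene or lies within `window` bp of one."""
    te_start, te_end = te
    return any(gs - window <= te_end and ge + window >= te_start
               for gs, ge in genes)

genes = [(100, 400), (900, 1300), (2500, 3000)]
window = scan_window([g[0] for g in genes], [g[1] for g in genes])
print(window, is_gene_neighbour((1350, 1500), genes, window))  # 850.0 True
```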
Relationships between the number of TEs inserted into genes and other genome statistics were evaluated using McFadden's R-squared for logistic regression with binomial errors. The logistic regression models were built with the statsmodels package. Enrichment analyses were performed using the binomial distribution, and upper bounds for p-values were computed with a formula derived from Hoeffding's inequality, P(X ≥ k) ≤ exp(−2n(k/n − p)²), where n is the number of trials, k is the number of successes and p is the success probability. The genome features are available in Supplementary Table S1, and Supplementary Table S2 lists protein domains either associated with TE activity or related to TEs. Our TE counts are likely to be underestimated, and there are two major reasons for that. The first and more fundamental one is a derivative of the methods used in whole genome sequencing projects, which rely mostly on sequencing reads of lengths insufficient for effective reconstitution of long repeat regions. The second reason lies in our approach, as we chose to apply rather stringent filtering of identified TE fragments in order to increase the method's reliability. All TE candidates had to be confirmed with RepeatMasker using an extended fragment library as described in the Methods section. Additionally, all TEs regarded as still functional were required to contain at least one known TE-related protein domain. Fungal genomes have different gene densities and architectures, ranging from very compact in endoparasitic Microsporidia to relatively big and complex Tuber and Puccinia genomes. A question arises whether and how such rough genome characteristics correlate with TE localization in different taxa. We found that non-autonomous TEs and remnants massively overlap with regions annotated as genes. These results suggest a great contribution of TE-derived sequences to the host's genes (Fig. 1). 50.6% of non-autonomous TEs are inserted into a genic region (1,024,918), and 11.6% (235,593) of TE fragments were found in proximity of a gene on either side, being equally ubiquitous downstream and upstream of genes (116,722 downstream, 118,871 upstream). The location of a TE fragment between two genes is relatively rare (1.8% of TEs, 36,841). That totals to 64% of non-autonomous TEs co-localising with genes and points at the compact architecture of many fungal genomes, assuming random distribution of ancient TEs and genes. More compact genomes host more remnant TEs inserted into genes as compared to genomes with greater non-genic space (Fig. 2A). 14.6% of TEs had another TE as a neighbour either upstream (147,874) or downstream (147,114), 11.8% (238,024) of TEs were located in between other TEs, while 9.6% of TE fragments (193,448) had neither genes nor TEs identified within the chosen scanning window. In total, 35.9% of the analysed TEs either had other TEs as exclusive neighbours or lacked neighbourhood at all. Active TEs cluster with other TEs. Transposable elements with at least one protein domain typical for mobile elements have a distinct genomic neighbourhood profile. They are rarely found within or in close proximity of genes (less than 15.9%, 46,789 of these elements are inserted into a gene and 16.2%, 47,493 are close to a gene) and tend to cluster with other TEs (almost 49.5%, 145,384) or locate in regions without genes and other TEs (18.4%, 54,080).
Academ is an exception here, because most of its copies contain recognizable protein domains themselves (81%, 1,568/1,938) and are classified as overlapping with a gene. These domains are encoded by a TE but are not classified as TE-specific (e.g. the DEAD/DEAH box helicase domain (PFAM: PF00270) or Replication, recombination and repair, recQ_fam (CDD:129701)), which confounds the classification criteria and eventually makes Academs frequently annotated as host genes. This non-random distribution of potentially active TEs might be a sign of general negative selection imposed on TEs interfering with gene coding regions, or of target site preference as observed for some types of TEs (e.g. Zisupton 54 ). Even if insertion preference might play a pivotal role in shaping the genomic landscape of active elements, once they became inactivated, the evolutionary pressure against them faded and TE fragments have survived in genomic areas where active TEs are not tolerated. Genome properties and TE localization. There is a significant correlation (R 2 McF = 0.53 for TEs with a domain and R 2 McF = 0.65 for TE fragments) between the fraction of TEs targeting genes and genome compactness measured as the fraction of the genome occupied by genes (Fig. 2A,B). The smaller the gene distances and the fewer the non-genic regions, the more TE-related sequences overlap with genes, likely as a result of the scarcity of other genomic locations. The overall ubiquity of remnant TEs in gene neighbourhoods can be a consequence of the random distribution of TEs resulting from the neutrality of old and fragmented TEs, the lack of traceable target site preference among most types of TEs, and most probably the recurrent usage of ancient TE-derived sequences. Interestingly, we observe a bimodal distribution of in-gene insertion frequency for TEs with TE-related domains (Fig. 2C). The two peaks correspond to two distinct genome architectures within fungi: one with a higher fraction of both remnant and coding TEs in genes (mostly in Saccharomycotina, see Supplementary Fig. S3) and the other with only remnant TE debris located within genes (filamentous fungi). The former TE distribution is peculiar and might be a consequence of selection for genome compactness in Saccharomycotina. Remnant TEs populate enzyme-encoding genes. Non-autonomous TEs and TE remnants. Protein-coding genes impacted by old TE insertions are significantly depleted in protein repeat motifs such as Ankyrin and WD40 and in protein-protein interaction domains like F-box (see Supplementary Table 3). This pattern has not been described so far and will be explored in detail in further studies. One might expect that repeat sequences would appear as artefacts in de novo TE searches, mainly due to large families present in a single genome. However, the obtained result showing protein repeat underrepresentation can be a hallmark of the method's robustness and supports the lack of such artefacts, at least at the protein level. Additionally, it might suggest a previously undescribed selection pattern yet to be understood. Fragments of LINEs co-localise with ATP-synt_ab_N ATP synthases (PF02874) and Metallophos phosphoesterases (PF00149). Non-autonomous LTR retrotransposons are preferentially associated with genes coding for Aconitase (PF00330), Catalase (PF00199), Peptidase_M41 (PF01434) and Chitin_synth_1 synthase (PF01644).
Remnants of DNA TEs are found with genes coding for Glyco_hydro_3_C hydrolase (PF01915) and PNP_UDP_1 phosphorylases (PF01048). Helitron remnants can be found in proteins with Peptidase_S8 (PF00082) and Pkinase (PF00069) domains. TEs with a coding region. Functional transposable elements rarely insert into genes and do not show a statistically significant preference for specific protein domains. Usually, they cluster with other TEs in genomic areas containing fewer genes. Moreover, genes infested by them often carry TE-related domains and are likely to be TEs misannotated as genes and included in proteomes. TEs tend to insert into other TEs, leading to the formation of TE clusters or composite elements 55,56 . TE location, abundance and host ecology. Animal-related and pathogenic fungi have more TEs inserted into genes as compared to fungi with other lifestyles (Fig. 3). Plant-related, saprotrophic organisms and those living in soil or on dung have fewer TEs overlapping with genic regions. This effect is straightforwardly correlated with the genome compactness of animal-pathogenic fungi and the genome expansion present in many plant-associated fungi 7 . Genome architecture seems to be the dominant factor determining the relationship between the coding and non-coding genome. Plant-associated fungi have a greater average distance between TEs and genes (370 bp) and fewer genes close to TEs as compared to non-plant-related fungi (351 bp between gene and TE on average, p-value = 7.7e-78). Both features are likely attributable to greater genome sizes and an overall decrease in gene density. Small secreted proteins. Small secreted proteins (SSPs) are often related to a plant-associated lifestyle, providing effector activity that modulates host performance 57 . Plant-pathogenic fungi are known for their peculiar genome architecture with fast-evolving genomic regions rich in repeat proteins, SSPs and TEs 7 . We tested whether SSPs would indeed cluster with TEs in terms of the neighbourhood defined in this paper. The SSPs were defined either as shorter than 300 amino acids and predicted to be secreted, or as additionally possessing more than 5% cysteines (which narrowed the gene list). Regardless of the applied definition, we found no statistical support for an association between SSPs and TE neighbourhood. One possible explanation would be that in fast-evolving genome parts, the maximum distance allowing for TE-gene influence might be bigger than the 1 kb averaged over many genomes, with the majority of them having a more uniform genome architecture. Our analyses are also limited by assembly fragmentation, particularly affecting repeat-rich genome regions. SSPs, understood as genes coding for short secreted proteins, constituted 3.6% of all neighbouring genes and 6.1% of all protein-coding genes. These values varied among genomes, with Agaricomycetes (n = 72, mean of 78) having more SSPs in TE neighbourhood than Tremellomycetes (n = 31, mean of 10). Among Pezizomycotina, Eurotiomycetes had fewer SSPs co-localising with TEs (n = 122, mean 39) than Dothideomycetes (n = 39, mean 69) and Leotiomycetes (n = 33, mean 103), the latter being the most SSP-rich in proximity of TEs. Discussion The aim of this study was to explore the neighbourhood of fungal transposable elements, either functional or not. TEs are intrinsically linked to genome evolution and constitute a minor but ubiquitous fraction of most fungal genomes.
Their roles as potent regulatory elements, genomic parasites and nearly neutral sequences are being revised constantly 58 . According to Arkhipova and others, most transposable elements remain silent, evolve in a neutral fashion, and only a minor fraction ever gets involved in adaptive roles 59 . Our results seem to confirm this perspective, showing no correlation between TE neighbourhood and gene function for many TE families and remnant elements. We might not be able to detect rare events at this large scale, e.g. a new regulatory network that uses TEs as TF binding sites. With the advancement of single-cell sequencing technologies, it will soon become feasible to observe TE movements and distribution across fungal populations without being limited to model organisms only. The observed localisation of TEs in 625 fungal genomes shows a dichotomy between relatively young elements depleted in genes, and remnant sequences clearly derived from transposable elements, now more deteriorated, which are equally likely to be found both within genes and in other locations. This phenomenon provides a pathway to exaptation of TEs, producing new coding regions and serving as evolutionary raw material for selection. The significance of exaptation in the course of Metazoa evolution has been noted by Schrader and Schmitz in their review on TEs in adaptive evolution 58 . There are numerous factors shaping TE distribution, ranging from target site preferences in some retrotransposons favouring insertion upstream of polymerase III transcribed genes 60 and strand preference in LTR retrotransposons, via genome rearrangements, to forces of selection and genetic drift acting at a population scale and removing TEs with deleterious phenotypes 56,61 . Regardless of the insertional preference present in some TE types, the overall pattern of genomic distribution of both functional and dead elements corroborates a random fashion of TE dispersal within genomes. These genomic parasites remain active outside of genes, where they are less likely to cause deleterious mutations. TEs with a coding region are predominantly at a distance from host genes, which might be related to the repressive effect of many TEs on neighbouring genes 31 . Remnant TEs are not subjected to such constraints and can now be used as raw material for new coding sequences. The proportion of the genome originating from TEs varies in different fungal lineages, as shown previously 7,47 . The bigger the genome, with greater distances between genes, the fewer TEs overlap with genes. The observed pattern suggests the presence of constraints imposed on the size of small genomes, despite the multiplication of TEs and the randomness of the insertion process. In consequence, small genomes remain small, and large ones grow. The growth of big fungal genomes can be attributed to genetic drift: they change with time, gaining new slightly deleterious mutations, mobile elements and introns [62][63][64] . On the contrary, the very compact genomes of yeast-like organisms are likely a result of selection 62,65 . Genome architecture seems to depend on fungal ecology. Most fungi with complex genomes shaped by numerous TEs are plant-associated, which has been noticed previously 7,47 . Plant-related fungi are known to use SSPs to deal with the plant's immune reaction.
It has been claimed that SSP-coding genes co-localise with TEs; however, we did not observe this effect. The latter effect can be masked by the underrepresentation and fragmentation of repeat-rich genomic locations in assemblies. Surprisingly, our findings point at several previously unreported correlations between the occurrence of TE-gene overlap and an animal-related and/or pathogenic host lifestyle. It remains an open question whether there is a causative relation between fungal ecology and TE distribution in the genome; it may be validated by experiments involving multiple high-quality genomes and transcriptomes from closely related taxa differing in lifestyle. Analysis of an extensive dataset of genomes covering organisms of diverse genome sizes, lifestyles, taxonomic positions and TE abundances enabled us to ask whether TE insertions are linked to specific functional categories described previously, such as stress response 66 , mutualism 67 or phosphorylation 31 . We found no general relationship between the aforementioned biological functions and TE neighbourhood. This finding may suggest that these phenomena are taxon-specific. However, we did find associations between TEs and several unrelated enzyme classes, for particular fungal lineages and TE classes. Our conclusion supports Arkhipova's hypothesis that adaptive roles of TEs remain statistically undetectable and constitute a case-by-case phenomenon. We might hypothesise that TEs can play diverse roles, including adaptive ones, in the course of evolution of particular fungal populations, each being shaped by its own constraints. When analysed together, these specific cases are masked by the dominant random and neutral fashion of TE evolution. Data Availability Information processed in the statistical analyses is available as Python code and Excel tables.
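Since the statistical analyses were released as Python code, a minimal sketch of the Hoeffding-derived p-value bound quoted in the Methods may be useful for orientation. This is an assumed reconstruction for illustration, not the authors' released code.

```python
# Minimal sketch (assumed reconstruction) of the Hoeffding-style upper bound
# on the binomial tail used for enrichment p-values in the Methods.
import math

def hoeffding_pvalue_bound(n, k, p):
    """Upper bound on P(X >= k) for X ~ Binomial(n, p); informative when k/n > p."""
    if not (0 < p < 1 and 0 <= k <= n):
        raise ValueError("require 0 < p < 1 and 0 <= k <= n")
    if k / n <= p:
        return 1.0  # no enrichment signal; the bound is trivial
    return math.exp(-2.0 * n * (k / n - p) ** 2)

# Example: 40 successes in 200 trials against a background rate of 0.1
print(hoeffding_pvalue_bound(200, 40, 0.1))  # exp(-4) ~ 0.0183
```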
2019-03-15T02:58:03.166Z
2019-03-13T00:00:00.000
{ "year": 2019, "sha1": "6aa8ec9905728c5340e1a72c6fc6746605169ad1", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-40965-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6aa8ec9905728c5340e1a72c6fc6746605169ad1", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
18954452
pes2o/s2orc
v3-fos-license
Clustering and principal-components approach based on heritability for mapping multiple gene expressions. When the number of phenotypes in a genetic study is on the scale of thousands, such as in studies concerning thousands of gene expression levels, single-trait analysis is computationally intensive, and heavy adjustment for multiple comparisons is required. Traditional multivariate genetic linkage analysis for quantitative traits focuses on mapping only a few phenotypes and is not feasible for a large number of traits. To cope with high-dimensional phenotype data, clustering analysis and principal-component analysis (PCA) are proposed to reduce the data dimensionality and to map shared genetic contributions for multiple traits. However, standard clustering analysis and PCA are applicable to independent observations. In most genetic studies, where family data are collected, these standard analyses can only be applied to founders and can lead to a loss of information. Here, we proposed a clustering method that can exploit family structure information and applied the method to 29 gene expression levels mapped to a reported hot spot on chromosome 14. We then used a PCA approach based on heritability, applicable to a small number of traits, to combine phenotypes in the clusters. Lastly, we used a penalized PCA approach based on heritability, applicable to an arbitrary number of traits, to combine the 150 gene expression levels with the highest heritability. Genome-wide multipoint linkage analysis was carried out on the individual traits and on the combined traits. Two previously reported peaks on chromosomes 14 and 20 were identified. Linkage evidence was stronger for traits derived from methods that incorporate family structure information. Background Gene expression levels, treated as complex quantitative traits, have been found to show familial aggregation [1]. The microarray technique allows measurement of thousands of gene expression levels simultaneously, providing an opportunity to map genetic determinants that regulate multiple expression levels. To locate such determinants, single-trait analysis can be performed on each individual trait and the results can be compared [2]. However, when the number of gene expression phenotypes is on the scale of thousands, single-trait analysis is computationally intensive, and heavy adjustment for multiple comparisons is required. Traditional multivariate genetic linkage analysis for quantitative traits focuses on mapping only a few phenotypes and is not feasible when the number of phenotypes is large [3]. To cope with high-dimensional phenotype data, clustering analysis and principal-component analysis were proposed to reduce dimensionality and to map shared genetic contributions for multiple traits [4]. However, standard clustering analysis and principal-component analysis are applicable to independent observations. In most genetic studies, when family data are collected, these standard analyses are applied only to founders and can lead to loss of information [2]. Here, we proposed a clustering approach that takes the family structure information into account. We then used a principal-components approach based on heritability proposed by Ott and Rabinowitz [5] to combine the phenotypes in each cluster. By maximizing the heritable component of the trait variation, this approach may increase the power of linkage analysis on the combined trait, because standard principal-components analysis may maximize the non-genetic variance component [5,6].
The methods of Ott and Rabinowitz [5] are only applicable to a small number of phenotypes. We thus used the penalized principal components of heritability analysis proposed by Wang et al. [6], which can be applied to an arbitrary number of traits, to screen a large number of expression levels simultaneously for hot spots. Genome-wide multipoint linkage analysis was applied to the first few combined traits. Methods All analyses were performed on the GAW15 Problem 1 human gene expression data. Clustering analysis was applied to the 29 gene expression phenotypes found to have significant linkage results on chromosome 14 [2]. Principal-component analysis based on heritability [5] was performed to combine gene expression traits in each of the resulting clusters. A ridge-penalized principal-components approach based on heritability proposed by Wang et al. [6] was applied to the 150 gene expression levels with the highest heritability. Multipoint linkage analysis was carried out on each of the 29 individual traits on chromosome 14 as well as on several combined traits. Clustering analysis Here we proposed a clustering method that uses all subjects in the data set and incorporates family structure information by defining a distance measure that reflects the similarity of traits among family members. This distance measure is a sum of weighted family-specific mean trait differences. The weights are calculated from within-family trait sums-of-squares. When the trait values for subjects within a family are more similar, leading to a smaller within-family sum-of-squares, the difference in their trait means is more informative and is thus weighted more heavily. To be specific, let i index families and j index subjects, and let n i be the number of members in the i th family. The distance between trait x and trait y is then defined as a sum over families of the squared differences between the family-specific means of x and y, with family weights derived inversely from the within-family sums-of-squares of the two traits (a minimal sketch of one such distance appears below). This distance measure resembles the F statistic in the ANOVA test. The proposed clustering using all subjects was compared to standard hierarchical clustering using founders. Principal components of heritability The principal-components approach based on heritability proposed by Ott and Rabinowitz [5] exploited family structure information by defining principal components of heritability (PCH) as scores with maximal heritability, subject to the scores being orthogonal to each other. To be specific, a trait can be decomposed into a family-specific component and a subject-specific component. Instead of maximizing the total variation as in standard principal-components analysis, the PCH maximizes the relevant family-specific component variation relative to the subject-specific component variation. That is, the PCH is the solution to max_a (a'Ba)/(a'Wa), where B is the family-specific variation and W is the subject-specific variation. Note that this maximization criterion is equivalent to maximizing the heritability (the ratio of the family-specific variation to the total variation) of a score. Here we use the between-family sum-of-squares to estimate B, and the within-family sum-of-squares to estimate W. The first three PCHs are computed in each of the clusters found in the previous section. Penalized principal-components of heritability Without knowing which expression levels are regulated by a common gene, it may be desirable to apply the principal components of heritability approach to a large number of traits and evaluate which traits have significantly large loadings at linkage peaks.
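The display equations for the distance measure did not survive extraction, so the following Python sketch should be read as an assumed reconstruction of the described form (family-mean differences weighted inversely by within-family sums-of-squares), for illustration only; the exact weighting in the original paper may differ.

```python
# Assumed reconstruction (illustration only) of a family-weighted distance:
# squared family-mean differences, weighted by the inverse of the combined
# within-family sums-of-squares, so internally consistent families count more.
import numpy as np

def family_distance(x, y, fam, eps=1e-12):
    """x, y: trait values per subject; fam: family label per subject."""
    x, y, fam = map(np.asarray, (x, y, fam))
    d = 0.0
    for f in np.unique(fam):
        xi, yi = x[fam == f], y[fam == f]
        ssw = ((xi - xi.mean()) ** 2).sum() + ((yi - yi.mean()) ** 2).sum()
        d += (xi.mean() - yi.mean()) ** 2 / (ssw + eps)  # eps guards ssw == 0
    return d

fam = [1, 1, 1, 2, 2, 2]
x = [1.0, 1.1, 0.9, 3.0, 3.2, 2.8]
y = [1.0, 1.2, 1.1, 1.0, 0.9, 1.1]
print(family_distance(x, y, fam))
```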
However, the method of Ott and Rabinowitz [5] is not applicable to high-dimensional traits for two reasons: first, it does not account for the problem of overfitting, which is common with high-dimensional data; second, the sample within-family sum-of-squares (the estimate of W) can be singular and thus cannot be inverted. Although a generalized inverse can be used, the results will be highly unstable. In order to screen a large number of traits, we used a penalized principal components of heritability [6], defined as the solution to max_a (a'Ba)/(a'(W + λI)a), to stabilize the PCH. Here, λ is the tuning parameter. When λ is zero, the penalized PCH reduces to the PCH of Ott and Rabinowitz [5]; when λ approaches infinity, the penalized PCH approaches the score that maximizes the family-specific variation. In the latter case, the penalized PCH is close to the regular principal component applied to the founders. The λ is chosen by maximizing a cross-validated heritability [6]. We applied the penalized PCH to the 150 gene expression levels with the highest heritability. Linkage analysis Prior to linkage analysis, genotype consistency was checked by PEDCHECK. SNPs with Mendelian genotyping errors were set to missing. Multipoint linkage analyses were performed by SIBPAL in S.A.G.E. The weighting method used for different sibling pairs was 'W4' [7]. The Rutgers genetic map provided by Sung et al. [8] was used. Linkage results from S.A.G.E. were summarized by t statistics and p-values. Clustering analysis Standard hierarchical clustering computed from 56 founders is summarized in Figure 1a. The proposed family structure-based clustering computed from all subjects is summarized in Figure 1b. The first cluster tree was cut at 0.52, the threshold for correlation suggested in Morley et al. [2], and the second tree was cut such that each cluster would have at least two members. Permutation can also be used to determine the cut-off value. The classical clustering produced six groups, while the proposed clustering produced three. Nine out of the ten members of Cluster A were in the cluster of 14 genes mapped to the chromosome 14 hot spot reported by Morley et al. [2]; in comparison, all the members of Cluster D were in the same cluster as reported in Morley et al. [2]. Principal components of heritability Principal components of heritability were computed for traits in Clusters A through D. Results are presented in Table 1. The first three components in Cluster A explained 72% of the total heritability. The corresponding proportion of heritability explained by the first three components in Cluster D was slightly higher (74%). The highest proportion explained was 77% in Cluster B. Linkage analysis Genome-wide linkage analysis was performed on each of the 29 expression levels mapped to chromosome 14. The maximum t value for the peaks reached 7.25, which corresponded to a p-value of 1.22 × 10 -12 and a genome-wide p-value of 3.53 × 10 -9 [9]. Genome-wide linkage scans for principal components of heritability are summarized in Table 1 and Figure 2. Note that each cluster had at least one component with a peak on chromosome 14. Among the components with a peak on chromosome 14, the linkage evidence was stronger for components derived from the proposed clustering method (Clusters B, C, and D) than for the component derived from the standard method (Cluster A). For example, the peak t value for the component A.2 using the standard method was 3.01, while the peak t value for the corresponding Cluster D component was larger (Table 1).
Penalized principal components of heritability We applied the penalized PCH [6] to the 150 expression levels with the highest heritability. The cross-validation procedure suggested λ to be 7.88. Genome-wide linkage results of the first PCH are shown in Figure 3. There were two peaks, on chromosomes 5 and 20, with t values of 5.60 (194 cM) and 5.25 (4.8 cM), respectively. The peak on chromosome 20 was also identified by Morley et al. [2]. Discussion We proposed a clustering method applicable to correlated family data. The distance measure used for clustering takes into account the trait similarity among family members. Unlike standard hierarchical clustering, which only includes independent individuals, all the subjects in the data set contribute to the proposed method, which can potentially recover some of the information lost by restricting analysis to founders. The clustering followed by PCH and multipoint linkage analysis identified the peak on chromosome 14 reported by Morley et al. [2]. The linkage evidence on chromosome 14 was stronger for the components computed from the proposed clustering (p = 7.35 × 10 -8 ) than for the ones computed from the standard clustering (p = 1.00 × 10 -3 ). The penalized PCH approach applied to the 150 traits with the highest heritability identified a previously reported peak on chromosome 20 [2], suggesting it may be used to screen a large number of traits for hot spots. However, note that the penalized PCH cannot be used to determine which traits to include when traits are collinear. For example, for two perfectly correlated traits, the cross-validation procedure cannot distinguish which trait is more important than the other without prior information. Linkage analysis on a combined trait may give less significant results than on an individual trait after adjusting for multiple comparisons. This could be because the combined trait involves a linear combination of all traits, which is subject to more noise. However, when the marginal effect of a gene on each trait is moderate but the combined effect is large, investigating single traits separately may not identify the gene, while a multivariate method could reveal the joint effect of such a gene. Another possible reason for the less significant results of PCH might be that we used the within-family sum-of-squares to estimate the subject-specific component variation, so relatives' kinship relationships were not exploited. Such information can be added by incorporating kinship coefficients into a variance components model [10]. Figure 2: Linkage analysis of the principal components obtained from the standard method (Cluster A) and the proposed method (Cluster D). Figure 3: Linkage analysis of the penalized PCH approach applied to the 150 traits with the highest heritability.
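Both the PCH and its ridge-penalized variant reduce to generalized eigenvalue problems, which makes them easy to prototype. The following Python sketch (assumed, not the authors' code) illustrates the criterion max_a (a'Ba)/(a'(W + λI)a) on toy matrices.

```python
# Minimal sketch (assumed, not the authors' code): PCH loadings as a
# generalized eigenproblem; the ridge penalty replaces W with W + lambda*I.
import numpy as np
from scipy.linalg import eigh

def pch(B, W, lam=0.0):
    """Loadings maximizing a'Ba / a'(W + lam*I)a, sorted by decreasing ratio."""
    p = B.shape[0]
    vals, vecs = eigh(B, W + lam * np.eye(p))  # generalized symmetric eigenproblem
    return vals[::-1], vecs[:, ::-1]           # scipy returns ascending order

rng = np.random.default_rng(0)
B = np.cov(rng.normal(size=(50, 4)), rowvar=False) + np.eye(4)  # toy between-family
W = np.cov(rng.normal(size=(50, 4)), rowvar=False)              # toy within-family
ratios, loadings = pch(B, W, lam=0.1)
print(ratios[0], loadings[:, 0])  # first penalized PCH and its loading vector
```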
2014-10-01T00:00:00.000Z
2007-12-18T00:00:00.000
{ "year": 2007, "sha1": "3bbd624e0ace75874d8a2418349a7456bde6950c", "oa_license": "CCBY", "oa_url": "https://bmcproc.biomedcentral.com/track/pdf/10.1186/1753-6561-1-S1-S121", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7b646165dd2dfd697cc234fd740b200fb7b16d40", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
40818211
pes2o/s2orc
v3-fos-license
Effect of broken symmetry on resonant inelastic x-ray scattering from undoped cuprates We study the magnetic excitation spectra of resonant inelastic x-ray scattering (RIXS) at the $L$-edge from undoped cuprates beyond the fast collision approximation. We analyse the effect of the symmetry-breaking ground state on the RIXS process of the Heisenberg model by using a projection procedure. We derive the expressions of the scattering amplitude in both the one-magnon and two-magnon excitation channels. Each of them consists of isotropic and anisotropic contributions. The latter is a new finding, attributed to the long-range order of the ground state. The presence of anisotropic terms is supported by numerical calculations on a two-dimensional spin cluster. We express the RIXS spectra in the form of spin-correlation functions with the coefficients evaluated on the cluster, and calculate the functions in a two-dimensional system within the $1/S$ expansion. Due to the anisotropic terms, the spectral intensities are considerably enhanced around momentum transfer $\textbf{q}=0$ in both the one-magnon and two-magnon excitation channels. This finding may be experimentally confirmed by carefully examining the $\textbf{q}$-dependence of the spectra. I. INTRODUCTION Resonant inelastic x-ray scattering (RIXS) has attracted much interest as a useful tool to investigate excited states in solids 1 . L-edge RIXS experiments have recently been carried out with high energy resolution in transition-metal compounds, and have revealed magnetic excitations as spectral peaks in the low-energy region [2][3][4] . Starting from the undoped cuprates, the activity has spread rapidly and widely to doped high-T c cuprates 5-7 , nickelates 8 , pnictides 9 , 5d transition metal compounds 10,11 , and so on. Among them, the investigation of cuprates is one of the most active fields due to its relation to high-T c superconductivity. Stimulated by these experiments, theoretical efforts to elucidate the mechanism of magnetic RIXS in cuprates have also developed [12][13][14][15][16][17][18] . But the rich information contained in the L-edge RIXS data, such as the momentum- and energy-transfer dependence as well as the polarization dependence, continues to demand more reliable and convincing theories. The L-edge resonance in undoped cuprates is described by the second-order dipole-allowed process in which a 2p core electron is promoted to an empty x 2 − y 2 orbital by absorbing a photon and then an occupied 3d electron combines with the core hole by emitting a photon. When the 3d orbital in the photo-emitting process is different from the one in the photo-absorbing process, excitations within the 3d orbitals are brought about, which are called d-d excitations 19 . When the 3d orbitals in the photo-absorbing and photo-emitting processes are the same x 2 − y 2 orbital but their spins are different, magnetic excitations with spin flip can be generated 20,21 . Even if the spins are the same, spin-conserving excitations can be brought about by the presence of the core hole during its finite lifetime [12][13][14] . This process can be described only when it is treated beyond the fast collision approximation (FCA), which assumes that no relaxation takes place in the intermediate state because of the short core-hole lifetime 20,21 .
In our previous papers 12,13 , we have analysed the process leading to the final states in the second-order process, and have clarified how the spin excitations take place around the core-hole site beyond the FCA. In one dimension, the analysis has been straightforward, since the spherical symmetry in spin space remains intact in the ground state, while in two dimensions, in the antiferromagnetically ordered phase, the analysis has been rather complicated due to the breaking of spherical symmetry. In both cases, we have obtained the scattering amplitudes in an invariant form with respect to the polarization vectors of the incident and scattered x-rays and the spin operators. Disregarding possible effects of the symmetry-breaking ground state, we have obtained spin excitations extending to neighbours of the core-hole site. Such excitations have been clearly observed in the one-dimensional system CaCu2O3 22 and in the two-dimensional systems Sr2CuO2Cl2 4 and La2CuO4 23 . However, in the presence of the antiferromagnetic long-range order, it may be reasonable to presume that the scattering amplitudes include anisotropic terms associated with the direction of the staggered moment, since the second-order process could be affected by the anisotropy originating from the broken symmetry of the ground state. This observation contrasts with neutron scattering, in which the scattering amplitude is directly described by the interaction Hamiltonian between the spins of neutron and electron.

The purpose of this paper is to clarify the presence of anisotropic terms in the scattering amplitude in the presence of spin long-range order by analysing the second-order process on a model of undoped cuprates, where the low-lying excitations are described by the Heisenberg model. In the scattering amplitudes summarised in an invariant form, we obtain the anisotropic terms, which include a vector characterizing the staggered moment. To estimate the quantitative impact of the anisotropic terms, we evaluate them by carrying out a numerical analysis on spin clusters. For a cluster of 13 spins, which is regarded as a model of a two-dimensional cuprate, various terms in the scattering amplitudes are calculated. We verify that the anisotropic terms have finite contributions. If the connection of the anisotropic terms with the symmetry breaking of the ground state is intrinsic, the weights of the anisotropic terms are expected to increase with increasing antiferromagnetic long-range order parameter. This anticipation is confirmed by a numerical calculation on a ring of 12 spins with a varying external staggered magnetic field, which is given in Appendix C.

Collecting such amplitudes from all the Cu sites, we derive the RIXS spectra represented by spin correlation functions. When we investigate the correlation functions, analysis on a larger system is preferable, because the spin excitations propagate through the entire crystal in the final state. Thus, we employ the 1/S expansion of the spin operators 24 , which practically enables us to treat an infinite system. As a result, we can express the RIXS spectra in terms of the correlation functions of one-magnon and two-magnon contributions. Since two magnons are excited close to each other, their mutual interaction is important. We treat the multiple scattering of two magnons by following the method previously developed 12 .
It turns out that the correlation functions for both one-magnon and two-magnon channels have anisotropic contributions in addition to isotropic ones. We find that the anisotropic terms produce substantial enhancement of the RIXS intensities for momentum transfer q close to the Γ point in both channels. This is in sharp contrast to the fact that the contributions from the isotropic terms vanish at q = 0 in both channels. Experimentally, polarization analysis may help to clarify the existence of the anisotropic terms, since the polarization dependence is completely different between the one-magnon and the two-magnon spectra.

The present paper is organized as follows. In section II, we describe the second-order dipole-allowed process responsible for the RIXS process. In section III, we analyse the RIXS process paying attention to the influence of the symmetry breaking on the scattering amplitude, in which anisotropic terms are derived in an invariant form. In section IV, we evaluate numerically the amplitudes of creating excitations on a finite-size two-dimensional cluster under a molecular field on the boundary. In section V, we derive the RIXS spectra in terms of the spin-correlation functions, which are treated with the 1/S expansion of the spin operators. The RIXS spectra consisting of one-magnon and two-magnon excitations are calculated. Section VI is devoted to the concluding remarks. In Appendix A, absorption coefficients at the L2- and L3-edges are briefly discussed. A short comment on the projection procedure is given in Appendix B. In Appendix C, we show how the expansion coefficients develop with increasing staggered moment in a finite-size ring under the external staggered field. Appendix D outlines the 1/S expansion in the Heisenberg model.

[Figure 1 (caption, partly recovered): The incident photon promotes a 2p-core electron into the empty 3d state. The site of the excited electron is chosen as the origin of the crystal-fixed coordinate system with x, y, and z axes (also called the a, b, and c axes). The direction of the unit vector of the staggered magnetic moment at the origin is denoted e_m, which defines the spin coordinate system with x′, y′, and z′ axes; the spin quantization axis coincides with the z′ axis.]

II. SECOND-ORDER DIPOLE ALLOWED PROCESS

We briefly explain L-edge RIXS in cuprates (see figure 1). The RIXS process at the copper L-edge may be described by the electric dipole (E1) transition between the 2p-core states and the 3d states. The 2p states are split into two levels with total angular momentum j = 1/2 and 3/2, which are distinguished as the L2 and L3 edges, respectively, due to the strong spin-orbit interaction. Because each Cu atom has one hole in the x^2 − y^2 orbital in undoped cuprates such as La2CuO4 and Sr2CuO2Cl2, we employ a hole picture. Then, the E1 transition may be expressed by the interaction between photon and hole as (2.1), where c_{qα} annihilates a photon with four-vector q ≡ (q, ω_q) and polarization α. The h†_{i,jm} represents the creation operator of the 2p hole with jm at site i, and d_{iσ} denotes the annihilation operator of the 3d hole with the x^2 − y^2 orbital and spin σ at site i. The w is a constant proportional to ∫_0^∞ r^3 R_{3d}(r) R_{2p}(r) dr, where R_{3d}(r) and R_{2p}(r) are the radial wave-functions for the 3d and 2p states of the Cu atom. The D_α(jm, σ) describes the dependence of the E1 transition amplitude on the 2p core-hole angular momentum and the spin of the 3d hole.
In the E1 transition at the L2- and L3-edges, the initial photon having q = q_i, α = α_i excites a 2p core hole into the empty 3d state, which decays back into the 2p state by emitting the final photon having q = q_f, α = α_f. The RIXS spectra associated with this process may be expressed as (2.2), where |g⟩ and |f′⟩ represent the ground state and excited states of the matter, with energies E_g and E_{f′}, respectively. Note that f refers to the final state of the photon and f′ refers to the excited state of the electron. The |vac⟩ is the vacuum state for photons. The eigenstate and energy of the intermediate state are referred to as |n⟩ and E_n, respectively. Incidentally, since the final state in the absorption coefficient A(ω) is the intermediate state in the RIXS, the two quantities are directly related; the explicit form is summarised in Appendix A.

III. MAGNETIC EXCITATIONS AROUND THE CORE-HOLE SITE

In undoped cuprates, the low-energy excitations may be well described by the two-dimensional antiferromagnetic Heisenberg Hamiltonian on a square lattice, H_mag = J Σ_{⟨i,i′⟩} S_i · S_{i′}, where S_i denotes the spin one-half operator at site i, and ⟨i, i′⟩ indicates that the summation runs over nearest-neighbour pairs. Since our focus is not on a discussion of the magnetic dispersion, we have adopted the exchange interaction J only between the nearest-neighbour sites. In the thermodynamic limit, the ground state of H_mag on a square lattice is a spontaneously symmetry-broken phase, that is, an antiferromagnetic phase with long-range order. We write the ground state |g⟩ of H_mag as |g⟩ = |↑⟩|ψ^↑_0⟩ + |↓⟩|ψ^↓_0⟩, where |↑⟩ and |↓⟩ represent the spin states at the origin, and |ψ^↑_0⟩ and |ψ^↓_0⟩ are constructed from the bases of the rest of the spins.

We assume that a core hole is created at the origin as a result of absorbing a photon (figure 1). In the intermediate state, the spin degrees of freedom are lost at the core-hole site, since the 3d hole in the x^2 − y^2 orbital is annihilated by the 2p-3d dipole transition. Note that the Hamiltonian in the intermediate state is similar to that of a system with a non-magnetic impurity introduced into an antiferromagnet 25,26 . Denoting |φ_η⟩ as the eigenstate of the intermediate Hamiltonian with eigenvalue ε′_η, we can express the second-order amplitude in (2.2) as (3.6), where ε_g represents the ground-state energy of H_mag. The ε_core denotes the energy required to create the 3d^10 configuration and a 2p core hole in the state |jm⟩. The Γ stands for the lifetime broadening width of the core hole; Γ ∼ 0.3 eV at the Cu L3 edge. Notice that the scattering amplitude (3.6) and those investigated in the remainder of this section originate from the excitation of the single electron at the origin. The whole scattering intensity will be given by collecting the amplitudes from all Cu sites.

In the scattering amplitudes leading to those excited states, we seek the invariant form with respect to the polarization vectors α_i and α_f of the incident and scattered x-rays, the spin operators S_i, and the unit vector of the staggered moment e_m. To this end, it is convenient to consider a general situation in which e_m points in an arbitrary direction, which is denoted as the z′ axis. Then, for spin operators of the 3d electron, a coordinate frame with x′, y′, z′ axes is prepared (see figure 1(a)). On the other hand, the 3d orbitals as well as the spin and orbital of the core hole are described in the crystal-fixed coordinate frame with x, y, and z axes.
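Before turning to the symmetry analysis, the resolvent structure of the second-order amplitude (3.6), with the core-hole broadening Γ in the denominator, can be illustrated by a minimal numerical sketch. All energies and matrix elements below are toy placeholders; only the form of the denominator, (ω_i + ε_g − ε_n + iΓ)^{-1} summed over intermediate states, follows the text.

```python
import numpy as np

def second_order_amplitude(w_i, e_g, e_n, t_in, t_out, gamma):
    """Kramers-Heisenberg-type amplitude: sum over intermediate states |n>.

    t_in[n]  ~ <n|T|g>   (absorption matrix elements, placeholder values)
    t_out[n] ~ <f|T+|n>  (emission matrix elements, placeholder values)
    gamma    ~ core-hole lifetime broadening (Gamma ~ 0.3 eV at the Cu L3 edge)
    """
    denom = w_i + e_g - e_n + 1j * gamma
    return np.sum(t_out * t_in / denom)

# Toy numbers (hypothetical): three intermediate states
e_g = 0.0
e_n = np.array([1.0, 1.3, 2.1])      # intermediate-state energies
t_in = np.array([0.8, 0.5, 0.1])     # <n|T|g>
t_out = np.array([0.6, 0.4, 0.2])    # <f|T+|n>
for w_i in (0.9, 1.0, 1.1):          # scan incident energy across the resonance
    a = second_order_amplitude(w_i, e_g, e_n, t_in, t_out, gamma=0.3)
    print(f"w_i={w_i:.1f}  |A|^2={abs(a)**2:.4f}")
```

The finite Γ is what allows the intermediate state to relax before emission; setting Γ → ∞ would suppress this relaxation, which is the essence of the FCA limit.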
Since the definition of the spin coordinate system and that of the crystal-fixed system are independent, we can relate them by any method that describes the transformation from the one to the other. We adopt here the rotation by the Euler angles α, β, and γ as the transformation from the crystal-fixed to the spin coordinate system 27 . Our final formulae do not depend on the specific choice of the Euler angles. The D_µ(jm, σ) in this definition is given in table I of 12 . Then we introduce P^{(0)}_σ(j; α_f, α_i) and P^{(1)}_σ(j; α_f, α_i), where −σ represents ↓ and ↑ for σ = ↑ and ↓, respectively. The P^{(0)}_σ(j; α_f, α_i) and P^{(1)}_σ(j; α_f, α_i) correspond to the spin-conserving and the spin-flipping processes, respectively. The x-ray polarizations are along the x, y, and z axes defined by the original crystal axes. Since the following analysis is confined to the L3 edge, we fix j = 3/2 and omit this argument in the expressions of P^{(0)}_σ and P^{(1)}_σ. All of their non-zero values for j = 3/2 are listed in (3.8) and (3.9), where sgn(σ) gives +1 and −1 for σ = ↑ and ↓, respectively. Note that P^{(0)}_σ(α_f, α_i) and P^{(1)}_σ(α_f, α_i) are zero if α_i = z and/or α_f = z. This results from the fact that the process is restricted to the hole of the x^2 − y^2 orbital in the ground state.

A. Scattering channel with changing polarization

As seen from (3.8) and (3.9), both P^{(0)}_σ(α_f, α_i) and P^{(1)}_σ(α_f, α_i) have off-diagonal elements with respect to α_i and α_f. This implies that the scattering channel with changing photon polarization includes both the spin-flipping and spin-conserving processes. Let us investigate them separately in the following.

Spin-flipping process. The final state arising from the spin-flipping process may be written as (3.10). Assuming that the magnetic excitation associated with the creation of the core hole at site 0 has a local character around the core-hole site, we approximate |F⟩ by a linear combination of the states |ψ^{(±)}_1⟩ = S^±_0|g⟩ and |ψ^{(±)}_2⟩ = X^±|g⟩, where X = (1/z) Σ_j S_j with j running over the nearest-neighbour sites around the core-hole site. The number of nearest-neighbour sites z is four and two for two and one dimensions, respectively. Spin raising and lowering operators are defined on the core-hole site and on the neighbouring sites in the usual way. Since the |ψ^{(±)}_n⟩'s are neither orthogonal to each other nor normalized, we need to introduce the density matrices ρ̂_{i,j} ≡ ⟨ψ_i|ψ_j⟩. A procedure to determine the expansion coefficients is given in Appendix B, where the projection formalism is utilised. It may seem strange that a non-orthonormal set is used in the expansion. However, since the procedure described in Appendix B determines the expansion coefficients uniquely for a finite number of projected states, the non-orthonormal set has a one-to-one correspondence with some orthonormal set, obtained for instance by means of the Gram-Schmidt process. Since the physical meaning of each element of the non-orthonormal set is much clearer than that of the orthonormal one, we use the former. Then, the final state is approximately expressed as (3.11). This expression is rearranged as (3.12), with the coefficients given by (3.13) and (3.14).
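In practice, the projection onto the non-orthonormal set {|ψ_n⟩} described above (and in Appendix B) reduces to a linear solve with the Gram matrix ρ_{ij} = ⟨ψ_i|ψ_j⟩. A minimal sketch with random placeholder vectors, which are stand-ins for the actual spin states:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16                              # placeholder Hilbert-space dimension
psi = rng.normal(size=(2, dim))       # two non-orthonormal basis states |psi_1>, |psi_2>
target = rng.normal(size=dim)         # the state |F> to be expanded

gram = psi @ psi.T                    # rho_ij = <psi_i|psi_j>
rhs = psi @ target                    # <psi_i|F>
coeffs = np.linalg.solve(gram, rhs)   # expansion coefficients, unique if gram is regular

approx = coeffs @ psi                 # best approximation within span{|psi_n>}
residual = target - approx
# The residual is orthogonal to every basis state (projection property):
print(np.allclose(psi @ residual, 0.0))   # -> True
```

This makes concrete the remark in the text that the coefficients are determined uniquely for a finite number of projected states even though the set is not orthonormal.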
Let us examine each coefficient appearing in (3.12). We suppose that the core-hole site belongs to the 'up spin' sublattice. This does not mean S^+_0|g⟩ = 0 when |g⟩ is the symmetry-broken antiferromagnetic ground state; that is, the spin can be raised even at the 'up spin' site. Then, for example, if a spin-flip excitation takes place at the core-hole site, two channels, from up spin to down spin and vice versa, survive. Each channel experiences different surroundings in the intermediate state through the second-order process, which is materialized by the fact that the core hole has a finite lifetime. As a result, the two channels acquire different values of the coefficients. A similar explanation is also valid for the spin-flip process at the nearest-neighbour sites. In the presence of the antiferromagnetic long-range order, the coefficients f_{1,σ}(ω_i) for σ = ↑ are expected to be different from those for σ = ↓. Let us then divide them into isotropic parts f_n(ω_i) and anisotropic parts ∆_{⊥,n}(ω_i). It has been confirmed that ∆_{⊥,0}(ω_i) = ∆_{⊥,1}(ω_i) = 0 and f_{n,↑}(ω_i) = f_{n,↓}(ω_i) in the absence of the long-range order for a one-dimensional system 13 . Therefore, the ∆_{⊥,n}(ω_i) represent the anisotropic contributions. Rewriting (3.12) with the help of (3.9), we notice that (3.12) with the Euler angles α, β, γ constitutes an invariant form (see (3.18) in 12 for the isotropic terms). The result is given by (3.17), where α_{i⊥} and α_{f⊥}, respectively, are the polarization vectors of the incident and scattered photon projected onto the a-b plane. The operators S_{0⊥} and X_⊥, respectively, are S_0 and X projected onto the plane perpendicular to the direction of the staggered magnetic moment.

Spin-conserving process. According to (3.3), the spin-conserving process may be written in a form in which the off-diagonal elements with respect to the polarizations enter through P^{(0)}_σ(α_f, α_i). We approximate |F′⟩ by a linear combination of the states |ψ_1⟩ = |g⟩, |ψ_2⟩ = S^{z′}_0|g⟩, and |ψ_3⟩ = X^{z′}|g⟩. Since these states are neither orthogonal to each other nor normalized, we repeat the analysis that utilises the density matrix ρ̂_{i,j} ≡ ⟨ψ_i|ψ_j⟩. Hence the final state in this channel is approximately expressed as a linear combination of these states, which is rewritten as (3.20), where S_{0∥} and X_∥, respectively, represent the components of S_0 and X parallel to the direction of the staggered magnetic moment. Note that the amplitude associated with |ψ_1⟩ is omitted. The definition of the expansion coefficients g_n(ω_i) is inferred from the projection procedure in Appendix B. We have already evaluated g_0(ω_i) and g_1(ω_i) in the absence of long-range order 13,28 . Therefore, it is natural, in the presence of long-range order, to write them as g_n(ω_i) supplemented by anisotropic corrections ∆_{∥,n}(ω_i) (3.22). Here ∆_{∥,0}(ω_i) and ∆_{∥,1}(ω_i) correspond to the anisotropic contributions to the coefficients.

Combining the spin-conserving term (3.20) with the spin-flipping term (3.17), we finally obtain the invariant expression (3.23), which contains anisotropic terms such as ∆_{∥,0}(ω_i) e_m (e_m · S_0)|g⟩. The terms containing e_m represent the effect of the long-range order, that is, of the broken symmetry in spin space. If e_m is defined on the A sublattice and the same e_m is used on the B sublattice, ∆_{∥,0}(ω_i) and ∆_{∥,1}(ω_i), respectively, take the same value on both sublattices, while the values of the remaining anisotropic coefficients on sublattice B are obtained by changing the sign of those on sublattice A.

B. Scattering channel without changing polarization

Since P^{(0)}_σ(α_f, α_i) has non-zero diagonal elements with respect to α_i and α_f, (3.3) may also be expressed in a form describing the channel without change of polarization. We see that the FCA could not give rise to spin excitations in this process, because the diagonal element P^{(0)}_σ(α, α) is independent of σ. Since the total spin is conserved, |F_2⟩ may be expressed by |g⟩, S^{z′}_0|g⟩, X^{z′}|g⟩, S^{z′}_0 X^{z′}|g⟩, and (1/2)(S^+_0 X^− + S^−_0 X^+)|g⟩. Similarly to the procedure used in the preceding subsection, |F_2⟩ is approximated by a linear combination of these states with the help of the density matrix. Hence |F_2⟩ is approximately expressed as (3.25), where the amplitude associated with |g⟩ is omitted. The terms containing e_m represent the effect of broken symmetry in spin space.
The expansion coefficients for S^{z′}_0|g⟩ and X^{z′}|g⟩ are denoted as ∆^{(2)}_{∥,0}(ω_i) and ∆^{(2)}_{∥,1}(ω_i), respectively, while those defined for S^{z′}_0 X^{z′}|g⟩ and (1/2)(S^+_0 X^− + S^−_0 X^+)|g⟩ are divided into the isotropic term f_2(ω_i) and the anisotropic term Λ^{(2)}(ω_i). If e_m is defined on the A sublattice and the same e_m is used on the B sublattice, Λ^{(2)}(ω_i) takes the same value on both sublattices. On the other hand, the values of ∆^{(2)}_{∥,0}(ω_i) and ∆^{(2)}_{∥,1}(ω_i) on sublattice B, respectively, are obtained by changing the overall sign of those on sublattice A.

IV. EVALUATION OF THE COEFFICIENTS

The various coefficients defined in the preceding section can be evaluated by diagonalizing the Heisenberg Hamiltonian on finite-size clusters. Since the excitations are localized around the core-hole site, the calculation on small clusters may give reliable estimates of the coefficients. We consider a cluster of 13 spins, shown in figure 2. A complication is that an analysis on a finite-size cluster cannot provide a spontaneously symmetry-broken ground state. In order to break the spherical symmetry in spin space, we assume that the spins on the boundary are subjected to the molecular field −J|⟨S^{z′}_0⟩| per bond. The expectation value of S^{z′}_0 is determined self-consistently as ⟨S^{z′}_0⟩ = 0.394. The coefficients in the RIXS process are evaluated by diagonalizing the Hamiltonian matrices. Note that the coefficients have dimensions of (energy)^{-1}, as seen from the right-hand side of (3.3). The coefficients not shown there are small and will be neglected in the calculation of the RIXS spectra in the next section. As seen from (3.23) and (3.25), it is qualitatively obvious that the origin of the anisotropic terms, which include the unit vector representing the staggered moment (e_m), lies in the broken symmetry of the ground state in spin space. In a quantitative sense, the magnitudes of such terms are expected to grow with increasing staggered moment. This is confirmed in Appendix C for a finite-size ring of spins.

V. ANALYSIS OF RIXS SPECTRA FROM UNDOPED CUPRATES

Now we are in a position to calculate the RIXS spectra. It is preferable to treat a larger system, since the spin excitations propagate through the entire crystal in the final state. Thus, we employ the results of the 1/S expansion of the spin operators, which practically corresponds to taking into account an infinite-system effect as well as the interaction among the magnetic excitations. In doing so, we proceed by dividing the RIXS spectra into two channels, with and without changing photon polarization.

A. Scattering channel with changing polarization

Since α_{f⊥} and α_{i⊥} are polarization vectors projected onto the a-b plane, α_{f⊥} × α_{i⊥} is parallel to the c axis. In undoped cuprates such as La2CuO4 and Sr2CuO2Cl2, the staggered magnetization aligns along the (1, 1, 0) direction in the CuO2 plane 29 . Therefore the anisotropic terms proportional to e_m do not appear in this channel. We collect the remaining amplitudes from all Cu sites, where (3.23) is multiplied by the weight exp(iq · r_i) at the core-hole site r_i, with momentum transfer q ≡ q_i − q_f. Thereby we obtain the correlation function Y^{(1)}(ω_i; q, ω). Here the time-dependent counterpart of an arbitrary operator A is defined as A(t) = e^{iH_mag t} A e^{−iH_mag t}. The sublattice Fourier transforms of the spin operators are given by S_{a(b)}(−q) = (2/N) Σ_i S_i e^{−iq·r_i}, where the sum is taken over sites i on the A or B sublattice. The x′, y′, and z′ axes are defined as pointing along (0, 0, 1), (1, −1, 0), and (1, 1, 0), respectively.
The spin-flip excitations on the sites neighbouring the core hole are neglected, because their amplitudes are quite small. We expand the spin operators by means of magnon operators in the 1/S-expansion method, which is briefly summarised in Appendix D. In these expressions, momenta are defined within the first magnetic Brillouin zone (MBZ). When the momentum q lies outside the first MBZ, S_a(−q) and S_b(−q) are replaced by S_a([−q]) and sgn(γ_G) S_b([−q]), respectively, where q is put back into the first MBZ by a reciprocal lattice vector G; that is, q = [q] + G with [q] lying inside the first MBZ. The sgn(γ_k) denotes the sign of γ_k, where γ_k = (1/2)(cos k_x + cos k_y) with k in units of 1/a (a is the lattice constant). For example, γ_G = −1 for G = (π, π). With these notations, together with the magnon operators α†_{[−q]} and β†_{[−q]}, Z^{(1)}(ω_i; q) can be expressed in terms of the coefficients ℓ_q and x_q, whose definitions are found in Appendix D [(D10)]; use has been made of the relations ℓ_{−q} = ℓ_q and x_{−q} = x_q. Therefore Y^{(1)}(ω_i; q, ω) consists of a δ-function peak located at the renormalized one-magnon energy, where A = 0.1579 is the first-order correction in the 1/S expansion (see Appendix D) 30 .

Figure 3 shows Y^{(1)}(ω_i; q, ω) as a function of ω for q along the symmetry directions with Γ/J = 2.4. A notable aspect is that the intensities diverge at q = (0, 0) and (π, π). The corresponding integrated intensity I^{(1)}(ω_i; q) is given by (5.9). Figure 4 shows I^{(1)}(ω_i; q) for q along symmetry directions with Γ/J = 2.4. The intensities are enhanced around q = (0, 0) as 1/|q|. Since the contribution from the isotropic term vanishes around q ∼ 0, the enhancement is due to the finite values of the anisotropic coefficients. On the other hand, the divergence around q = (π, π) is brought about by the isotropic term, so that this behaviour is independent of the presence of the anisotropic terms. It has been observed in the RIXS experiments 2-4 that the intensity of the magnon peak increases significantly with q → 0. Such an increase is consistent with the effects of the anisotropic terms. So far, the increase of intensity has been interpreted simply as a contribution from elastic scattering. To confirm the effects of the anisotropic terms, it may be necessary to examine the spectra carefully, systematically subtracting the contribution of elastic scattering around q ∼ 0. It should be noted here that there exist non-linear terms which split the one-magnon excitation into three-magnon excitations in the second-order correction of the 1/S expansion 12,31 . Accordingly, Y^{(1)}(ω_i; q, ω) contains the energy continuum of the three-magnon excitations in addition to the δ-function peak mentioned above. The contribution from the three-magnon excitations grows gradually when q is near the boundary of the first MBZ. See figure 7 in 12 for such RIXS spectra.

B. Scattering channel without changing polarization

In order to calculate the RIXS intensity in this channel, we collect the amplitudes from all the Cu sites with the use of (3.25). The resulting correlation function Y^{(2)}(ω_i; q, ω) is defined by (5.14), where the sum over δ runs over the nearest-neighbour sites around site i. Expanding Z^{(2)}(ω_i; q) in terms of magnon operators within the 1/S expansion (see Appendix D), we obtain an expression with k running within the first MBZ; this expression is valid even when q lies outside the first MBZ. Note that when Λ^{(2)}(ω_i) = 0, N(ω_i; q, k) vanishes at q = (0, 0) and (π, π) 32 .
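Both channels are controlled by the lattice factor γ_q defined above and the magnon energies derived from it in Appendix D. Assuming the standard linear spin-wave dispersion of the square-lattice Heisenberg antiferromagnet, ω_q = 2J √(1 − γ_q²) for S = 1/2 and z = 4, renormalized by the factor 1 + A/(2S) (the precise form of the renormalization is an assumption here; only γ_q and A = 0.1579 are taken from the text), the position of the one-magnon peak can be traced along the symmetry path used in figures 3-6:

```python
import numpy as np

J, S, A = 1.0, 0.5, 0.1579            # A: first-order 1/S correction (square lattice)
Zc = 1.0 + A / (2 * S)                # assumed Oguchi-type renormalization factor

def gamma(q):
    # lattice factor gamma_q = (cos q_x + cos q_y)/2, momenta in units of 1/a
    return 0.5 * (np.cos(q[0]) + np.cos(q[1]))

def omega(q):
    # linear spin-wave dispersion with the assumed first-order renormalization
    return 2.0 * J * Zc * np.sqrt(max(1.0 - gamma(q) ** 2, 0.0))

# Trace the one-magnon peak along (0,0) -> (pi,0) -> (pi,pi)
path = [(0, 0), (np.pi / 2, 0), (np.pi, 0), (np.pi, np.pi / 2), (np.pi, np.pi)]
for q in path:
    print(f"q=({q[0]:.2f},{q[1]:.2f})  gamma={gamma(q):+.2f}  omega={omega(q):.3f} J")
```

The output reproduces the qualitative feature quoted in the text: the magnon energy vanishes both at q = (0, 0) and at q = (π, π), where γ_q = ±1, consistent with the divergences of the intensities at these points.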
Note also that the isotropic terms of the two-magnon part are the same as those obtained for the K-edge RIXS, where no anisotropic term exists [32][33][34] . From (5.15), we see that Y^{(2)}(ω_i; q, ω) consists of the energy continuum of the two-magnon excitations. Since two magnons are created at neighbouring sites through the x-ray scattering, inclusion of the magnon-magnon interaction is crucial for obtaining the spectral shape. As already discussed in 32 , the magnon-magnon interaction in the 1/S expansion can be cast into a separable form so that the t-matrix of the scattering is neatly evaluated. We resort to a similar evaluation. Figure 5 shows Y^{(2)}(ω_i; q, ω) as a function of ω for q along the symmetry directions. We find that a rapid enhancement of the intensity is brought about by the presence of the anisotropic terms as q goes to (0, 0). Without them, in contrast, the intensity diminishes in this limit, as shown in figure 8 of 12 . We see that the peak energy decreases as |q| approaches zero. At q = 0, the peak energy becomes very close to zero, ∼ 0.025 eV. It may be a difficult task to distinguish this spectral peak from the elastic peak. However, a careful study of the q-dependence of the spectra may clarify such effects of the anisotropic terms. The frequency-integrated intensity I^{(2)}(ω_i; q) is given by (5.17). Figure 6 shows I^{(2)}(ω_i; q) for q along symmetry directions. Notice that I^{(2)}(ω_i; q) diverges logarithmically as |q| approaches zero. This suggests that the logarithmic enhancement may already be recognizable in a region where q is away from (0, 0).

C. Polarization dependence

We consider a scattering geometry used in the experiments on La2CuO4 3 and Sr2CuO2Cl2 4 . It is schematically shown in figure 7 for q along the (0, 0)−(0, π) direction, where the angle between the incident and the scattered x-ray is fixed at 130 degrees. The scattering plane includes the b(y) and c(z) axes. Then α_{i⊥} = (1, 0, 0) for the σ polarization and α_{i⊥} = (0, α^π_i, 0) for the π polarization of the incident x-ray, while α_{f⊥} = (1, 0, 0) for the σ′ polarization and α_{f⊥} = (0, α^π_f, 0) for the π′ polarization of the scattered x-ray. Thereby the RIXS spectra may be expressed in a form in which the one-magnon term Y^{(1)} and the two-magnon term Y^{(2)} are separated by polarization. Accordingly, polarization analysis is useful to clarify the contribution of Y^{(2)}. For other directions of q, we can obtain similar formulas separated by polarization.
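The polarization decomposition in this geometry follows from elementary vector algebra. The sketch below constructs σ and π polarization vectors for the fixed 130° scattering angle and projects them onto the a-b plane; the incidence angle θ and the sign conventions are free (hypothetical) choices, while the 130° angle and the plane spanned by the b and c axes follow the text.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

theta = np.deg2rad(30.0)               # hypothetical incidence angle
two_theta = np.deg2rad(130.0)          # fixed scattering angle from the text

# scattering plane spanned by the b(y) and c(z) axes; sigma polarization along a(x)
k_i = unit(np.array([0.0, np.cos(theta), -np.sin(theta)]))
rot = np.array([[1, 0, 0],
                [0, np.cos(two_theta), -np.sin(two_theta)],
                [0, np.sin(two_theta),  np.cos(two_theta)]])
k_f = rot @ k_i                        # scattered beam, rotated by 130 deg in the plane

sigma = np.array([1.0, 0.0, 0.0])      # sigma polarization, normal to scattering plane
pi_i = unit(np.cross(k_i, sigma))      # pi polarization of the incident beam
pi_f = unit(np.cross(k_f, sigma))      # pi polarization of the scattered beam

project_ab = np.diag([1.0, 1.0, 0.0])  # projection onto the a-b plane
print("alpha_i_perp (pi):", project_ab @ pi_i)    # -> (0, alpha_i^pi, 0)
print("alpha_f_perp (pi):", project_ab @ pi_f)
print("alpha_perp (sigma):", project_ab @ sigma)  # -> (1, 0, 0)
```

The projected π vectors indeed take the form (0, α^π, 0) and the σ vector the form (1, 0, 0), matching the decomposition quoted above.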
VI. CONCLUDING REMARKS

We have studied the magnetic excitations in the L-edge RIXS from undoped cuprates beyond the FCA. The emphasis is on how the symmetry breaking of the ground state affects the magnetic RIXS spectra. It is found that the spin excitations are brought about at the neighbouring sites in addition to the core-hole site. We have shown that anisotropic terms emerge in the scattering amplitudes as a direct consequence of the broken symmetry. This contrasts sharply with the case of neutron scattering, where the amplitude is described through the interaction Hamiltonian between the spins of neutron and electron. The presence of such anisotropic terms has been supported by the calculation on a one-dimensional finite-size ring of spins under the staggered external field and on a two-dimensional cluster with the molecular field acting on the boundary.

Collecting such amplitudes from all the Cu sites, we have expressed the RIXS spectra in the form of spin correlation functions, which have been calculated within the 1/S expansion. The anisotropic terms considerably enhance the RIXS intensity as q goes to zero. Such enhancement could be confirmed experimentally by observing carefully the spectra around q = 0. With a little further improvement in energy resolution, the distinction would become possible, as achieved in Sr2IrO4, in which a band splitting predicted by theory 35 and conspicuous around q = (0, 0) was discerned by a recent experiment 36 . We believe our present emphasis on the anisotropic terms originating from the antiferromagnetic long-range order might be insightful when one analyses systems with short-range order, such as the doped high-T c cuprates [16][17][18] .

VII. ACKNOWLEDGMENTS

This work was partially supported by a Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology of the Japanese Government.

Appendix C: Expansion coefficients in a finite-size ring under a staggered field

[Figure caption fragment: the incident photon energy ω_i is set to give the maximum absorption coefficient.] We evaluate the coefficient of the spin-flip term defined in (3.19), and the other coefficients of the spin excitations. By setting H_ex = 0, we first evaluate the isotropic terms in the absence of the anisotropic terms. Table II shows the coefficients of the isotropic terms for several values of Γ/J, with ω_i fixed at the value giving the maximum absorption coefficient. The values Γ/J = 2.0 and 1.5 may correspond to CaCu2O3 37 and Sr2CuO3 38 , respectively. The coefficient f_2(ω_i) for the S_0 · X term is comparable to the coefficient f^{(1)}_0(ω_i) for the spin-flip term. It grows with decreasing Γ/J, as was discussed in 13 . Next, we turn our attention to the anisotropic terms. Figures 9(b) and (c) show the absolute values of the anisotropic coefficients |∆^{(1)}_{∥,0}|, |∆^{(1)}_{∥,1}|, and |∆^{(2)}_{∥,0}| as functions of the staggered moment for Γ/J = 2.0; the inset in panel (a) shows the staggered moment as a function of H_ex/J. They demonstrate that the anisotropic terms develop with increasing staggered moment, as expected. Note that the magnitudes of the isotropic terms vary gradually and slightly diminish, rather than increase, with increasing staggered moment, as shown in figure 9(a).

Appendix D: 1/S expansion

Here we briefly summarise the 1/S expansion, with emphasis on the definitions of the quantities used in the main text. The details are relegated to references such as 12 . Assuming two sublattices in the antiferromagnetic ground state, we express the spin operators by boson operators as in (D1)-(D4) 24 , where a_i and b_j are boson annihilation operators, and

f_ℓ(S) = √(1 − n_ℓ/(2S)) = 1 − (1/2) n_ℓ/(2S) − (1/8) (n_ℓ/(2S))² + · · · ,   (D5)

with n_ℓ representing a†_i a_i and b†_j b_j for ℓ = i and j, respectively. Indices i and j refer to sites on the A and B sublattices, respectively. Using (D1)-(D4), H_mag is expanded in powers of 1/S, where N and z are the number of lattice sites and the number of nearest-neighbour sites, respectively. At leading order, H^{(0)}_mag can be diagonalized in terms of magnon operators, with γ_k = (1/z) Σ_δ e^{ik·δ}, where δ connects the origin with the nearest-neighbour sites. The expression for H^{(1)}_mag involves a sum over momenta k_1, k_2, k_3, k_4 39 , with A = 0.1579 for the square lattice 30 . The Kronecker delta δ_G(k_1 + k_2 − k_3 − k_4) indicates the conservation of momenta up to a reciprocal lattice vector G. In the second term of (D12), only the relevant term representing the scattering of two magnons is shown explicitly.
The vertex function B^{(3)} in a symmetric parametrization, as well as the omitted terms, can be found in 31,39,40 .
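As a consistency check on the Holstein-Primakoff expansion (D5), the square-root kernel can be expanded symbolically. The short sketch below only verifies the Taylor coefficients quoted in (D5); x stands for n_ℓ/(2S).

```python
import sympy as sp

x = sp.symbols('x')                  # x stands for n_l/(2S)
f = sp.sqrt(1 - x)
series = sp.series(f, x, 0, 3).removeO()
print(series)                        # -> -x**2/8 - x/2 + 1, i.e. 1 - x/2 - x**2/8
```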
Relationship among local and functional factors in the development of denture stomatitis in denture wearers in northern Brazil

Objective: The aim of this study was to evaluate the relationship among functional and qualitative factors in the development of denture stomatitis (DS) (according to Newton's classification) in acrylic-based denture wearers resident in northern Brazil. Material and method: A total of 99 patients who wore partial or total acrylic resin-based upper dentures were included in this study. The subjects completed an epidemiological data form that included the patient's gender, age, local factors (hygiene habits, removal of the denture to sleep, use of mouthwash, present condition of the denture, age of the denture) and functional factors (vertical dimension at rest, vertical dimension of occlusion, occlusion, retention, and static and dynamic stability). To detect yeasts, samples were collected from the inner surface of the dentures and from the palatal mucosa in contact with it. Subsequently, the samples were cultured on Sabouraud dextrose agar, and macroscopic and microscopic characteristics were observed. Result: In the present study, we did not find any significant relationship between gender and disease onset. Based on the Newton classification, 36.3% of the patients presented with DS and 89.0% were colonized by yeasts; of these subjects, 50% had type I lesions, 33.3% had type II lesions, and 16.6% had type III lesions. All of the qualitative and local factors, except the use of mouthwash, were clinically relevant to the development of disease. Conclusion: Denture stomatitis in denture users in northern Brazil was multifactorial, involving local, functional and microbiological factors. Descriptors: Dentures; stomatitis; oral hygiene; Candida.

INTRODUCTION

The rehabilitation of totally or partially edentulous patients requires them to carefully adhere to prescribed clinical and laboratory regimens so that the dentures can integrate more harmoniously, thereby restoring the function and aesthetics of the stomatognathic system and preserving the oral mucosa and underlying bone structures 1 . Iatrogenic factors, such as trauma caused by ill-fitting dentures, poor hygiene, and inadequate occlusal dimensions, facilitate the onset of pathological processes in the oral cavity, the most common of which is denture stomatitis (DS) 2,3 . Other factors also contribute to the onset of disease, such as changes in the resin polymerization (even when the liquid-to-powder proportions and the polymerization cycles recommended by the manufacturer are followed); such areas are sites of disease onset because of pores that remain within the resin after compression and because of surface roughness, which favor the adherence and colonization of microorganisms 4,5 . Dağistan et al. 6 described DS as an inflammatory process that primarily involves the palatal mucosa (PM) when it is fully or partially covered by dentures, affecting 60-100% of acrylic denture users. Barbeau et al. 7 noted that the etiology of DS is multifactorial and includes advanced age, decline in the defense mechanisms of the immune system, systemic diseases, smoking, the use of dentures while sleeping, poor oral hygiene resulting in the accumulation of plaque on the denture, poorly fitting dentures, and functional factors related to the occlusion. Pattanaik et al.
1 reported that DS may be triggered by an allergy to residual resin monomers and is always associated with yeast of the genus Candida, particularly Candida albicans, which is a dimorphic fungus that has two major forms: a yeast form (commensal) and a hyphal form (pathogenic). C. albicans is frequently found in patients who wear full or partial dentures, immunocompromised patients, patients who have undergone antibiotic therapy, and patients who take medications that induce xerostomia 8 . Pattanaik et al. 1 reported that because the etiology of DS is multifactorial, the treatment is complex and must include the use of effective antifungals, denture removal while sleeping, and efficient control of biofilm. Patients with DS typically present the clinical signs described by Newton 9 , which may include unusual symptoms such as pain, halitosis, or an itching and burning sensation. These symptoms are often associated with C. albicans spp. that express high levels of exoenzymes, predominantly proteinases, which facilitate adhesion modulated by host factors such as saliva, pH, and other microorganisms in the oral environment 10 . In the present study we evaluated functional factors (such as vertical dimension at rest (VDR), vertical dimension of occlusion (VDO), occlusion, retention, and static and dynamic stability) and microbiological factors in thermopolymerized acrylic resin-based denture wearers, looking for possible correlations between these variables and the onset of DS.

Patients

The present study included 99 patients who wore partial (Kennedy class I 11 , up to four teeth) or total upper acrylic resin-based dentures and were examined at the School of Dentistry, Federal University of Pará, in 2012. This investigation was approved by the Research Ethics Committee at the Evandro Chagas Institute (CEP/IEC 032/10). All of the study participants signed an informed consent form. The subjects also filled out a form that included epidemiological data such as gender, age, local factors (i.e., hygienic habits, use at night, use of mouthwash, present denture condition, denture age), and functional factors (i.e., VDR, VDO, occlusion, retention, and static and dynamic stability). The hygiene evaluation was based on the presence of biofilm: a patient's hygiene was considered unsatisfactory when there was biofilm on the denture surface. The present condition of the dentures was considered unsatisfactory when there were fractures, loss of structures and/or teeth, or the presence of stains or wear. The VDR evaluation was performed using a Willis gauge (Jon Ltd., São Paulo, Brazil) with the patient at rest; its horizontal shafts were used to measure the distance from the base of the nose to the lower base of the chin, with the vertical shaft leaning against the chin of the patient. With the gauge still in position, the patients were asked to occlude, and the measurement thus obtained corresponded to the VDO. For the occlusion, the open- and closed-mandibular movements, laterality and protrusion were evaluated, and the occlusion was deemed satisfactory in patients who had denture stability during these movements (i.e., bilateral balanced occlusion). The retention and dynamic stability were considered satisfactory when there were no complaints of denture displacement during the normal function of the stomatognathic system (e.g., speech, swallowing, articulatory phonetics, or facial expressions).
For retention and static stability, gentle vertical and horizontal tension was applied to the incisors and in the premolar region, and slight pressure was applied using fingers placed against the soft tissue at the denture base (DB). The absence of movement and/or dislocation of the dentures was considered satisfactory for this examination. To diagnose DS, the following criteria proposed by Newton 9 were considered: type I, slight color change of the palatal mucosa (PM) to a punctate hyperemia; type II, diffuse hyperemia; and type III, granular hyperemia. The exclusion criteria eliminated people with diabetes or autoimmune diseases and those who used corticosteroids.

Mycological Examination

To determine whether yeasts were present, sterile swabs (Jiangsu Suyun Medical Materials Co., Ltd, China) were used to collect samples from the PM and the inner surface of the DB. The samples were cultured on Sabouraud dextrose agar (SDA) (Difco Laboratories, Detroit, MI), which was incubated at 35 °C and observed daily for 7 days. Afterwards, for colonies that exhibited characteristics suggestive of yeast growth, Gram staining was performed on a smear of the colony to ensure that there was no bacterial contamination and to confirm the yeast isolation. Identification at the genus level was performed according to Sidrim, Rocha 12 .

Statistical Analysis

The BioEstat version 5.3 software (Mamirauá Institute, Belém, Brazil) was used. Descriptive statistics and statistical inference on the results presented in this work were performed using nonparametric (chi-square) tests, with a significance level of p ≤ 0.05.

RESULT

Of the 99 patients enrolled (all between 29 and 83 years of age), 64 (64.6%) were female and 35 (35.4%) were male. Eighty-seven (87.8%) participants wore total dentures, and 12 (12.1%) wore partial free-end dentures. The average age of the dentures was 5.3 years. We found no evidence that gender was associated with disease (p = 0.4613). Yeasts were isolated both from the DB (58/99; 58.6%) and from the PM (39/99; 39.4%). Thirty-six (36.3%) patients showed signs of DS, 32 (32/36; 89.0%) of whom were colonized by yeasts; for four of them (11.0%), yeasts were not isolated. Of the patients with DS, 18 (18/36; 50%) had type I lesions, 12 (12/36; 33.3%) had type II lesions, and six (6/36; 16.6%) had type III lesions. Upon analysis of the DS patients who were colonized, we observed that both the DB and the PM of 24 (66.6%) of these subjects were colonized. The factors related to the occlusion and denture stability were significantly related to the development of DS (Table 1). The results revealed that all the local factors analyzed in this study influenced the development of DS, except for the use of mouthwash (p = 0.1719). After analyzing the qualitative and functional factors of the dentures, we observed that it was not possible to correlate the disease with a single factor, as presented in Table 1. The statistical tests indicated that the age of the prosthesis was directly proportional to the onset of DS (p = 0.0055) (Table 2).
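For reference, the kind of association test reported above (e.g., gender versus DS) can be reproduced with a chi-square test on a 2×2 contingency table. The cell counts below are hypothetical placeholders, since the paper reports only the marginal totals (64 females, 35 males, 36 DS cases); only those margins are taken from the study.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = gender (female, male), columns = DS (yes, no).
# Only the margins (64 female, 35 male, 36 DS cases) come from the study.
table = [[25, 39],
         [11, 24]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, dof = {dof}")
# With the threshold used in the paper, p <= 0.05 would indicate an association.
```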
DISCUSSION

In our study, there was no association between gender and DS, which is inconsistent with the findings of da Silva et al. 13 and Arnaud et al. 14 , who reported a significant association in females. We observed that the prevalence of DS was 36/99 (36.4%), which is similar to the results reported by Arnaud et al. 14 , who examined 174 patients wearing acrylic-based dentures, 35% of whom had DS. However, our results are inconsistent with the findings reported by Dağistan et al. 6 , who examined 70 patients and found that 70% had DS. The factors that contribute to DS are variable and have been associated with both local and systemic components 15 . Because of their chemical and physical properties, poly(methyl methacrylate) dentures facilitate the colonization of various microorganisms including Candida spp., which are normal fungal commensals and members of the normal oral cavity microbiota, and which can become pathogenic when local and systemic changes promote their proliferation 16 . Immunocompromised patients undergo changes in their oral environment that affect their immune response, resulting in the inability of the oral tissues to support the use of dentures 3 . Jeganathan, Lin 17 and Budtz-Jørgensen 18 reported that the interaction between Candida and oral bacteria promotes the onset of the disease when combined with local factors such as temperature, pH, the adhesive capacity of these microorganisms, and systemic components. The present study evaluated several local, qualitative factors such as oral and denture hygiene, the use of mouthwash, nighttime denture use, and the age of the dentures, as well as functional factors such as VDO, VDR, occlusion, retention, and static and dynamic stability. We noted that these factors alone could not lead to disease onset, but when combined with the presence of yeasts they promoted disease onset, which is consistent with the findings reported by Naik, Pai 16 . Cruz et al. 19 conducted a clinical trial comparing the efficacy of chemical and chemomechanical methods to clean dentures in denture wearers. The researchers reported that chemical methods alone do not reduce the amount of bacterial biofilm on dentures and that the chemomechanical method is more effective at removing biofilm. Their data reinforce our findings that mouthwash use did not prevent colonization by yeasts, although the present study did not include an in vitro analysis of the action of mouthwash on Candida colonization of dentures. However, our data disagree with the findings reported by Orsi et al. 20 and Işeri et al. 21 , who evaluated the action of antiseptics on denture surfaces and concluded that these agents are effective in eliminating Candida from the resin surface. Salerno et al. 15 stated that poor hygiene is one factor that promotes disease onset; they concluded that good hygiene alone alleviates the symptoms of DS and that the control of denture hygiene is essential for preventing relapse after antifungal treatments. Hygiene is therefore an important prophylactic measure against oral candidiasis manifestations such as DS. Ferreira et al. 22 reported a highly significant association between the failure of a patient to remove his/her dentures before sleep and the development of disease. These data are consistent with our findings that wearing dentures while sleeping promotes disease onset (Table 1). Contrary to our results, da Silva et al. 13 evaluated local factors in 102 patients and found that denture removal before sleep was not relevant to the onset of DS. Hadžić et al. 23 analyzed denture age and colonization and noted that age was a facilitating factor for colonization because of wear, roughness, and plaque accumulation, which is consistent with our findings (Table 2). We found that the denture condition is as important as its age, which was also noted by Naik, Pai 16 .
A study conducted by Garcia, Souza 24 evaluated the necessity of replacing or relining dentures 4 years after installation. These authors noted that this procedure was not necessary in most cases after 1 year but was necessary after 3 years of use. Based on these findings, we grouped the age of the dentures into 4-year spans and found that the age of the dentures was directly proportional to the onset of DS. A study conducted by Bomfim et al. 25 demonstrated that problems related to denture occlusion were significantly associated with the development of DS because of the increased likelihood of trauma that would cause tissue damage. These data are consistent with our findings that all of the functional factors promoted the onset of DS (Table 1). Overall, instructions regarding oral hygiene, denture maintenance, and regular visits to the dental surgeon are essential in maintaining the overall health of the oral cavity. Because dental surgeons can diagnose DS in their clinical practice and then either provide therapy or refer the individuals for treatment to resolve the causative factors, these patients are able to achieve increased comfort and a better quality of life. CONCLUSION Denture stomatitis in denture users in northern Brazil was multifactorial, involving local, functional and microbiological factors.
Remarks on the Behavior of an Agent-Based Model of Spatial Distribution of Species

Agent-based models have gained considerable prominence in ecological modeling, as well as in several other fields that seek the ability to capture the emergent behavior of a complex system in which individuals interact with each other and with their environment. These models are implemented by applying a bottom-up approach, where the entire behavior of the system emerges from the local interactions between its components (agents or individuals). Usually, these interactions between individuals and their enclosing environment are modeled by very simple local rules. From the conceptual point of view, another appealing characteristic of this simulation approach is that it is well aligned with reality whenever the system is composed of a multitude of individuals (behavioral units) that can be flexibly combined and placed in the environment. Due to their inherent flexibility, and despite their simplicity, it is necessary to pay attention to adjustments of their parameters, which may result in unforeseen changes in the overall behavior of these models. In this paper we study the behavior of an agent-based model of the spatial distribution of species, by analyzing the effects of the model parameters and the implications of the environmental variables (that compose the environment where the species lives) on the model's output. The presented experiments show that the behavior of the model depends mainly on the conditions of the environment where the species lives and on the main parameters of the species' life cycle.

Introduction

Agents have their own behaviors and act in order to accomplish a purpose. Agent-based models (ABM) describe individuals (agents) as unique and autonomous entities that normally interact with each other and their environment [1]. ABM are computational models that show how the dynamics of a system emerge from the interactions of its entities (agents) in a shared environment [2]. ABM have been applied in several areas such as ecology, biology, engineering, climate change, and many other fields [3][4][5]. In the ecological modeling field, agent-based models (also referred to as individual-based models) are simulation models that consider agents or individuals as unique and discrete entities with properties that change during their life cycle [6]. Normally, four classification criteria are taken into account to distinguish classical models from agent-based models in ecology: (1) the individuals' life cycle reflected in the model, (2) the considered resources (like food and habitat quality), (3) the representation of population size, and (4) the variability of individuals of the same age that is considered [7]. Agent-based models bring to the ecological modeling field the ability to simulate ecological phenomena (such as the distribution of species) in more realistic ways [8], making the management and conservation of species more effective. Several studies have shown how ABM have helped ecological modelers to create and simulate species distribution models in certain study areas, analyzing and comparing their results [9][10][11]. However, uncertainty related to ABM outputs and the production of more realistic model outputs remain challenges for modelers [3].
This paper presents the results of the analyses performed to study the effects that model parameters have on the behavior of an agent-based modeling approach designed to study the spatial distribution of species in actual and foreseen environmental scenarios. With that purpose, a series of simulations is run by modifying colonization scenarios in a simple heterogeneous environment. The remainder of this paper is organized as follows: in the second section we characterize our model by describing the model purpose and behavior, as well as the life cycle adopted by the model; in the third section we perform three experiments in order to analyze and compare the results of the model in three different scenarios that mimic common real situations; in the fourth section we discuss the results of our study and present the main conclusions.

Characterization of the Model

Agents can represent several entities that have behaviors and react according to their states and their environment at different granularities (different levels of observation of the environment) [12]. Seen from the outside of a system, a single agent can stand for a set of agents. For example, when simulating a complex system composed of smaller systems, each small system can be defined as an agent that internally represents a set of agents. This study considers an agent as a colony of individuals (instead of one particular individual or species) that depends on the suitability of the environment to establish itself. Notice that the purpose of this model is to analyze the spatial distribution of species in a heterogeneous habitat. A suitable environment can be seen as places (habitat units) with appropriate environmental conditions and enough resources for the species to survive and reproduce. The environment consists of habitat units or cells characterized by their location ((x, y) coordinates), the quantity of species at that location, and a suitability value of the cell. The suitability value of each cell lies between zero and one; cells with values close to one are more suitable for the species to survive and reproduce. An artificial environment was set up on a grid with dimensions of 200 × 200 cells. From the practical point of view, the characteristics of agents can be defined as follows [13][14]: -An agent is an identifiable, discrete, or modular individual with a set of characteristics and rules that drive its behavior and decision-making ability. Since we are interested in studying the spatial distribution of species, we conceptualized the agent as a square area on a geographical map. Each area has a number (possibly zero) of individuals of a given species. Each species has a number of attributes such as birth rate, death rate and spread rate. -An agent is autonomous and self-directed. An agent can function independently in its environment, interacting with other agents, for a limited range of situations of interest. In our model, the agents interact with their environment (habitat units or grid cells) in such a way that a percentage of the species population is transferred to their neighboring cells (agents) in each iteration. -An agent is social, interacting with other agents. Agents have an interaction protocol and communicate with one another, and they have the ability to recognize and distinguish the particularities of other agents. In our model each agent (cell) has access to the suitability of the neighboring cells as well as to the extent to which those cells are filled with the species population.
Each agent exchanges material with its neighbors. -An agent is situated in an external environment with which it interacts, in addition to other agents. In this work the agent's interaction with the environment is closely coupled with the maps of environmental variables that determine the suitability of the external environment. -An agent can be directed by objectives, having goals to achieve in relation to its behavior. This allows an agent to compare its results with the goals it wants to achieve. In our model the main objective of the agent is mostly encoded in the spread rate of the species: a greater value means that the species tries to colonize the entire environment, whereas a smaller one means that the species tries to establish colonies and settle in place. -An agent is flexible in the sense of having the ability to learn and adapt its behaviors based on experience (this requires some kind of memory). On the other hand, an agent may have rules that modify its behavior. In our model the rules of behavior depend entirely on the values of the environmental variables. Those determine the suitability of the surrounding environment; hence, species tend to survive and reproduce more widely in locations (cells) considered suitable. In the less suitable locations, the content of the cell will be depleted.

During the implementation we followed the ODD (Overview, Design concepts, Detail) protocol [15][16][17] for the description of the model. Some of its main components are briefly presented in the following.

Process Overview

The goal of the species is to move and establish itself (survive and reproduce) in more suitable places, where the suitability values are closer to one. In this process three main parameters are taken into account: birth rate, death rate and spread rate. These three parameters are independent of each other; however, the composite effect of these parameters on the model's output is observed. Algorithm 1 summarizes the species' life cycle and how it was implemented in this work. Underlying the model, it is assumed that there is a description of the suitability of the environment; its determination (a modeler's task) is outside the scope of this article. After setting the parameters of the model, the environment is initialized with the map of suitability. Similarly, the population of species is initialized; in this specific case, a random number of species is set in a randomly chosen cell. In each iteration (tick), birth and death rates are applied to the quantity of species in each cell. The birth and death of a species are affected by the suitability of the cell: for the same birth rate, the quantity of species grows faster in suitable places (higher suitability) than in places with low suitability, and likewise the death rate has a higher incidence in less suitable places. After that, the species tries to expand and colonize the neighboring cells, each of which receives a percentage of the quantity of species (determined by the spread rate), as the sketch below illustrates.
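The following is a minimal sketch of one tick of the life cycle just described, written from the textual description rather than from the authors' Algorithm 1. The exact functional coupling of the birth and death rates to suitability, the von Neumann (4-cell) neighbourhood, and the wrap-around boundary implied by np.roll are assumptions; the grid size, the per-cell cap of 1000, and the single random origin follow the text.

```python
import numpy as np

SIZE, MAX_POP = 200, 1000.0                     # grid dimension and per-cell cap from the text

rng = np.random.default_rng(42)
suitability = rng.random((SIZE, SIZE))          # stand-in for the suitability map in [0, 1]
population = np.zeros((SIZE, SIZE))
population[150, 100] = rng.uniform(1, MAX_POP)  # single origin cell with a random quantity

def tick(pop, birth=0.7, death=0.1, spread=0.05):
    """One iteration of the species' life cycle (assumed functional forms)."""
    pop = pop + birth * suitability * pop           # reproduction, scaled by suitability
    pop = pop - death * (1.0 - suitability) * pop   # mortality, heavier at low suitability
    # each of the four von Neumann neighbours receives a fraction `spread`
    gain = spread * (np.roll(pop, 1, 0) + np.roll(pop, -1, 0)
                     + np.roll(pop, 1, 1) + np.roll(pop, -1, 1))
    pop = pop - 4 * spread * pop + gain
    return np.clip(pop, 0.0, MAX_POP)               # enforce the relative maximum per cell
```

The update is vectorized over the whole grid, so one tick applies the same local rules to every cell simultaneously, which matches the bottom-up character of the model.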
Design Concepts The species life cycle consists of three main steps: (1) at each time step (tick) the species reproduce according to the birth rate and the conditions of their cell (suitability value); (2) an amount of species dies according to the death rate and the suitability of the cell (the lower the suitability, the more deaths); and (3) each neighboring cell receives an amount of individuals according to the spread rate. The model uses as input data environmental variables (maps) that influence the behavior of the species. These environmental values are arranged in a grid of cells, each of which contains a value normalized to the unit interval. The suitability map is composed of these environmental variables, see Figure 2. Selected Experiments and Results In the reported experiments only one cell in the environment is initialized (the species' origin) with a random quantity of individuals; the remaining cells are depleted of individuals. The location of the origin is randomly chosen among the cells with suitability values close to one (places where the species has a higher probability of surviving and reproducing). The model considers 1000 as the maximum relative quantity of species in each cell. Before drawing any conclusions regarding the model's behavior, several parameter combinations were tested and their results compared. Combinations are made between birth rate, death rate, and spread rate. For the birth and death rates the following values were chosen: 0.1, 0.3, 0.5, 0.7, and 0.9; for the spread rate: 0.03, 0.05, 0.07, 0.09. Three different experiments are reported. The first experiment, presented in [18], shows the effects of the main parameters of the model in a setup where only one environmental variable is considered as the determinant for the species' suitability. The environment is assumed to change smoothly from an area of high suitability (a level next to 1) towards a hostile area (a suitability level next to 0). Departing from a small population in a suitable area, the propagation in the environment is compared to the suitability map after an equilibrium state is reached. The next experiment introduces a second environmental variable as a way to mimic the presence of migratory routes or other corridors which are propitious to the development of a given species. The third setup shows the combined effect of two environmental variables, each one with a gradation from suitability to non-suitability in different directions. It is worthwhile to mention that these environmental variables are artificial and were created only for experimental purposes; however, their distribution, at least at a local level, is not far from situations that occur in real environments. Due to the high number of simulated scenarios, only a selected set of results is presented. One fundamental aspect of these experiments was deciding exactly when to stop each simulation (the stopping criterion). We ran several simulations in order to find the point where the system reached stabilization, i.e., no noticeable change between two consecutive states of the model. We analyzed the differences between two sequential states of the system (time t and time t-1). In our model, the difference between one state of the system and another lies in the quantity of species present in each cell. Thus, we calculate the sum of the cell-by-cell differences between these sequential states of the system.
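The threshold rule that the next paragraph describes can be sketched as follows, reusing the `tick` function and grids from the previous sketch; the threshold and patience values are illustrative assumptions, not values taken from the paper.

```python
def run_until_stable(pop, suit, threshold=1.0, patience=20, max_ticks=50_000):
    """Run `tick` until the summed cell-by-cell difference between two
    sequential states stays below `threshold` for `patience` consecutive
    ticks; returns the final grid and the tick count."""
    calm = 0
    for t in range(max_ticks):
        new_pop = tick(pop, suit)
        diff = np.abs(new_pop - pop).sum()  # sum of cell-by-cell differences
        pop = new_pop
        calm = calm + 1 if diff < threshold else 0
        if calm >= patience:
            return pop, t
    return pop, max_ticks
```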
The simulation was interrupted when this difference stayed below a small threshold for several ticks. Figure 2 depicts an environment that changes gradually from an area of high suitability (a level next to 1, at the bottom) towards a non-suitable area (a suitability level next to 0, at the top). After randomly placing the origin of species in a suitable environment, the simulation starts with a random quantity of individuals and the model then evolves according to the species' life cycle. In the following we present the results of our simulation scenarios, varying the spread rate over the values (A) 0.03, (B) 0.05, (C) 0.07, and (D) 0.09 while keeping the birth rate (0.7) and the death rate (0.1) fixed. These values were used not only for illustrative purposes but also to analyze the effect of the spread rate on the results of the model. Figure 3 shows the output of the model for different spread rates after reaching stability. As can be seen in Figure 3, species tend to establish themselves in locations where the environmental conditions are suitable for them to survive and reproduce. Excluding the scenarios where the species can neither survive nor reproduce, model outputs often follow the same pattern, although the capacity of species to expand varies according to the three parameters (birth rate, death rate, and spread rate). In a first approach, a visual comparison between these results (Figure 3) and the suitability map (Figure 2) reveals similarities between them: the model output follows the transition (gradation) present in the environment map. However, a visual comparison is not enough to draw conclusions about the model's behavior. Often, species did not survive when the birth rate and death rate were equal, nor in scenarios where the birth rate was less than the death rate. In order to analyze the output of the model under these different parameter combinations, Figure 4 depicts the comparison of the model's output in all scenarios with the suitability map (see the environment map in Figure 2). We converted the model output to the same scale (0, 1) as the environment map to facilitate comparison. The overall comparison technique, adapted from [19], was applied to each model output (a sketch of this comparison appears below). In Figure 4 it is possible to observe the scenarios with the lowest differences. The combination (birth rate = 0.5, death rate = 0.1, spread rate = 0.09) presented the lowest difference, followed by the combination (0.9, 0.3, 0.09) and the combination (0.5, 0.1, 0.07), in the same order of the rates. According to Figure 4, for death rates greater than or equal to 50%, even with a birth rate of 90% the chances of the species surviving are remote. On the other hand, at a birth rate of less than 20% species have few chances to survive and expand. In this regard we can say that for higher spread rates (subject to the hypothesis on the suitability of the species) the model can achieve a filling of the environment more consistent with the suitability map. Figure 5 shows the number of iterations necessary to reach a stable state for four different spread rates (everything else being equal). Observing Figure 5 we notice that at the beginning of the simulation the difference between two sequential states increases very quickly. The difference increases up to a certain number of iterations and then starts to decrease to the point where it stabilizes.
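The overall comparison technique used above can be sketched as below. This is one plausible reading of the technique adapted from [19] (rescale the output to (0, 1) and sum cell-by-cell differences against the suitability map), not a verified reimplementation of it.

```python
def overall_difference(pop, suit):
    """Sum of cell-by-cell absolute differences between the rescaled
    model output and the suitability map (both on the (0, 1) scale)."""
    scaled = pop / pop.max() if pop.max() > 0 else pop  # map output to (0, 1)
    return np.abs(scaled - suit).sum()
```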
Another interesting finding is that in our model a higher spread rate promotes quicker stability. Conceptualization of a Suitability Corridor For this experiment we considered the synthetic environmental variable presented in the previous section, see Figure 6-A, and we introduced a second environmental map, Figure 6-B, representing a suitability corridor (we can think of it as a migratory route, for instance). The combined suitability cell values were obtained by summing the values of the two environmental variables and subsequently normalizing to the unit interval, see Figure 6-C. According to Figure 7, species tend to colonize the entire environment. Unlike the previous maps (Figure 3), where there were no conditions for the species to expand at the top, in this particular case there is a set of suitable cells that allows the species to expand. Another factor that influences the expansion of the species to the top of the map is the suitability corridor (the vertical line). This corridor allows the species to reach the less suitable locations. The difference between birth rate and death rate (0.7 and 0.1) also has a significant impact on the colonization effect, and we can observe a larger filling of the map when the spread rate is lower, see Figure 7-A. Comparing Figure 7 with the suitability map (Figure 6-C) we can observe the same pattern in both: the transition (gradation) and the vertical line present in the suitability map are also observed in the model results. Figure 8 shows the cell-by-cell comparison between the model output (smooth gradation + suitability corridor) and the environment map. These model results were converted to the scale (0, 1) in order to facilitate the comparison with the suitability map (Figure 6). Comparing each simulation result (output) with the suitability map (Figure 8), we can verify that the combination (death rate = 0.1, birth rate = 0.5, spread rate = 0.09) presented the lowest difference, followed by the combination (0.1, 0.3, 0.03) and the combination (0.1, 0.5, 0.07), in the same order of the rates. Observing Figure 8, at death rates greater than or equal to 70% species do not survive, even with a birth rate greater than or equal to 90%. At birth rates of less than 20% the chances of the species surviving are remote. Contrary to the first experiment, the three best results were obtained with different spread rates, namely 0.09, 0.03, and 0.07. Figure 9 shows the number of iterations necessary to reach a stable state for four different spread rates. As in the first experiment, we observe in Figure 9 that the differences between two sequential states grow quickly until reaching their peak; they then decrease until the point of stabilization. Compared with the previous experiment, this experiment takes longer to converge due to the greater heterogeneity of the environment resulting from the combination of two environmental variables. These results show that the simulation with a spread rate equal to 0.03 (A) takes much longer to converge; it allows a larger filling of the map when the combination of birth rate and death rate is suitable for the species (for example, a birth rate equal to 0.7 and a death rate equal to 0.1).
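The composite suitability maps used in the second and third experiments (sum of two environmental variables, renormalized to the unit interval; the second variable obtained by a 90-degree rotation in the third experiment) can be sketched as follows. The corridor's width and placement here are illustrative assumptions.

```python
def combine(a, b):
    """Sum two environmental variables and renormalize to the unit interval."""
    s = a + b
    return (s - s.min()) / (s.max() - s.min())

# Experiment 2: smooth gradient plus a vertical suitability corridor
corridor = np.zeros((SIZE, SIZE))
corridor[:, SIZE // 2 - 2 : SIZE // 2 + 2] = 1.0
suit_corridor = combine(suitability, corridor)

# Experiment 3: the same gradient combined with its 90-degree rotation
suit_compound = combine(suitability, np.rot90(suitability))
```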
Compound Effect of Two Environmental Variables In this experiment we considered the environmental variable presented in the first experiment, see Figure 10-A, and we introduced a second variable by rotating this map 90°, resulting in a similar gradation but with a different orientation, see Figure 10-B. The suitability map was obtained by combining these two environmental variables, see Figure 10-C. In Figure 11, species occupy the places most suitable for them to stabilize and reproduce. Species tend to disappear in locations where the suitability values are low. We can observe in each panel (A, B, C, and D) the gradation pattern present in the suitability map. The impact of the spread rate is highly noticeable. In the resulting suitability map (Figure 10-C) the places least suitable for the species to survive are located at the top left; consequently, species do not reach these places. As we saw in previous experiments, species colonize more abundantly in the scenarios where the spread rate is lower. Figure 12 shows the cell-by-cell comparison between the model output (compound effect of two environmental variables) and the environment map. The simulation results allow us to verify that, for this experiment, the combination (death rate = 0.1, birth rate = 0.5, spread rate = 0.09) presented the lowest difference with respect to the suitability map, followed by the combination (0.1, 0.5, 0.07) and the combination (0.3, 0.9, 0.03), in the same order of rates. In Figure 12 we can verify that there are no chances for the species to survive or reproduce when the birth rate is equal to the death rate. The lowest differences can be observed for all four spread rates: 0.03, 0.05, 0.07, and 0.09. Figure 13 shows the number of iterations necessary to reach a stable state for four different spread rates. As the simulation proceeds, the differences between two sequential states gradually increase until reaching their peak. Then the differences start to decrease until they reach a point in the simulation where they remain in a very low range, which corresponds to the stabilization point, see Figure 13. As we observed in the previous experiments, the lower the spread rate, the longer the simulation takes to converge. Concluding Remarks In this study we analyzed the effects of an agent-based model's parameters on the spatial distribution of species by implementing an ABM able to deal with a heterogeneous environment represented by a combination of (environmental) variables of interest. We performed a parametric study in order to find the parameter combination that fits the purpose of our model. The results showed that, in addition to the environmental conditions, the combination of the model parameters has a significant impact on its results. Our study is limited in the sense that the environment of our model was not real; however, the initial conditions of the presented experiments are well aligned with a number of real local environmental constraints that we intend to explore in future studies for the prediction of the geographical distribution of biological species (both flora and fauna) with economic interest in a setting of environmental uncertainty. Model behavior and model outputs are deeply coupled with the chosen parameters and the selected environment. The parameters of the reported model are completely independent of each other in the sense that any adjustment made to one parameter does not affect the value of the remaining parameters.
However, any small change in a subset of parameters can result in drastic changes in the overall behavior of the model; the same happens if we change the environmental conditions. In order to better understand the model's behavior, it is necessary to perform a thorough parameter analysis and to identify the environmental variables that compose the environment and their values. It is a well-known fact that a comprehensive analysis of output-to-input variability is an important step during the development of an agent-based model [20]. Model parameterization allows the model to produce more realistic results [21]. Each parameter analyzed in our study has its own effect on the model. However, we cannot consider these parameters only individually; instead we must consider the effect that the combination of the different parameters has on the model's output. Discarding the effect of any one of the parameters would jeopardize the ability to explain the output of the model. One aspect to take into account is the distinction between birth rate and death rate. In order to observe reproduction, it is important to have a significant distinction between birth rate and death rate, always fixing the birth rate at a value greater than the death rate; this is the only case in which the species can survive and reproduce. However, without a spread rate there is no way for the species to expand (colonize) to other cells in the environment. Once the birth and death rates are chosen, the spread rate determines whether the species has a propensity to consolidate the occupied places or, instead, a greater predisposition to colonize new territories. The choice of parameters will always constrain the desired results. When using a model like the one described in this work, one must analyze several scenarios in order to find the parameter combination that answers the purpose of the reference model.
2021-05-21T16:57:41.022Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "c8cf254ff5da27f931d81b1ab0a4937895f23809", "oa_license": "CCBY", "oa_url": "http://aetic.theiaer.org/archive/v5/v5n2/p4.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2b9150491e419559efca12fb06140d903560232a", "s2fieldsofstudy": [ "Computer Science", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Computer Science" ] }
236942866
pes2o/s2orc
v3-fos-license
An Artificial Intelligence Approach for Forecasting Ebola Disease The abrupt explosion of the Ebola virus in 2014 in Western Africa was one of the world's most widespread and deadliest epidemics, with the highest number of casualties being reported in the regions of West and Central Africa. Ebola, a fatal hemorrhagic fever syndrome, is caused by the Ebola virus (EBOV). The World Health Organization proclaimed the disease a world healthcare crisis. In most cases, the patients are known to have died before the antibodies could respond. This indicates the need to improve upon the diagnosis and prediction techniques available for this disease. This paper aims to analyze and improve upon the accuracy of the prediction systems for the Ebola disease using several inputs. The input relies on the symptoms shown by the patient during the early stages of the disease. The data mining techniques employed to carry out this research include Decision Trees, Bagging classifier, KNN, Support Vector Machine, Stochastic Gradient Descent classifier, Logistic Regression, Random Forest, Gradient Boosting classifier, Ridge Classifier, and Hybrid Neural Networks. The hybrid models recommended in this study include the use of the Stochastic Gradient Descent, Random Forest, and KNN classifiers. The experimental results show the accuracy obtained by each classification technique and by the hybrid models that were applied to the dataset. Introduction The 2014-15 Ebola Disease outbreaks in West Africa sounded an alarm bell for a global healthcare crisis [2]. The World Health Organization Interim Assessment Panel identified the late initial response as one of the root causes of the massive death toll. As of 2019, the virus [19] has resurfaced in parts of the Democratic Republic of Congo [15]. Approximately 3,348 cases of Ebola infection have been identified, including 2,210 fatal cases. The disease is now being identified as one of the leading causes of the ongoing conflicts, socio-economic slowdown, and deteriorating development of the people of Africa [14]. Although the newly developed FDA-approved vaccine opens a possibility of curbing the spread of this disease, there is still a necessity for improved computational methods for the prognosis of the disease. Given the resurfacing of the disease as of December 2019, there is an ardent need to track the progression of the disease and use the currently available patient database to develop various disease-predicting models. Such models, which can identify the symptoms at the onset of the disease, would aid healthcare officials and workers in providing better and timelier healthcare treatment [10]. Data mining [18] plays a crucial role in drawing deep insights from large datasets. At present, the increased use of data mining techniques in fields such as healthcare has revolutionized the approach employed by working professionals to cater to a given problem. The publicly available laboratory and clinical datasets [16] comprise several tests essential to diagnose a particular disease. Data mining [20] has become an inevitable aspect of any research. This can be linked to the colossal amount of data being generated worldwide. Data mining can help industry experts to improve the efficiency of medical treatments, provide a better evaluation of the symptoms shown by patients, and consequently reduce the death toll across the globe.
The growing awareness of the various disease prediction models has urged researchers to apply existing prediction and classification techniques [21] to various disease databases. There have been efforts to improve disease detection by devising new and more efficient algorithms. One of the widely used techniques is the Naive Bayes classifier [9], which is based on Bayes' Theorem. In this classifier, it is presumed that the presence of any particular feature is not linked to any other feature; the features are assumed to contribute independently to the probability. This model proves its effectiveness in terms of simplicity, efficiency, and fast prediction for any dataset. Another classification technique used is the SVM classifier, which analyzes data for regression and classification. A supervised learning model, it uses a decision plane to separate objects with different class memberships. The SVM algorithm proves to be efficient when applied to complex problems such as text and image classification. Other widely used techniques include Decision Tree, Bagging Classifier, Random Forest, and so on. Research in Context: Additional significance of this study/research. The main contributions of this paper are as follows: - The application of various self-developed hybrid models to improve the disease detection process. - The proposed recommendation would prove highly beneficial to medical practitioners and laboratory experts in the early detection of the disease and in saving patients' lives. - The study examines the existing models and draws a comparison between these models and the proposed hybrid models. - Dataset generation from the various sources available. In this study, we employed various classification and hybrid models on the dataset made public by the World Health Organization. We propose a hybrid predictive model [11] to fulfill our objective of Ebola disease prognosis. In doing so, various classification techniques [8] were used with neural networks to develop hybrid models. A comparative study has been made to better understand the results [10]. We have made all of the resources employed in this paper publicly available to facilitate further research, development, and improved initial responses in case of an outbreak. The rest of the paper is structured into different sections as follows: Section 2 completes the literature review; Section 3 describes the data source; Section 4 illustrates the data mining techniques used; Section 5 proposes the predictive model; results and discussion are explained in Section 6; and Section 7 ends with the conclusion. Literature Review Numerous studies have been published which focus on Ebola disease prognosis using various classification algorithms and machine learning models. Various attempts have been made to understand the Ebola virus and classify it to help improve healthcare decision-making among healthcare professionals. S. Sharma and V. Mangat [6] primarily focused on applying data mining techniques to the dataset of the Ebola Disease Virus for classifying the disease and formulating a comparative study between this and various other epidemic diseases. The paper presented by them generalizes error and intraclass separability by applying the relevance vector machine (RVM) classifier. The authors classified the Ebola virus data on the pretext of its spread across the various continents; many continents across the globe were analyzed.
The RVM classifier was run by submitting various factors such as RVM weight and bias data, the testing feature vector, and group data. The matching RVM classification information returned was evaluated and the decision logic was derived. In 2016, Andres Colubri, Tom Silver, and their co-authors [3] employed a machine-learning-based framework and a self-developed app for prognosticating the health conditions of Ebola patients by analyzing the initial symptoms shown by the patient's body. They analyzed the problems caused by incomplete clinical data. Recognizing the need for mobile apps for clinical prognosis, the app demonstrated the derivation of actionable knowledge from systematic data collection to trigger improved decision-making among clinical and laboratory experts. Kanika Chuchra and Amit Chhabra [5] applied tree-based mining algorithms to an Ebola virus dataset. The results were further improved by filtering the dataset to remove noise. In addition, the authors worked with the J48, LMT, and REP algorithms. An unsupervised filter in combination with multiple algorithms was employed to yield better results. The tools used were WEKA and MATLAB. The experimental results highlighted that the use of the LMT classifier in combination with a Random tree provided better results, with an accuracy rate of 98.3193%. M. Jana Broadhurst, Tim J. G. Brooks, and Nira R. Pollock [1] described the progress and recent developments in diagnostic testing for the Ebola disease virus. They also studied the steps taken to deploy diagnostic laboratories in the region of the outbreak of the disease. Further, they explored the challenges faced during the various stages of on-field diagnosis to provide an extensive examination of the various diagnostic tests that have been employed to address the issue so far. Manu Anantpadma, Thomas Lane, and various other authors [4] elaborated upon previous Bayesian machine learning models, approved by the FDA, which were employed for the identification of various compounds that are active against the Ebola virus. After the identification of the active molecules, conclusions were drawn from the levels of tilorone (one of the active molecules). The application of the existing models along with their chemical knowledge provided a novel method to prioritize the compounds for in vitro testing. The study further explored the scope of applying such improved models and techniques to other pathogens. In 2015, Peng Zhang, Bin Chen, Liang Ma, and various other authors [7] emphasized that the accuracy and reliability of various experimental outcomes could be studied better with the aid of an artificial society. They demonstrated the construction of an artificial Beijing and an Ebola propagation model according to the conditions in West Africa. Further, the propagation nature of the virus along with epidemic conditions was analyzed and the corresponding results were presented. The study concluded that an Ebola outbreak could not occur in the city of Beijing. Data Mining Techniques: The data mining process involves steps such as selection, preprocessing, and transformation of the dataset, followed by data mining and evaluation of the accuracy obtained as the output. A crisp description of each classification technique is given below. The Decision Tree classification technique [17] is a two-step approach that includes constructing a tree and applying it to the dataset.
The process of distributing the given dataset into various subsets according to the attribute value test is recursively repeated on each subset. The recursive partitioning is completed when splitting the database no longer adds value to the predictions. The classification of instances starts at the root node, followed by testing the attribute of the node and moving along the corresponding tree branch. One of the most widely used supervised learning methods, this type of classification requires no domain knowledge and easily processes and handles numerical and categorical data. The Bagging Classifier is a bagging ensemble meta-estimator in scikit-learn. After accepting the base classifiers as input and fitting each of them on random subsets of the original dataset, it aggregates the individual predictions to provide the final prediction [22]. The Bagging Classifier can be used to reduce variance by randomizing the construction procedure and then producing an ensemble out of it. The K-nearest neighbors (KNN) classification technique is a type of supervised ML algorithm which works on the assumption that similar things tend to exist close to each other. It is run on the dataset with different values assigned to K; the value of K chosen should be the one that reduces the error count while simultaneously ensuring the algorithm's capability to make accurate predictions. The algorithm can be regarded as a non-parametric and lazy learning algorithm. KNN is preferred for its ease of implementation and simplicity, and it finds applications in a wide range of classification, regression, and search problems. Another classification technique, the Support Vector Machine (SVM), converts the given labeled data into an optimal hyperplane. The hyperplane divides the 2D space into two parts, where each part contains a class. The main objective of this kernel-based method is to locate a hyperplane in N-dimensional space for the classification of the data points [24]. It is known to provide better accuracy and easy handling of complex nonlinear data points. Equations (1) and (2) represent the rules for separating the dataset: equation (1), w · x + b ≥ +1, provides the positive-class hyperplane for all positive instances x which satisfy this rule; similarly, equation (2), w · x + b ≤ -1, provides the negative-class hyperplane for all negative instances x which satisfy this rule. Logistic regression as a classification technique assigns each observation to a discrete set of classes and returns a probability value as the output [12]. The output is transformed using the logistic sigmoid function and can be mapped to many discrete classes. One of the key aspects of logistic regression is the setting of the threshold value based on the values of precision and recall. It can be broadly classified into three types: binomial, multinomial, and ordinal logistic regression. Logistic regression makes use of the logistic function, an S-shaped curve used for mapping the input numbers to values between 0 and 1. The Stochastic Gradient Descent classifier [13] works as a regularized linear model, considering a few randomly selected samples instead of the whole data set. In its essence, Stochastic Gradient Descent considers only one random point at a time to change the weights. This type of classifier is efficient, easy to implement, and computationally less expensive. However, the path followed by this algorithm to reach the minima faces more noise compared with other Gradient Descent algorithms.
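As a small sketch of the separating-hyperplane rule in equations (1) and (2), the following uses scikit-learn's linear SVC on tiny illustrative data; the data points themselves are our assumption.

```python
import numpy as np
from sklearn.svm import SVC

# Two tiny, linearly separable classes (illustrative data)
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear").fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# The decision value w.x + b is >= +1 on the positive-class margin (eq. 1)
# and <= -1 on the negative-class margin (eq. 2).
print(X @ w + b)
print(clf.predict([[0.1, 0.0], [1.0, 0.9]]))
```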
One of the widely used ensemble methods, the Random Forest Classifier offers high accuracy and handles large numbers of features. A set of T regression trees is generated and trained using a bootstrap sampling technique. Each tree performs a class prediction and the class with the most votes becomes the final prediction [25]. The feature used for node partitioning is selected from a random subset of the original features. The Gradient Boosting classifier enables easy optimization of loss functions and works in a forward stage-wise manner to build an additive model. At each stage, regression trees are fit on the negative gradient of the multinomial/binomial deviance loss function. This classifier broadly consists of a loss function, a weak learner, and an additive model as its key components. It allows the optimization of a specific cost function specified by the user [23]. One of the regularization techniques, the Ridge classifier converts the target values into {-1, 1} and then treats the problem as a regression task. It performs generalized cross-validation. Data Source: The EVD clinical and laboratory database comprises 213 cases of Ebola infection reported in Sierra Leone in the year 2014. The data set comprises 106 positive cases, out of which 55 positive cases were taken. The data samples used in this study were collected from Harvard Dataverse (link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/292). Method: Artificial Neural Networks (ANN) are multi-layered mathematical models that consist of fully connected neural nets. They essentially comprise input, hidden, and output layers. The information stored in each neuron is represented by the weights associated with that neuron. Such networks can learn, recall, and generalize from a given database by assigning and adjusting the weights accordingly. The primary focus of this research is to build various hybrid models to provide a diagnosis of Ebola disease using the available dataset. To develop this system, a hybrid neural network was constructed containing 16 nodes with 3 layers (2 hidden layers). Using the ReLU activation function, the values from the last hidden layer (the 2nd hidden layer) were employed as input to the SGD, KNN, and Random Forest classifiers. When the neural network was applied to test data, an accuracy of 76% was obtained. In the SGD classifier, the value of alpha was set equal to 0.1 and epsilon was set equal to 1000. The model had a batch size of 81 and the number of epochs was set equal to 100. A multilayer perceptron, a type of artificial neural network, is made up of an input layer, an arbitrary number of hidden layers, and an output layer. After the signals are received by the input layer, the output layer predicts the output while the hidden layers function as the computational engine of the MLP. The input involves the multiplication of a vector x with weights w followed by the addition of a bias b: y = w * x + b. Results and Discussion The experimental results are compiled and explained in detail in this section. The main database was filtered to obtain the test cases; this was done to overcome the problem of missing values for many attributes in particular cases. This resulted in 81 test cases and 17 attributes. 0 and 1 are taken as the class labels for diagnosis. The study evaluated the various symptoms as attributes in the database. The attributes used are described below.
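Before turning to the accuracy comparison, here is a minimal sketch of the hybrid pipeline described in the Method section above: a network with two hidden layers whose last hidden-layer activations feed the SGD, KNN, and Random Forest classifiers. The paper says "16 nodes with 3 layers"; we assume 16 units per hidden layer, and the data below is a random stand-in for the 81 cases with 17 attributes.

```python
import numpy as np
from tensorflow import keras
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((81, 17))    # random stand-in for the 81 cases x 17 attributes
y = rng.integers(0, 2, 81)  # random stand-in for the 0/1 diagnosis labels

net = keras.Sequential([
    keras.layers.Input(shape=(17,)),
    keras.layers.Dense(16, activation="relu"),   # hidden layer 1
    keras.layers.Dense(16, activation="relu"),   # hidden layer 2
    keras.layers.Dense(1, activation="sigmoid"),
])
net.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
net.fit(X, y, batch_size=81, epochs=100, verbose=0)  # batch size/epochs from the text

# Re-use the trained network up to its 2nd hidden layer as a feature extractor
extractor = keras.Model(inputs=net.inputs, outputs=net.layers[-2].output)
features = extractor.predict(X, verbose=0)

# Feed the extracted features to the three downstream classifiers
for clf in (SGDClassifier(alpha=0.1), KNeighborsClassifier(), RandomForestClassifier()):
    clf.fit(features, y)
    print(type(clf).__name__, clf.score(features, y))
```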
Comparison of Accuracy of the Existing Classification Techniques: The various classification techniques used in this research yielded different accuracies when applied to the test data. The lowest accuracy was shown by the Decision Tree classifier, with 85.71% accuracy. Stochastic Gradient Descent and the Bagging classifier showed 90.47% accuracy. The accuracy of each classification technique is listed below. The study showed that, after employing the various classification techniques and hybrid models, the highest accuracy was obtained by the Random Forest classifier, with an accuracy of 100%. The highest accuracy among the hybrid models was shown by the Random Forest classifier and KNN classifier hybrid models with the neural network (96%). Conclusion The Ebola virus is a growing concern, especially for the African continent, with the death toll increasing exponentially each year. The decreased survival rate can be attributed to the multiple organ failure caused by the disease. In this research, the chances of a person being affected by the disease were predicted using various classification techniques. The collection of more data on the disease is encouraged to further facilitate better results and improved accuracy. In the improved technique, we employed the hybrid neural networks developed by us. The success of each classification technique was measured by the accuracy it achieved. The experimental results showed that 100% accuracy could be achieved by employing Random Forest as the classification technique. In the future, Ebola disease detection can be improved by enhancing the hybrid models employed in this paper.
2021-08-07T20:02:57.636Z
2021-08-01T00:00:00.000
{ "year": 2021, "sha1": "4d48c1b1a30b2747b988c59d50e41fd222ae8dd1", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1950/1/012038", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "4d48c1b1a30b2747b988c59d50e41fd222ae8dd1", "s2fieldsofstudy": [ "Medicine", "Computer Science", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
259352715
pes2o/s2orc
v3-fos-license
Antibiotic Consumption During the Coronavirus Disease 2019 Pandemic and Emergence of Carbapenemase-Producing Klebsiella pneumoniae Lineages Among Inpatients in a Chilean Hospital: A Time-Series Study and Phylogenomic Analysis Abstract Background The impact of coronavirus disease 2019 (COVID-19) on antimicrobial use (AU) and resistance has not been well evaluated in South America. These data are critical to inform national policies and clinical care. Methods At a tertiary hospital in Santiago, Chile, between 2018 and 2022, subdivided into pre- (3/2018–2/2020) and post–COVID-19 onset (3/2020–2/2022), we evaluated intravenous AU and frequency of carbapenem-resistant Enterobacterales (CRE). We grouped monthly AU (defined daily doses [DDD]/1000 patient-days) into broad-spectrum β-lactams, carbapenems, and colistin and used interrupted time-series analysis to compare AU during pre- and post-pandemic onset. We studied the frequency of carbapenemase-producing (CP) CRE and performed whole-genome sequencing analyses of all carbapenem-resistant (CR) Klebsiella pneumoniae (CRKpn) isolates collected during the study period. Results Compared with pre-pandemic, AU (DDD/1000 patient-days) significantly increased after the pandemic onset, from 78.1 to 142.5 (P < .001), 50.9 to 110.1 (P < .001), and 4.1 to 13.3 (P < .001) for broad-spectrum β-lactams, carbapenems, and colistin, respectively. The frequency of CP-CRE increased from 12.8% pre–COVID-19 to 51.9% after pandemic onset (P < .001). The most frequent CRE species in both periods was CRKpn (79.5% and 76.5%, respectively). The expansion of CP-CRE harboring blaNDM was particularly noticeable, increasing from 40% (n = 4/10) before to 73.6% (n = 39/53) after pandemic onset (P < .001). Our phylogenomic analyses revealed the emergence of two distinct genomic lineages of CP-CRKpn: ST45, harboring blaNDM, and ST1161, which carried blaKPC. Conclusions AU and the frequency of CP-CRE increased after COVID-19 onset. The increase in CP-CRKpn was driven by the emergence of novel genomic lineages. Our observations highlight the need to strengthen infection prevention and control and antimicrobial stewardship efforts. Antimicrobial resistance (AMR) constitutes a major health crisis causing substantial global disease and economic burden worldwide [1][2][3]. A recent report estimated 1.2 million deaths directly attributable to AMR in the year immediately prior to the emergence of coronavirus disease 2019 (COVID-19) [1]. Further, the impact of AMR is expected to increase, with estimates of approximately 10 million global annual AMR-related deaths by 2050 [4]. The World Health Organization (WHO) declared AMR as one of the most critical public health threats of the century [5]. COVID-19 led to a sharp increase in hospitalizations, a large proportion of which corresponded to high-complexity patients requiring admission to intensive care units (ICUs), invasive procedures, and prolonged hospital stays, in addition to shortages of healthcare personnel and protective equipment, especially early in the pandemic [6][7][8]. There is growing concern that COVID-19 might have resulted in higher antimicrobial use (AU) and in lapses in infection prevention and control (IPC) practices, both of which could have accelerated the spread of AMR [9][10][11][12].
Recent studies showed an escalation in AU during the pandemic, with up to 74.6% of patients with COVID-19 receiving one or more antibiotics [13,14], despite the relatively low occurrence of secondary bacterial coinfections [15,16]. The most frequently prescribed antibiotics were β-lactams (30%), fluoroquinolones (20%), and macrolides (18.9%) [13]. One study reported a significant increase in the use of broad-spectrum β-lactams (eg, cefepime, piperacillin/tazobactam, and carbapenems) and other last-resort antibiotics (eg, colistin and ceftazidime/avibactam) during the first pandemic peak [17]. Carbapenem-resistant Enterobacterales (CRE) are listed as critical-priority pathogens by the WHO [18]. A report from the US Centers for Disease Control and Prevention highlighted increases in both hospital-onset infections due to CRE and AU in inpatient settings during the first year of the pandemic [19]. Carbapenemase-producing (CP) CRE (CP-CRE) are particularly concerning as they harbor highly efficient enzymes often contained on mobile genetic elements that facilitate their spread, posing a daunting challenge for clinicians and IPC teams. A recent report alerted about an increased detection of CP-CRE after the COVID-19 pandemic in Latin America [20]. However, the magnitude of the impact of COVID-19 in the emergence of AMR remains unknown. In Chile, official reports have shown that the most important CRE is carbapenem-resistant (CR) Klebsiella pneumoniae (CRKpn), with a prevalence of approximately 35-40%. However, in contrast to other Latin American countries, the prevalence of CP-CRE prior to the pandemic was conspicuously low in Chile [21]. In this study, we evaluated the potential impact of the COVID-19 pandemic on AU and CRE. Moreover, we assessed the emergence of CP-CRE following the COVID-19 pandemic onset. Study Design and Sample Analysis We collected hospital-wide data on AU and the frequency of CRE isolation in a public tertiary-care hospital in Santiago, Chile, with 391 beds and a catchment area of approximately 423 000 population (annual hospital discharges: ∼24 300) from March 2018 until March 2022. For context, the first patient with COVID-19 in Chile was diagnosed on 3 March 2020, and antimicrobial stewardship and IPC practices remained unchanged during the pandemic. We compared two years before the pandemic (pre-COVID-19, March 2018-February 2020) with two years after the onset of COVID-19 in Chile (COVID-19, March 2020-February 2022), combining various datasets and analytical strategies. Data Collection and Processing Data were abstracted from the hospital's epidemiological and pharmacy records and included total number of beds, patient discharges, patient-days, and intravenous AU for all adult patients admitted to acute care wards during the study period. Acute care wards refer to any patient admitted from the emergency department or by a general practitioner, along with those electively admitted for a surgical procedure. Additionally, we obtained data on monthly ICU admissions and laboratory-confirmed COVID-19 discharges of adult subjects. Antimicrobial use was expressed in defined daily doses (DDDs) per 1000 patient-days and calculated for each intravenous compound as per WHO recommendations [22]. Antibiotics were classified into three groups: (1) broad-spectrum β-lactams (ie, ceftazidime, cefepime, piperacillin/tazobactam, ertapenem, meropenem, imipenem), (2) carbapenems (ie, imipenem, meropenem, and ertapenem), and (3) colistin, a drug frequently used against CP-CRE.
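As a hedged illustration of the DDD computation just described (total grams dispensed divided by the WHO-defined daily dose, normalized per 1000 patient-days), the sketch below uses illustrative DDD values; the authoritative figures are in the WHO ATC/DDD index and should be checked there, and the monthly quantities are hypothetical.

```python
# Illustrative WHO DDDs in grams (assumptions; verify against the ATC/DDD index)
WHO_DDD_GRAMS = {"meropenem": 3.0, "imipenem": 2.0, "ertapenem": 1.0,
                 "cefepime": 4.0, "ceftazidime": 4.0}

def ddd_per_1000_patient_days(grams_dispensed, patient_days):
    """Monthly DDDs per 1000 patient-days for each intravenous compound."""
    return {drug: grams / WHO_DDD_GRAMS[drug] / patient_days * 1000.0
            for drug, grams in grams_dispensed.items()}

# Hypothetical month: grams dispensed hospital-wide and total patient-days
month = {"meropenem": 540.0, "imipenem": 120.0, "cefepime": 800.0}
print(ddd_per_1000_patient_days(month, patient_days=9500.0))
```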
Antibiotics evaluated in the study are presented individually in Supplementary Figure 1. Throughout the study period, we prospectively collected all clinical CRE isolates (ie, nonsusceptible to ≥1 carbapenem as per Clinical and Laboratory Standards Institute [CLSI] 2022) recovered from invasive infections (ie, bloodstream, sterile fluids, or tissues). Isolates were sent to a central laboratory where species identification was reconfirmed by MALDI-TOF (matrix-assisted laser desorption/ionization-time of flight) mass spectrometry. The antibiotic susceptibility profile was reconfirmed using the disk diffusion method following CLSI 2022 [23]. Testing was performed using a multiplex polymerase chain reaction (PCR) designed to detect the three carbapenemases most frequently reported in the country (ie, Klebsiella pneumoniae carbapenemase [bla KPC], New Delhi metallo-β-lactamase [bla NDM], and Verona integron-encoded metallo-β-lactamase [bla VIM]) and was performed in all CRE isolates. Finally, given their high frequency and clinical relevance, we performed whole-genome sequencing (WGS) on all CRKpn isolates recovered during the study period. Statistical Analyses Descriptive statistics were used to visualize monthly AU, ICU admissions, and COVID-19 patient discharges. A second-order polynomial fit was adjusted to the data as it presented the best goodness-of-fit (according to the Akaike information criterion [AIC]). The AU rate for each antibiotic group expressed by DDDs per 1000 patient-days was compared between pre- and post-pandemic onset. To further understand AU over time, we calculated a baseline average monthly AU between March 2018 and February 2019. Using this information, we estimated the monthly percentage change for March 2019-February 2020 (pre-pandemic) and for the two years post-pandemic onset (March 2020-February 2022). We used interrupted time-series analyses for each antibiotic group [24,25] to evaluate the impact of COVID-19 on AU, adjusting for seasonality and autocorrelation. First, we logarithmically transformed AU rates to stabilize their variance over time and computed a first-order differentiation between consecutive time points to ensure stationarity. Subsequently, we tested autocorrelation and seasonality among AU group variables [25]. We used an autoregressive integrated moving average (ARIMA) approach through an automated algorithm, based on the best goodness-of-fit reported (eg, lowest AIC/Bayesian information criterion [BIC]), resulting in a seasonal ARIMA (1,0,0) (0,1,1) model [24]. A seasonal ARIMA model is classified as an ARIMA(p,d,q) x (P,D,Q), where (p,d,q) refers to the non-seasonal and (P,D,Q) to the seasonal component: p (or P) = number of (seasonal) autoregressive terms, d (or D) = number of (seasonal) differences, q (or Q) = number of (seasonal) moving average terms. The interrupted side of the model comprised step change and ramp components, derived from any random shift and slope changes in AU over time after the pandemic onset [25]. Finally, we generated a counterfactual scenario related to a hypothetical absence of the COVID-19 pandemic to contrast observed and estimated AU through the backward prediction of the time series as if no random shift and slope changes had existed. Analyses were conducted using R version 3.2.1 (R Foundation for Statistical Computing); a schematic sketch of this interrupted time-series setup is given after this passage. Whole-Genome Sequencing and Phylogenomic Analyses We performed WGS using Illumina MiSeq with the Illumina DNA library prep kit (Illumina, Inc).
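The interrupted time-series setup referenced above can be sketched as follows. The paper used R; this is a Python approximation using statsmodels' SARIMAX, with hypothetical monthly AU values, a step regressor for the level shift at pandemic onset, and a ramp regressor for the slope change.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly AU rates (DDD/1000 patient-days), Mar 2018 - Feb 2022
months = pd.date_range("2018-03-01", periods=48, freq="MS")
au = pd.Series(np.r_[np.full(24, 80.0), np.full(24, 140.0)]
               + np.random.default_rng(1).normal(0.0, 5.0, 48), index=months)

onset = months >= "2020-03-01"                       # pandemic onset in Chile
exog = pd.DataFrame({
    "step": onset.astype(int),                       # abrupt level shift
    "ramp": np.where(onset, np.arange(48) - 23, 0),  # slope change after onset
}, index=months)

# Seasonal ARIMA (1,0,0)(0,1,1)[12] on log-transformed rates, as in the text
res = SARIMAX(np.log(au), exog=exog, order=(1, 0, 0),
              seasonal_order=(0, 1, 1, 12)).fit(disp=False)
print(res.params[["step", "ramp"]])                  # interruption coefficients
```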
We used FASTQC and MultiQC to determine read quality, and Trimmomatic to pair the reads [26,27]. The genomes were assembled de novo with SPAdes, and the quality of the assemblies was assessed with QUAST [28,29]. We used MLST 2.19.0 [30] and ABRicate v1.0.1 to determine the sequence type (ST) and the presence of carbapenemases. We annotated genome assemblies with Bakta [31] and evaluated the pangenome using Roary v3.13.0 [32]. A maximum likelihood phylogenomic tree was built using a core genome definition of 99% with RAxML 8.2.12 [33]. Finally, a recombination-free phylogenomic tree was generated with ClonalFrameML v1.12 [34] and visualized with the interactive Tree Of Life (iTOL) tool [35]. Ethics Our study was approved by the Research Ethics Committee of the Clinica Alemana, Universidad del Desarrollo Faculty of Medicine (Institutional Review Board [IRB] 2021-24, Protocol number #UIEC1047). Hospital Characteristics and Epidemiological Analyses The first patient with COVID-19 in Chile was diagnosed in early March 2020 and the first pandemic wave peaked in June 2020 [36]. During this peak, our hospital discharged 530 patients with COVID-19 (Supplementary Figure 2A). The total number of beds and average monthly hospital discharges did not significantly vary during the study period. ICU admissions substantially increased after the pandemic onset, with an average of 11 and 25 ICU admissions in the pre- and post-pandemic periods, respectively (P < .001). Most ICU admissions (80%) during the pandemic period were of patients older than 60 years (Supplementary Figure 2B). Antibiotic Use Over Time and Impact of COVID-19 Compared with pre-COVID-19, we observed a significant increase in mean DDDs per 1000 patient-days during COVID-19, with an overall higher AU of broad-spectrum β-lactams (78.1 vs 142.5; P < .001), carbapenems (50.9 vs 110.1; P < .001), and colistin (4.1 vs 13.3; P < .001) (Figure 1 and Table 1). Noticeably, the highest surge in AU of broad-spectrum β-lactams, carbapenems, and colistin was observed approximately 12 months after the pandemic onset, peaking at 137%, 246%, and 705%, respectively (Figure 1B). The monthly variation in AU for individual antibiotics is provided in Supplementary Figures 1 and 3. Cefepime, ertapenem, imipenem, meropenem, and colistin drove the increasing trend in consumption among the different antibiotic groups (Supplementary Figure 3). The AU of colistin, imipenem, and meropenem increased after COVID-19's onset in the ICU and in general wards, but remained stable in the emergency department (Supplementary Figure 4). Discussion Understanding the drivers of AMR is critical to prevent the spread of multidrug-resistant organisms. Our data from a large public hospital in Chile show an association of the COVID-19 pandemic with increases in broad-spectrum antibiotic use and CRE infections. Notably, during the pandemic period we observed a significant increase in the proportion of CP-CRE, which was particularly relevant for CP-CRKpn, with an approximately 7-fold increase in isolates encoding bla KPC or bla NDM. This increase was driven by the appearance of two distinct genomic lineages of CP-CRKpn: ST1161 (harboring bla KPC-2) and ST45 (harboring bla NDM-7). The increase observed in CP-CRE, and especially in bla NDM-harboring organisms, which was previously uncommon in Chile, has been reported in other Latin American countries during the pandemic [20].
In October 2021, the Pan American Health Organization issued an alert on the emergence of and increase in new combinations of carbapenemases in Enterobacterales in the region [37]. Although we did not find CRE harboring more than one carbapenemase, several countries in Latin America have reported the detection of dual-producers after the pandemic [20]. The rapid dissemination of CP-CRKpn ST45 harboring bla NDM-7 observed in 2021 may suggest in-hospital transmission rather than multiple introductions. Hospitals from different regions reported challenges maintaining IPC practices, contributing to increases in healthcare-associated infections [38,39]. Importantly, as shown by our data and official reports, our study was performed in a setting of low CP-CRE prevalence pre-COVID-19 [21], which provides a perfect setting to assess the COVID-19 impact on the emergence of CP organisms. (Table note: ARIMA (1,0,0)(0,1,1). The AR term refers to the autoregressive order; the Ramp coefficient indicates the increment at each time point of the time series after the COVID-19 pandemic; the Step change coefficient indicates the increase immediately following the intervention; SM stands for seasonal moving average. The model used the logged form of the difference in antibiotic consumption over time (by group); hence, coefficients should be transformed for interpretation. The logged time series and autocorrelation functions were computed to indicate whether the time series was stationary. Our analysis of the model's residuals indicated they were uncorrelated and had a zero mean. Significance level, α = 5%.) Our phylogenomic analyses of CRKpn revealed that the increase in CP-CRKpn during 2021 was primarily driven by the emergence of two genomic lineages. ST1161 carried bla KPC-2, a class A enzyme frequently observed in CRKpn in different parts of the world. In contrast, strains of ST45 harbored bla NDM-7, a class B metallo-enzyme against which there are very few, if any, reliable therapeutic options. While bla NDM-7 was also found in CP-CRKpn from other genomic lineages (ie, ST25 and ST528), bla KPC-2 was only observed in ST1161, suggesting that bla NDM-7 could be located on a mobile genetic element that facilitates its horizontal transmission into different genomic lineages and perhaps species. Moreover, the fact that bla NDM was observed in non-K. pneumoniae CRE prior to the pandemic and increased during the pandemic, mainly driven by E. cloacae complex, may hint towards horizontal transmission of this genetic trait. The study of genomic platforms with long-read sequencing analyses and transmission dynamics is part of our future research endeavors. In addition to an increase in CP organisms, we observed an increase in AU after the pandemic onset. Our findings are consistent with previous reports from China suggesting that approximately 70% of patients with COVID-19 received antibiotic treatment during the early stages of the pandemic [40,41]. We observed a prolonged and consistent increase in broad-spectrum β-lactams, carbapenems, and colistin after the first pandemic wave. Antimicrobial use peaked soon after the first year since the pandemic onset and coincided with the increase in CP-CRE. While there is a temporal correlation, our data do not allow us to establish causality. Therefore, the role of the increases in AU in selecting for CRE in general, and CP-CRE in our hospital, remains unclear.
Several studies have demonstrated AU to be an independent risk factor for CRE colonization, including a meta-analysis focused on CRKpn [42,43]. Further studies are needed to evaluate the appropriateness and drivers of AU in the hospital and its role in the emergence of CP-CRE. Our study has several limitations. First, we only performed PCR detection for bla KPC, bla NDM, and bla VIM; therefore, it is possible that we missed other relevant carbapenemases, leading to an underestimation of the number of CP-CRE isolates. Indeed, a recent communication reported the first detection of bla OXA-48 in CRKpn and Escherichia coli in Chile during the pandemic [36]. However, we did perform WGS on all CRKpn and no other carbapenemases were observed in these analyses. Second, while we analyzed the genomes of all CRKpn (which were by far the most frequent bacterial species), our WGS data did not include other organisms (eg, CR-E. cloacae complex), limiting our ability to draw conclusions about relevant observations such as the expansion of bla NDM-harboring organisms. Third, our analyses are ecological by nature and preclude conclusions regarding any causal effect. Although AU is one of the main drivers of AMR [44], our data do not allow us to rule out the influence of confounding factors, therefore hampering our ability to establish direct causality between the AU increase and the emergence of CP-CRE. Despite these limitations, this is the first report examining the temporal association between COVID-19 and its impact on AU and AMR in Chile. It draws attention to the emergence of genomic lineages of CP-CRE that pose treatment challenges and emphasizes the need for improved antibiotic stewardship and enhanced IPC measures to prevent their spread within healthcare facilities. The use of genomic surveillance provides data to help understand whether there were multiple introductions of new strains or an expansion of a single strain, which hints towards healthcare transmission. It is not known whether bla KPC-2 ST1161 or bla NDM-7 ST45 CRKpn will spread rapidly within Chilean or South American hospitals, but increased vigilance will be warranted. In summary, our analyses show that the AU rate and AMR increased during COVID-19 surges in Chile. Additional studies are necessary to understand the specific ways in which the burden of the pandemic affected AU and AMR rates and whether the increases in AU observed in our data directly increased the risk of AMR among our population. Our findings also highlight the need to build capacity for IPC and antimicrobial stewardship programs. As we move into the next phase of the COVID-19 pandemic and recovery, it will be critical to emphasize the need for strong IPC programs, one of the cornerstones of a resilient healthcare system. Lesson Learned Strengthening our capabilities to ensure appropriate AU, rapid genome-based surveillance of emerging multidrug-resistant pathogens, and efficient IPC programs is crucial to tackle AMR in the future. Supplementary Data Supplementary materials are available at Clinical Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author.
Treatment-Seeking Behavior of Patients With COVID-19 and Its Related Factors in Central Sulawesi, Indonesia
Introduction
COVID-19 has become a global health problem since the World Health Organization reported a pneumonia cluster of unclear etiology in Wuhan City, Hubei Province, China, on December 31, 2019. In Central Sulawesi Province, Indonesia, the disease has spread throughout the districts/cities. As of April 12, 2022, 60 680 positive cases were confirmed, with 1716 deaths (CFR = 2.83%). The highest caseload was found in Palu City, with 13 121 cases and 239 deaths (1.79%) [1]. People with a history of contact with COVID-19 patients are recommended to quarantine for 14 days. Staying at home is the best option for preventing COVID-19 transmission [2]. The management of such patients should focus on preventing virus transmission and monitoring clinical conditions so that patients can be treated immediately in the hospital if needed [3]. Contacting a health care provider to treat symptoms of cough, fever, and difficulty breathing is rare in some cases. Treatment-seeking behavior is influenced by the availability of health services, perceptions of susceptibility and severity of disease, and the social and demographic characteristics of individuals [4]. This study was conducted to determine the treatment-seeking behavior of people with confirmed COVID-19 in Palu City and its related factors.
Methods
Observational research with a retrospective case series design was conducted to determine treatment-seeking behavior in 268 polymerase chain reaction-confirmed cases of COVID-19 between March 2020 and May 2021 in Palu City, using Pearson χ2 tests and logistic regression with SPSS version 18. The ethical approval of this study was obtained from the Health Research Ethics Committee of the Faculty of Medicine, Alkhairaat University, No 389/SR.KEPK/UA-FK/VI/2021.
Results
About 57.8% of patients sought care from health workers, 62.7% used self-treatment, 58.8% took traditional medicine, and 69% used a combination of treatments (Table 1). Multivariate logistic regression showed that only a history of diabetes and symptoms of shortness of breath were identified as independent factors associated with seeking treatment from health facilities.
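The adjusted odds ratios reported below come from the logistic regression described in the Methods. As a reminder of what such a figure means, here is a minimal sketch of how a crude odds ratio is computed from a 2×2 table; the counts are invented purely for illustration and are not study data.

```python
# Hypothetical 2x2 table for computing a crude odds ratio (OR).
# Rows: diabetes history (yes/no); columns: sought facility care (yes/no).
# These counts are illustrative, not data from this study.
a, b = 18, 4      # diabetes: sought care / did not
c, d = 137, 109   # no diabetes: sought care / did not

odds_exposed = a / b        # odds of seeking care, diabetes group
odds_unexposed = c / d      # odds of seeking care, no-diabetes group
odds_ratio = odds_exposed / odds_unexposed
print(f"crude OR = {odds_ratio:.2f}")  # ~3.58 with these invented counts
```

An adjusted OR from a multivariate logistic model is interpreted the same way, after the other covariates are held constant.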
Patients with a history of diabetes and shortness of breath had a greater chance of seeking treatment from health facilities, with odds ratios (OR) of 8.14 (P = .015) and 2.54 (P = .016), respectively. Cases with symptoms of fatigue had a greater chance (adjusted OR = 2.3, P = .014) of seeking traditional treatment. Patients with a history of diabetes had a greater chance (adjusted OR = 4.41, P = .018) of seeking self-treatment. Cases with fatigue symptoms were three times more likely to seek several types of treatment (adjusted OR = 3.27, P = .003). There was no association between demographic factors, hypertension, cardiovascular disease, asthma, chronic obstructive pulmonary disease (COPD), obesity, tuberculosis, fever, diarrhea, headache, chest pain, or loss of smell and treatment-seeking behavior, whether professional, traditional, self-directed, or combined (P > .05) (Table 2).
Discussion
More than half of the COVID-19 cases sought treatment at health facilities, used self-treatment or traditional medicine, or combined the three treatments. Self-treatment by patients experiencing COVID-19-like symptoms, by purchasing medicines at pharmacies/stalls, is common but may increase the risk of developing more severe symptoms, especially in patients with comorbidities such as diabetes, chronic heart disease, hypertension, and chronic liver disease [5]. Elderly, diabetic, hypertensive, and obese patients are at increased risk of severe illness and death from COVID-19 infection [6]. Diabetes and shortness of breath were associated with seeking treatment from health workers in the public health center and/or hospitals. Diabetes is one of the most common comorbid diseases found in COVID-19 patients [7]. This behavior reflects the patient's awareness of the importance and risk of respiratory involvement and of taking the necessary precautions. A study in Iran showed similar results: symptoms of shortness of breath and a history of respiratory illness were related to the search for professional treatment [8]. Fatigue was the only symptom associated with seeking traditional treatment and a combination of treatments. Traditional Chinese medicine is one of the oldest medical practices globally, encompassing a wide variety of approaches, ranging from herbs and acupuncture to Tai Chi. Traditional Chinese medicine must be combined with modern treatment in treating COVID-19 patients [9]. A history of diabetes, dry cough, and fatigue was also associated with self-medication. In Russia, about two-thirds of COVID-19 patients first decide on self-care at home; patients were thought to be in critical condition by the time they were hospitalized, owing to slow care by health workers [10]. Improper use of drugs affects susceptibility to disease and has more severe consequences in patients with a history of congenital disease. However, we found no association between demographic factors, hypertension, cardiovascular disease, asthma, COPD, obesity, tuberculosis, fever, diarrhea, headaches, chest pain, or loss of smell and treatment-seeking behavior, whether from health workers, self-medication, traditional medicine, or a combination of the three treatments. The COVID-19 virus affects all ages, but evidence suggests that two groups of people are at higher risk of developing severe COVID-19 disease: elderly groups and those with congenital diseases risk severe illness and death from COVID-19 infection [6]. Thus, sensitizing patients with congenital diseases and of older age to seek professional treatment is crucial.
Conclusion
Most COVID-19 cases sought treatment at health facilities, used self-treatment or traditional medicine, or combined the three treatments. Moreover, only diabetes and symptoms of shortness of breath, dry cough, and fatigue were associated with treatment-seeking behavior. However, we did not find any predictors of treatment-seeking behavior among elderly patients or those with hypertension, obesity, or asthma, although these groups carry a risk of severe illness and death from COVID-19. Thus, behavioral change interventions in cases with a history of comorbidity and severe symptoms such as shortness of breath are essential for improving treatment-seeking behavior.
Author Contributions
All authors are the main contributors. M.A.N., A.M.A., W.H., L.S., N.D., H.A., O., and N.N.V. are responsible for drafting the original article, analyzing data, designing the study, methodology, investigation, and writing a review. All authors read and approved the final manuscript.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Perinatal outcome of emergency cesarean section under neuraxial anesthesia versus general anesthesia: a seven-year retrospective analysis
Objective An emergency cesarean section (CS), performed when conditions are extremely life-threatening to the mother or fetus, must be completed within an adequate time frame to avoid adverse fetal-maternal outcomes. An effective and vigilant anesthesia technique remains vital for emergency cesarean delivery. Therefore, this study aimed to evaluate the impact of different anesthesia approaches on maternal and neonatal outcomes. Method This was a retrospective cohort study of parturients who underwent emergency CS under general or neuraxial anesthesia between January 2015 and July 2021 at our institution. The 5-min Apgar score was documented as the primary outcome. Secondary outcomes, including the 1-min Apgar score, decision-to-delivery interval (DDI), onset of anesthesia to incision interval (OAII), decision to incision interval (DII), duration of operation, length of hospitalization, height and weight of the newborn, use of vasopressors, blood loss, neonatal resuscitation rate, admission to the neonatal intensive care unit (NICU), duration of NICU stay, and complications, were also measured. Results Of the 539 patients included in the analysis, 337 CSs were performed under general anesthesia (GA), 137 under epidural anesthesia (EA) and 65 under combined spinal-epidural anesthesia (CSEA). The Apgar scores at 1 min and 5 min in newborns receiving GA were lower than those in newborns receiving intraspinal anesthesia, and no difference was found between EA and CSEA. The DDIs of parturients under GA, EA, and CSEA were 7 [6, 7], 6 [6, 7], and 14 [11.5, 20.5] min, respectively. The DDI and DII of GA and EA were shorter than those of CSEA, and the DDI and DII were similar between GA and EA. Compared to that in the GA group, the OAII in the intraspinal anesthesia groups was significantly greater. GA administration correlated with more frequent resuscitative interventions, increased NICU admission rates, and a greater incidence of neonatal respiratory distress syndrome (NRDS). Nevertheless, the duration of NICU stay and the incidence rates of neonatal hypoxic ischemic encephalopathy (HIE) and pneumonia did not differ significantly by type of anesthesia. Conclusion Compared with general anesthesia, epidural anesthesia may not be associated with a negative impact on neonatal or maternal outcomes and could be utilized as an alternative to general anesthesia in our selected patient population undergoing emergency cesarean section. In addition, a comparably short DDI was achieved for emergency cesarean delivery under epidural anesthesia when compared to general anesthesia in our study. However, the possibility that selection bias related to the retrospective study design may have influenced the results cannot be excluded.
Introduction
As one of the most commonly performed surgical procedures for parturients, emergency cesarean section (CS) represents the escalation of an obstetric emergency resulting from life-threatening conditions for the newborn and/or the mother [1]. Therefore, given the restricted time coupled with the increased risk, the choice of anesthesia technique is highly important for improving the fetal-maternal prognosis [2]. While GA is widely accepted in urgent situations due to its advantages of rapid induction and a shortened DDI, the procedure has several underlying side effects, including failed intubation and aspiration in high-risk populations and worse umbilical arterial pH and base excess [3]. Despite these potential risks, a retrospective survey reported GA as the first preference for emergency CS at the surveyed institution. Compared to GA, neuraxial anesthesia, recommended by the UK National Institute for Health and Clinical Excellence, has a more favorable safety profile for pregnant women indicated for emergency CS [4], with advantages including avoidance of potential complications, of the difficult airway, and of neonatal exposure to anesthetic drugs used for intubation and maintenance of GA [5]. Therefore, regional anesthesia is increasingly the preferred anesthetic technique for pregnant women undergoing emergency CS in most hospitals.
Surgical anesthesia can be established via epidural anesthesia with a well-functioning epidural catheter or via rapid sequence spinal anesthesia for emergency CS, during which the onset speed of local anesthetic drugs plays an important role [6]. Remarkably, recommendations regarding the choice of local anesthetics or adjuvants, with respect to the optimal type and dose for shortening onset and providing high-quality anesthesia, have been vague. Lignocaine (2%), 2-chloroprocaine (3%), 0.75% ropivacaine, and 0.5% bupivacaine are commonly used for cesarean delivery. The median top-up volume ranged from 16 to 19 ml for lidocaine, ropivacaine, and chloroprocaine [7]. A Bayesian network meta-analysis suggested that the onset of surgical anesthesia seemed fastest after epidural lidocaine 2% with bicarbonate, followed by 2-chloroprocaine 3% and lidocaine 2% [8]. In addition, the inclusion of adjuvants composed of opioids (fentanyl, sufentanil, and morphine) or α2-agonists (clonidine and dexmedetomidine) can result in a faster onset of anesthesia, a decreased dose of intrathecal local anesthetics, and a decreased occurrence of adverse events from these drugs [9]. Fentanyl combined with local anesthetics at an epidural dose of 50-75 µg or an intrathecal dose of 10-25 µg further decreased the onset time by a mean difference of more than 2 min and prolonged the postoperative analgesia duration to approximately 3-4 h [10,11]. In addition to these advantages, α2-agonists, as additives to local anesthetics, can reduce side effects in CS patients, including shivering, nausea, and vomiting [11].
Although a large number of emergency CS procedures are performed each year, to date there is no consensus regarding the best choice of anesthesia method for emergency CS. Hence, we conducted this study to identify the influence of different types of anesthesia on maternal and neonatal outcomes and the discrepancies in DDI.
Study design and data sources
The study was conducted according to the principles of the Declaration of Helsinki. The study protocol was authorized by the Medical Ethics Committee of Nanjing Women and Children's Healthcare Hospital on July 8th, 2021 (2021KY023) and was registered in the Chinese Clinical Trial Registry on August 16th, 2021 (ChiCTR2100050120). This retrospective, single-center cohort study included all patients scheduled for consecutive nonelective emergency CS from January 2015 through July 2021 and was performed at the Nanjing Women and Children's Healthcare Hospital, a specialist maternity hospital. The data analyzed in this study were retrieved from an integrated electronic medical records system at our institution and included patient hospitalization records, coded diagnoses, medications, surgical and other procedures, patient characteristics, the DII, the OAII (defined as the period from the end of drug injection until the anesthesiologist would allow the surgeon to commence an emergency CS), the DDI, and newborn and maternal condition. The 5-min Apgar score was documented as the primary outcome. The data were presented in an anonymous and standardized format.
Participants
The inclusion criteria were patients scheduled for emergency CS, classified as ASA physical status II-V, with indications such as acute severe fetal bradycardia, placental abruption, prolapse of the umbilical cord, uterine rupture, threatened uterine rupture, eclampsia, severe hemorrhage, amniotic fluid embolism, failure of instrumental extraction with fetal distress, and other life-threatening conditions for the newborn and/or mother. Individuals with incomplete information in the electronic file and those who underwent elective operations were excluded.
Procedures
The operating room designated for emergency CS in our obstetric delivery suite is available 24 h a day and is located just one minute away from the delivery ward, which is equipped with monitoring facilities for both mothers and newborns. Upon receiving notification of an impending emergency, a senior obstetrician assesses whether the emergency poses a threat to the mother and/or fetus. The obstetrician then immediately presses the emergency call button to alert the attending nurse, anesthesiologist, neonatologist, and midwife when an emergency CS is required. An epidural top-up is administered whenever feasible using either 15 ml of 2% lidocaine or 15-20 ml of 3% chloroprocaine, with or without sufentanil (20 µg) as an adjuvant. Alternatively, GA or CSEA may be performed in parturients who have contraindications to neuraxial anesthesia or whose epidural labor analgesia has not achieved an adequate T8 level. Parturients in the GA group received pure oxygen (100%) three minutes prior to induction of anesthesia, followed by rapid sequence induction via intravenous administration of propofol (1.5-2 mg/kg), remifentanil (1 µg/kg), and succinylcholine (2 mg/kg) to facilitate endotracheal tube insertion after loss of the corneal and palpebral reflexes. After clamping of the umbilical cord, midazolam (0.05 mg/kg) was administered, and maintenance of anesthesia involved continuous infusion of 1% propofol (80-120 µg/kg/min), sufentanil (0.1 µg/kg/min), and cisatracurium (2 µg/kg/min). In the CSE group, access to the epidural space was achieved using an 18G Tuohy needle at either the L3-4 or L4-5 interspinous space, followed by the injection of 2 ml of 0.75% ropivacaine into the subarachnoid space through a 26G Quincke needle using the needle-through-needle technique, along with an epidural top-up of 15 ml of 2% lidocaine.
Statistical analysis
Patient characteristics were summarized with descriptive statistics. The mean (standard deviation [SD]) and median (25th-75th percentile) were calculated for normally and nonnormally distributed quantitative variables, respectively. The normality of the distribution was determined using the Shapiro-Wilk test. Normally distributed values were analyzed using analysis of variance or an independent samples Student's t test, whereas the Kruskal-Wallis H test or Mann-Whitney U test was used for nonnormally distributed covariates. The χ2 test or Fisher's exact test was used to compare differences in categorical variables. Univariable logistic regression analysis was performed for each factor; factors with a P value below 0.1 were selected as candidates for multivariable regression analysis. Multivariable regression analysis was subsequently conducted to assess the associations between the candidate factors and neonatal height and weight. Missing data were handled by listwise deletion. The data analysis was conducted with IBM SPSS version 24.0. A P value less than 0.05 was considered statistically significant.
Results
A total of 571 parturients underwent emergency CS between January 2015 and July 2021 at our institution, and 539 patients were eventually included in this study. GA was administered for 337 emergency CSs, 70 of which followed epidural labor analgesia. EA was given to a total of 137 pregnant women, while CSEA was used for 65 individuals (Fig. 1). The characteristics of emergency CS under GA, EA, and CSEA are presented in Table 1.
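As a concrete illustration of the test-selection logic described in the Statistical analysis subsection, the following minimal Python sketch checks normality with the Shapiro-Wilk test and then chooses between a Student's t test and a Mann-Whitney U test. The two samples are randomly generated stand-ins, not study data; the study itself used SPSS.

```python
# Minimal sketch (hypothetical data) of normality-driven test selection:
# Shapiro-Wilk check, then t test (normal) or Mann-Whitney U (non-normal).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ga = rng.normal(7.0, 1.2, 40)   # e.g., DDI minutes under GA (invented)
ea = rng.normal(6.5, 1.1, 35)   # e.g., DDI minutes under EA (invented)

normal = all(stats.shapiro(g)[1] > 0.05 for g in (ga, ea))
if normal:
    stat, p = stats.ttest_ind(ga, ea)
    test = "Student t test"
else:
    stat, p = stats.mannwhitneyu(ga, ea)
    test = "Mann-Whitney U test"
print(f"{test}: statistic={stat:.2f}, P={p:.3f}")
```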
The Apgar scores at the first and fifth minutes were lower in the GA group than in the EA and CSE groups. The percentage of patients with an Apgar score < 7 at one minute was 10.4% under GA, versus only 0.7% for EA and 1.5% for CSEA. There was no significant difference in the incidence of Apgar scores < 7 at five minutes among the three groups (P > 0.05). No statistically significant difference was observed between the EA and CSE groups concerning a two-point decrease in the Apgar score (P > 0.05). The rate of Apgar scores < 3 at both one and five minutes in the GA group did not differ statistically from that in the neuraxial groups (P > 0.05). GA administration correlated with more frequent resuscitative interventions, increased NICU admission rates, and a greater incidence of NRDS in our analyzed patients. Nevertheless, the duration of NICU stay and the incidence rates of HIE and pneumonia did not differ significantly by type of anesthesia. No significant difference was detected regarding birth height or weight in the multivariate logistic regression analysis, although both indices were statistically lower in newborns delivered under GA (Tables 2 and 3).
The overall median DDI was 7 [6, 7] min. A DDI ≤ 5 min occurred in 91 (16.9%) women following emergency CS, and for 357 (66.2%) parturients, the DDI ranged from 5 to 10 min. A median DDI of 6 [6, 7] min was recorded for subjects receiving EA, and 26 (19.0%) of those patients had DDIs of less than 5 min. The EA group exhibited similarly short DDI and DII intervals compared to the GA group (P > 0.05). Compared to those of subjects in the GA or EA group, the DDI of 14 [11.5, 20.5] min and the DII of the CSE group were significantly greater (P < 0.05) (Table 4).
Among the obstetric patients in the GA group, 70 parturients had received labor epidural analgesia before undergoing emergency CS under general anesthesia. Compared to patients without labor epidural analgesia before the induction of GA, patients with preexisting labor epidural analgesia had a lower DII and a higher Apgar score at the 5th min (P < 0.05). No significant association was found between labor analgesia and birth weight according to the multivariate analysis (Table 3). There were no significant differences in DDI, OAII, duration of surgery, birth height, blood loss, or hospitalization (P > 0.05) (Table 5).
Discussion
As one of the most commonly performed surgeries worldwide, cesarean section is one mode of delivery for decreasing maternal and perinatal morbidity and mortality [12]. Hence, more attention should be given to the effect of anesthetic patterns on perinatal outcomes. In recent years, with the maturity of neuraxial anesthesia coupled with the improved safety of general anesthesia, there has been a reduction in anesthesia-associated obstetric mortality [13]. In this retrospective study, we demonstrated that epidural anesthesia had comparable potential relative to general anesthesia in terms of DDI, DII, and maternal outcomes, including blood loss, vasoactive drugs, and hospitalization. Moreover, higher Apgar scores at the 1st and 5th min as well as lower NICU admission rates were observed in newborns who received epidural anesthesia in comparison with those who received general anesthesia in our selected cases. In addition, the DDI in our institution was limited to within 30 min for nearly all obstetric patients, with 86.2% achieving a DDI of less than 10 min under GA and EA. Notably, the retrospective design limits the interpretation of the results, as selection bias cannot be fully excluded.
Guaranteeing safety for parturients undergoing CS in emergent cases remains a challenge for anesthesiologists. Typically, the indications for the type of anesthetic technique depend on the degree of urgency in relation to maternal and fetal status and comorbidities, as well as on the difficulty or expected duration of the procedure [5]. Although general anesthesia is a generally accepted choice in emergent situations due to its rapid and predictable onset, the procedure has several underlying side effects [14]. Notably, all obstetric patients are at high risk for pulmonary aspiration when receiving general anesthesia [15], with an eightfold higher risk of failed intubation and aspiration than that of non-obstetric patients [16]. An investigation by Kinsella et al. revealed that the incidence of obstetric failed tracheal intubation remained stable at 2.3 per 1000 GAs for CS, and maternal mortality from failed intubation was 2.3 per 100,000 GAs, while aspiration or hypoxemia was secondary to airway obstruction or esophageal intubation [17]. In addition, pregnant women diagnosed with severe preeclampsia and undergoing emergency CS are prone to stroke due to surges in blood pressure and neuroendocrine stress responses when GA is performed without the addition of opioids [18]. A retrospective cohort analysis of 194 code-red cesarean sections conducted by Cyril et al. verified the close relationship between GA and poorer well-being of newborns [19]. Algert et al.
reported that among infants who required intubation, a 5-min Apgar score < 7 was more common in those delivered under GA than in those delivered under regional anesthesia [20]. In our study, babies delivered under GA had lower Apgar scores at both 1 and 5 min compared with those who received intraspinal anesthesia, which is analogous to the findings of previous studies. In addition, more frequent resuscitations and transfers to the NICU were observed in patients who received GA in our cohort. A possible explanation we postulated, as mentioned in previous studies, is that general anesthesia may affect neonatal condition to some extent because of transient sedation of the neonate by the anesthetic drugs [19]. However, a causal relationship could not be drawn in our study considering the small sample size and the retrospective design.
The advantages of neuraxial anesthesia, the anesthetic technique preferred by anesthetists for cesarean section in emergent situations, include avoidance of the potential complications of difficult airways, aspiration of gastric contents, and neonatal exposure to anesthetic drugs applied during anesthetic induction and maintenance of GA [5,14], as well as the requirement of only a low dose and concentration of local anesthetics [2]. Hence, some scholars have proposed that regional anesthesia should be executed whenever possible, as it is associated with shorter hospital stays, less maternal morbidity, and higher Apgar scores and umbilical blood pH values in neonates [2,20]. In addition, the conversion of epidural analgesia to surgical anesthesia for emergency cesarean delivery in parturients with an effective labor epidural catheter was not associated with poorer outcomes in newborns [21]. In this retrospective study, a lower incidence of Apgar scores < 3 was recorded for infants delivered via EA or CSEA, and the hospitalization of patients in the CSE group was shorter than that of patients in the GA or EA group.
The DDI was defined as the time from recognition of an abnormality on fetal heart tracing using cardiotocography and the decision to proceed with operative delivery to the delivery of the fetus. To date, no consensus has been reached concerning the ideal DDI, a quality indicator of emergency CS, or its influence on maternal outcome and neonatal well-being [22]. A time recommendation limiting the DDI to 30 min for emergency CS has been advocated by the Royal College of Obstetricians and Gynaecologists as well as the American College of Obstetricians and Gynecologists [23]. However, the incidence of DDI ≤ 30 min was reported to be only 17.5% in a retrospective cross-sectional study of 510 mothers who underwent emergency CS [24]. A prospective study analyzing 163 category-1 emergency cesarean sections reported an average DDI of 42 ± 21.4 min, with only 19.6% of women having a DDI below 30 min [25]. In our unit, delivery could be achieved within the recommended time interval: a DDI within 10 min was achieved for 86.2% of parturients, and a DDI ≤ 15 min for 93.7%. In addition, compared to patients without labor epidural analgesia before the induction of GA, no significant difference in DDI was found for patients administered labor epidural analgesia beforehand. The DII of the GA subgroup with preexisting labor epidural analgesia was lower than that of the subgroup without labor epidural analgesia before GA, which can be ascribed, to some extent, to the effect of labor epidural analgesia.
In addition, we found similarly short DDI intervals in the GA and EA groups, differing from several lines of evidence indicating that regional anesthesia is associated with a prolonged DDI compared to general anesthesia [26]. The epidural top-up through an epidural catheter already inserted and providing effective analgesia might be the predominant factor shortening the time interval for epidural anesthesia [19,27]. Another possible reason might be that the administration of chloroprocaine accelerated the onset of EA. In our unit, we also noted that performing CSEA was more time-consuming than performing GA or EA, as reflected by the prolonged DDI. However, the sample of CSEA patients in our study was so small that further study is warranted to verify the reliability of the results.
There are some limitations to the present study. First, our study was a single-center retrospective analysis; therefore, confounding effects and bias are inevitable to some extent. Second, the present results should not be extrapolated to other surgical types considering the single-center nature of the study, the small sample sizes, and the retrospective design, and further investigations are warranted to confirm the results. Third, we did not record umbilical blood pH values, which reflect neonatal outcomes. Finally, long-term outcomes were not measured in our study.
Conclusion
Epidural anesthesia may not be associated with a negative impact on neonatal or maternal outcomes compared to general anesthesia and could be utilized as an alternative to general anesthesia in our selected patient population undergoing emergency cesarean section. In addition, a comparably short DDI was achieved for emergency cesarean delivery under epidural anesthesia when compared to general anesthesia in our study.
Table 1 Characteristics of parturients who underwent emergency cesarean section under general or neuraxial anesthesia. Data are presented as mean ± standard deviation (Mean ± SD), median [P25, P75], or number (percentage). GA, general anesthesia; EA, epidural anesthesia; CSE, combined spinal-epidural anesthesia; BMI, body mass index; ASA, American Society of Anesthesiologists. Preexisting labor analgesia: epidural labor analgesia performed prior to emergency CS. a: p < 0.05 in comparison with the CSE group.
Table 2 Outcomes of neonates delivered by emergency cesarean section under general or neuraxial anesthesia. Data are expressed as mean ± standard deviation (Mean ± SD), median [P25, P75], or number (percentage). NRDS, neonatal respiratory distress syndrome; NHIE, neonatal hypoxic ischemic encephalopathy; PDA, patent ductus arteriosus; PFO, patent foramen ovale. a: p < 0.05 in comparison with the EA group; b: p < 0.05 in comparison with the CSE group.
Table 3 Multivariate logistic regression analysis of factors associated with neonatal height and weight.
Table 4 Intraoperative outcomes of parturients who underwent emergency cesarean section under general or neuraxial anesthesia. Data are expressed as median [P25, P75] or number (percentage). DDI, decision to delivery interval; DII, decision to incision interval; OAII, onset of anesthesia to incision interval. a: p < 0.05 in comparison with the EA group; b: p < 0.05 in comparison with the CSE group.
Table 5 Data of parturients according to labor epidural analgesia before emergency CS under general anesthesia.
Lipid Oxidation and Color Changes of Fresh Camel Meat Stored Under Different Atmosphere Packaging Systems
Introduction
Extension of the shelf-life of meat is one of the technological necessities for meeting consumer demands. In this respect, increasing attention has been put on packaging techniques. Modified atmosphere packaging (MAP) is a recent innovation that has been gaining importance as a preservation technique to improve the shelf-life of meat. Retention of meat color is better in MAP than in either vacuum packaging or in air [1]. Modified atmosphere packaging has been used for an increased distribution range and longer shelf-life. The effects and roles of the gases normally used in modified atmospheres (O2, CO2 and N2) have been extensively reported [2-4]. Hood and Mead (1995) indicated that the effects which the gas mixture produces on meat quality, such as color and shelf life, are the principal factors to consider when choosing the gas mixture [5]. In addition, Gill affirmed that the principal factors to be addressed in the preservation of chilled meat are the retention of an attractive, fresh appearance for the displayed product and the retardation of bacterial spoilage [3]. Several studies have been carried out on the physical and chemical composition, sensory properties, and nutritive value of camel meat [6-11], but no data have been published on the preservation of fresh camel meat by modified atmosphere packaging. Our objective was to investigate the color and lipid oxidation changes of fresh camel meat under modified atmosphere packaging during refrigerated storage.
Sampling preparation and packaging
Camel meat samples were obtained at a slaughterhouse (Tehran, Iran). Any visible fat was removed from the muscle tissues. A Turbovac packaging machine, model A 200 (Henkelman, the Netherlands), was used for packing.
Meat samples were randomly assigned to one of three types of atmosphere packaging (AP: air packaging; VP: vacuum packaging; MAP: 60% CO2 + 40% N2) using sterile polyester-polyethylene (PET/Poly) pouches (thickness: 62 µm).
Lipid oxidation
Lipid oxidation was evaluated by the determination of thiobarbituric acid reactive substances (TBARS) using the extraction method described by Witte et al. [12]. Twenty grams of the minced meat were blended with 50 mL of a cold solution containing 20% trichloroacetic acid in 2 M phosphoric acid for 2 min. The resulting slurry was then transferred into a 100 mL volumetric flask, diluted to 100 mL with double-distilled water, homogenized by shaking, and filtered through Whatman no. 1 filter paper. Five mL of the filtrate was then pipetted into a test tube, and 5 mL of fresh chilled 2-thiobarbituric acid (0.005 M in double-distilled water) was added. The test tube was shaken well and placed in the dark at room temperature (25°C) for 15 h to develop the color reaction. The resulting color was measured in a spectrophotometer at 530 nm to calculate the TBARS value. The results were expressed as mg malonaldehyde/kg meat.
Color measurement
Color was recorded using a Minolta Chroma Meter CR-400 (Konica Minolta, Japan). Readings were taken at the center of each steak. Color was expressed in the CIELAB system as L* (lightness), a* (redness) and b* (yellowness) [13].
Sensory analysis
Camel meat samples were evaluated by eight semi-trained panelists, consisting of staff members in the Dept. of Meat Science, University of Tehran. Panelists were given a 30-min orientation about the appearance (color), odor, texture and overall quality of fresh camel meat. Acceptability of raw meat was evaluated using a 9-point hedonic scale, where 9 = like extremely, 8 = like very much, 7 = like moderately, 6 = like slightly, 5 = neither like nor dislike, 4 = dislike slightly, 3 = dislike moderately, 2 = dislike very much, and 1 = dislike extremely [14]. Scores from 6 to 9 were considered acceptable [15]. Evaluation was performed under cool white fluorescent light in the sensory laboratory. The same meat samples were evaluated over the storage times. The shelf-life limit was defined as the point when 50% of the panelists rejected the sample.
Statistical analysis
The data were analyzed using analysis of variance to determine the effects of packaging type on the parameters of meat quality: color, lipid oxidation and shear force (SF). When the differences among packaging types were significant (P<0.05), Tukey's test was carried out to check the differences between pairs of groups. The effect of storage within each packing treatment on meat quality was also analyzed using Tukey's test at a significance level of P<0.05. Data were analyzed using the SAS (1988) statistical package [16].
Results and discussion
Lipid oxidation: It could be supposed that the low intensity of the oxidative processes [17] was due to the rearing mode of the studied animals, which provided natural antioxidants, such as vitamin E and carotenoids [18]. It has been shown that pasture feeding [19] significantly increases the content of vitamin E in bovine muscles and hence reduces the development of oxidation in meat.
Color
Results of the color measurements are shown in Table 2. Initial values for L*, a*, b* and Chroma were 34.49, 20.12, 7.59 and 22.74, respectively. The L* value increased with time in all groups by day 21, reaching significant levels (P<0.05) in AP; this is usually attributed to the oxidation of heme pigments [20].
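The ANOVA-plus-Tukey scheme described under Statistical analysis can be sketched in a few lines of Python; the original work used SAS, so this only mirrors the workflow, and the TBARS values below are invented purely for illustration.

```python
# Minimal sketch of one-way ANOVA followed by Tukey's test across the
# three packaging treatments. TBARS values are invented, not study data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

tbars = np.array([0.42, 0.45, 0.48, 0.30, 0.33, 0.31, 0.22, 0.25, 0.24])
group = np.array(["AP"] * 3 + ["VP"] * 3 + ["MAP"] * 3)

f_stat, p = stats.f_oneway(tbars[group == "AP"],
                           tbars[group == "VP"],
                           tbars[group == "MAP"])
print(f"ANOVA: F={f_stat:.2f}, P={p:.4f}")
if p < 0.05:
    # Pairwise comparisons are run only when the overall effect is significant
    print(pairwise_tukeyhsd(tbars, group, alpha=0.05))
```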
The lowest L* values after 21 days corresponded to samples under vacuum packaging, which differed significantly from samples packed under air. Parameter b* (yellowness) increased over 21 days of storage only for camel meat packed under air; no significant differences were found between samples packed under vacuum and MAP at the end of 21 days for yellowness. Differences in b* over the storage period could be related to the intensity of the oxidation process that takes place during storage, which might tend to increase the yellowness of samples through rancidity, although no measures of oxidation intensity are available to support this hypothesis. The a* (redness) value under air packaging decreased significantly (P<0.01) over the same storage time. On the other hand, a decrease in a* values due to the oxygen content in AP would reflect myoglobin oxidation. Mercier et al. (1998) observed an increase in the hue angle (arctan(b*/a*)) of stored turkey pectoralis muscle, suggesting a degree of change from red to yellow, an indication of increased oxidation with time [21]. In the present study, calculation of hue angle values (not reported) showed an increase for air-packaged camel meat during storage, whereas values for vacuum-packaged camel meat remained relatively stable. The more rapid changes in the L*, a* and b* values of air-packaged samples suggest that this gas is responsible for the deterioration of colour. Moore and Gill (1987) also found increases in L* and b* values with time, in agreement with our results [22]. The increase in b* may be associated with the transformation of the meat pigment and the formation of metmyoglobin, which is faster at relatively low oxygen concentrations [23]. Our results show that a mixture with 30% CO2 and 70% N2 maintains a good colour for up to 21 days at 4 ± 1°C in the absence of O2. Chroma showed an opposite co-variation with L*. In both parameters there were significant differences among groups only from 21 days onward. The forward stepwise logistic regression model of acceptance was statistically significant (P<0.01) and showed that acceptance was affected (P<0.01) by Chroma, storage time and MAP. Samples stored under MAP gases were accepted for a longer time than the other groups. Gas composition in the packs (Table 1) was associated with the changes in colour and the probability of being accepted. In agreement with other authors [24], our study found a slower discoloration of samples stored with higher proportions of CO2, this being more evident in the MAP treatment.
Sensory analysis
The camel meat was evaluated for changes in surface color, texture, and odor by semi-trained panelists. By the end of the storage time (at day 21), MAP samples were acceptable (scores > 6), and significant differences (P<0.05) were found relative to the other packaging systems for all sensory attributes. The surface color of the samples in MAP was not severely discolored and remained acceptable even after 21 days of storage. The storage-time effect within treatments indicated that surface discoloration increased (P<0.05), especially at day 14 in air-packaged samples (Table 2). At day 21, the surface colour of samples packed with MAP remained unchanged (P>0.05). The data suggest that MAP with high CO2 protected the surface color. The colour and odor changes in meats are highly dependent upon the packaging condition [25]. Panelists rejected air-packaged samples after 14 days of storage at 4°C, but MAP extended the shelf life of fresh camel meat refrigerated at 4°C to more than 21 days.
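Since the hue-angle values computed in this study are not tabulated, a two-line check of the formula quoted above (hue = arctan(b*/a*)) with the reported initial color values may help the reader; this is illustrative arithmetic, not additional study data.

```python
# Hue angle from the initial CIELAB values reported in the text.
import math

L, a, b = 34.49, 20.12, 7.59       # initial L*, a*, b* of the camel meat
hue_deg = math.degrees(math.atan2(b, a))
print(f"hue angle = {hue_deg:.1f} degrees")  # ~20.7 deg; growth means a shift from red toward yellow
```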
Conclusion
In this study we observed the evolution of the main parameters that affect camel meat quality (colour, lipid oxidation and shear force) when the meat was preserved in modified atmospheres with different gas mixtures. For colour, the values obtained indicated that MAP was the best of the systems tested. Modified atmosphere packaging of fresh camel meat with high CO2 reduced the rate of increase of lipid oxidation during storage. Our study showed that even though oxidative rancidity (TBARS) increased with storage time in all packed samples, it did not result in the deterioration of sensory quality under MAP. This indicates that lipid oxidation is not a major problem in MA-packaged fresh camel meat stored at 4°C for up to 21 days. In summary, packaging fresh camel meat under MAP (60% CO2 + 40% N2) combined with refrigerated storage extended product shelf life to at least 3 weeks without undesirable or detrimental effects on its sensory acceptability.
Physical activity and fitness are associated with verbal memory, quality of life and depression among nursing home residents: preliminary data of a randomized controlled trial
Background Few studies have simultaneously examined changes in physical, cognitive and emotional performance throughout the aging process. Methods Baseline data from an ongoing experimental randomized study were analyzed. Physical activity, handgrip, the Senior Fitness Test, Trail Making Test A, Rey Auditory-Verbal Learning Test, Quality of Life-Alzheimer's Disease Scale (QoL-AD) and the Goldberg Depression Scale were used to assess study participants. Logistic regression models were applied. Results The study enrolled 114 participants with a mean age of 84.9 (standard deviation 6.9) years from ten different nursing homes. After adjusting for age, gender and education level, upper limb muscle strength was found to be associated with the Rey Auditory-Verbal Learning Test [EXP(B): 1.16, 95% confidence interval (CI): 1.04-1.30] and QoL-AD [EXP(B): 1.18, 95% CI: 1.06-1.31]. Similarly, the number of steps taken per day was negatively associated with the risk of depression according to the Goldberg Depression Scale [EXP(B): 1.14, 95% CI: 1.000-1.003]. Additional analyses suggest that the factors associated with these variables differ according to the need for an assistive device for walking. In participants who used one, upper limb muscle strength remained associated with the Rey Auditory-Verbal Learning Test [EXP(B): 1.21, 95% CI: 1.01-1.44] and QoL-AD tests [EXP(B): 1.19, 95% CI: 1.02-1.40]. In individuals who did not need an assistive device for walking, lower limb muscle strength was associated with the Rey Auditory-Verbal Learning Test [EXP(B): 1.35, 95% CI: 1.07-1.69], time spent in light physical activity was associated with the QoL-AD test [EXP(B): 1.13, 95% CI: 1.00-1.02], and the number of steps walked per day was negatively associated with the risk of depression according to the Goldberg Depression Scale [EXP(B): 1.27, 95% CI: 1.000-1.004]. Conclusions Muscle strength and physical activity are factors positively associated with better performance on the Rey Auditory-Verbal Learning Test, QoL-AD and Goldberg Depression Scale in older adults with mild to moderate cognitive impairment living in nursing homes. These associations appeared to differ according to the use of an assistive device for walking. Our findings support the implementation of interventions directed at increasing the strength and physical activity of individuals living in nursing homes to promote physical, cognitive and emotional benefits. Trial registration ACTRN12616001044415 (04/08/2016).
Background
Aging is a dynamic and progressive decline in physical and cognitive performance leading to the loss of overall function for the activities of daily living. Increasing evidence supports an interaction between physical and cognitive impairment within the cycle of decline associated with aging [1]. In other words, brain health is strongly linked to physical health, and physical performance is, to a large extent, thought to be cognitively mediated. Moreover, physical activity and exercise, as beneficial lifestyle factors, may attenuate or prevent the cognitive decline associated with aging [2-4].
Multiple studies have highlighted the beneficial effects of aerobic exercise [5,6], resistance training [7] and physical activity [8,9] on cognitive function in older adults, although the neurophysiologic mechanisms driving these effects are not well understood. Further, physical and cognitive function could be linked to health-related quality of life [10] (QoL) and affective conditions [11] in older adults. Previous works examining these relationships are largely restricted to people with cognitive impairments. Nevertheless, a longitudinal study performed in healthy older community-dwelling adults found that greater levels of physical activity were independently associated with better long-term health-related QoL over a follow-up period of six years [12]. Despite the evidence supporting associations between the physical, cognitive and affective aspects of the aging process, few studies have considered these conditions simultaneously. In addition, to our knowledge, no such studies have focused on older adults who live in nursing home settings, although this is one of the fastest-growing demographics worldwide [13]. Older adults living in nursing homes are characterized by old age, a high prevalence of multimorbidity, functional impairment, severe cognitive deficits, depression, and very low physical activity [14]. However, there is a subgroup of residents that maintains the ability to walk, and some of these residents even present wandering behavior [15]. Many residents of nursing homes require assistive walking devices to carry out the activities of daily living. The need to involve the upper limbs in getting up from a chair or in bearing body weight while walking will affect their physical performance, specifically those features associated with the muscle strength of the upper limbs. Therefore, it is pertinent to think that, if associations between physical, cognitive and affective aspects exist, they could be conditioned by the need to use assistive devices for walking. Further, although recent initiatives have aimed at improving the quality of care in nursing homes [16,17], physical and social inactivity remain a concern in these institutions [18,19]. Investigating the associations between physical, cognitive and affective aspects in older adults living in nursing homes may provide valuable insights for guiding clinical practice and consequently support nursing home management in evidence-based decisions. With this in mind, we sought to evaluate the associations between physical fitness and physical activity on the one hand, and cognitive performance, QoL and depression risk on the other, in older adults living in long-term (LT) nursing homes. We hypothesized that better physical fitness and higher levels of physical activity might be independent factors for better cognitive performance, better QoL and a lower risk of depression in older adults living in LT nursing homes. Secondarily, we examined whether these potential associations could differ for residents who require an assistive device for walking (for example, crutches or canes).
Study design and participants
Data from a multicenter, randomized study carried out in ten LT nursing homes between October 2016 and June 2017 were available for analysis in this study. Seven residents out of 206 potential participants did not meet the inclusion criteria, 83 declined to participate, and two did not sign the informed consent document, leaving 114 participants. A flow diagram depicting the selection process is shown in Fig. 1.
Details of the methods for designing and conducting the study were previously published [20]. Briefly, eligible participants included men and women aged ≥70 years who scored ≥50 on the Barthel Index [21], scored ≥20 on the MEC-35 Test [22] (an adapted and validated Spanish version of the Mini Mental State Examination), and were able to stand and walk independently for at least ten meters. The study was approved by the Committee on Ethics in Research at the University of the Basque Country (Humans Committee Code M10/2016/105). The protocol is registered under the Australian and New Zealand Clinical Trials Registry (ANZCTR) with the identifier ACTRN12616001044415. Date of registration: 04/08/2016.
Measurements
Physical activity performed by the participants was objectively recorded with an accelerometer (Actigraph GT3X model, Actigraph LLC, Pensacola, FL, USA) worn on the hip with a belt for seven days. Activity was recorded using 60-s epochs. Data files recorded on the accelerometers were downloaded and processed with Actilife software (version 6, Actigraph, 2012). The analyzed variables were the number of steps per day and the number of minutes per day spent in intensity-specific categories. Selecting cut-off points to classify the intensity of physical activity in older people is difficult because there is no current consensus in the scientific literature. Thus, we followed the protocol developed by Freedson and collaborators [23], where the cut-off point for light physical activity was set in the range of 100-1951 counts per minute (cpm), and moderate to vigorous physical activity (MVPA) was defined as all activity ≥1952 cpm. The number of minutes per day at each intensity was calculated by summing all minutes where the count met the criterion for the specific intensity and then dividing by the number of valid days.
Physical fitness was assessed through the handgrip strength test [24] (Jamar dynamometer) of the dominant upper limb and the Senior Fitness Test [25] (SFT), a battery of six independent tests encompassing the chair stand test (lower limb strength), arm curl test (upper limb strength), six-minute walking test (6MWT) (aerobic endurance), chair sit-and-reach test (lower limb flexibility), back scratch test (upper limb flexibility) and the 8-ft up-and-go test (dynamic balance).
Cognitive performance tests were administered by the same trained neuropsychologist; assessments were carried out individually in participants' own rooms. The MEC-35 test [22] was used for screening and scaling cognitive impairment. Trail Making Test A [26] was administered to assess the speed of information processing and, more concretely, aspects of motor control, motor speed and visual scanning speed. To administer the Trail Making Test A, participants were instructed to draw lines connecting consecutively numbered circles as quickly as possible. The resultant score is the number of seconds required to complete the task; a shorter time indicates better performance. A Spanish validated version of the Rey Auditory-Verbal Learning Test [27] (RAVLT) was administered to assess verbal memory. The test lasted approximately 15 min and consisted of two lists read aloud by the evaluator: one of 15 words (List A) and a new list of 15 different words (List B). The participant was asked to freely recall the words read aloud from List A. Four more trials were performed in the same way. After five trials, List B was presented, and a free recall trial was requested for the words in List B.
Immediately after, participants were asked to freely recall the words in List A again. Twenty minutes later, the participants were asked to recall the words on List A once more. Then, the evaluator read aloud the 30 words from List A and List B, and the participants were asked to recognize the words from List A. Even though the RAVLT can be analyzed trial by trial, the authors recommend establishing different measures for its clinical use [28]. In this study, the Total Learning measure (RAVLT-AT) was calculated, which evaluates the capacity to recall and accumulate words across the 5 learning trials. The RAVLT-AT score is the sum of the five consecutive learning trials (trial 1 + trial 2 + trial 3 + trial 4 + trial 5).
Health-related quality of life was evaluated by a Spanish validated version of the Quality of Life-Alzheimer's Disease Scale [29] (QoL-AD). Considering that many of the participants showed different levels of cognitive impairment or were at risk of developing dementia during the program, we selected the QoL-AD scale as the best tool for assessing health-related quality of life in our participants. The scale comprises 13 items (physical health, energy, mood, living situation, memory, family, marriage, friends, self as a whole, ability to do chores, ability to do things for fun, money and life as a whole). Each item is answered on a Likert scale from 1 (poor) to 4 (excellent), for a total score between 13 and 52, with higher scores indicating better QoL. Depression was measured by the Goldberg Depression Scale [30] (GDS), which comprises four screening items and five supplementary ones. Participants who respond positively to two or more screening items go on to answer the following five. Participants scoring two or more have a 50% chance of having a clinically important disturbance of depression.
Statistical analysis
Continuous variables were expressed as means with standard deviations (SD), and categorical variables as frequency counts and percentages (%). Taking into account that up-to-date reference values for the dependent variables MEC-35, Trail Making A, RAVLT-AT, and QoL-AD have not been reported for older adults living in nursing homes, the cut-off point in the current study was set at the median, as in other studies [31,32]. Thus, the dependent continuous variables were transformed into binary variables according to whether they had a value above or below their median. Comparisons of sociodemographic characteristics, physical fitness and physical activity between participants who were above or below the median on the MEC-35, Trail Making A, RAVLT-AT, QoL-AD and Goldberg Depression Scale were performed using appropriate statistical tests according to the type and distribution of the data: t-test or Mann-Whitney U-test for continuous variables and chi-squared test for categorical variables. A p value < 0.05 was considered significant. We also performed logistic regressions, with demographic, accelerometry and physical fitness data as independent variables, and cognitive performance tests, health-related quality of life and depression risk data as dependent variables. Those variables that reached a p value < 0.05 on univariate analysis were considered eligible for entry into the multiple logistic regression analysis. Backward regression models were then fitted. All multiple models were adjusted for age and gender. In addition, Hosmer-Lemeshow goodness-of-fit, Omnibus and Nagelkerke's R² values were specified for each model.
A Hosmer-Lemeshow test was used to determine the goodness-of-fit of the models, that is, to determine whether the observed event rates matched the expected ones; a value closer to 1 indicates a better fit. The Omnibus test was used to test whether the explained variance was significantly greater than the unexplained variance; a p value < 0.05 was considered significant. Nagelkerke's R2 values estimated the proportion of variance in the dependent variable explained by the independent variables. Then, the sample was divided according to the need for an assistive device for walking, and multiple regression models were fitted for each group. Statistical analysis was performed using SPSS v.21 software.

Characteristics of study participants

This study adheres to the Consolidated Standards of Reporting Trials (CONSORT) guidelines. The study included 114 participants from ten long-term (LT) nursing homes. Accelerometer monitoring showed that they performed very low levels of physical activity during the day (Table 1). Further, 46% of the participants scored lower than the median level of QoL-AD, and 25% of the participants had a 50% chance of having a clinically important disturbance of depression according to the Goldberg Depression Scale. In addition, 55% of the participants needed an assistive device for carrying out the activities of daily living.

Participant characteristics according to their performance on the MEC-35 test, Trail Making Test A, RAVLT-AT, QoL-AD and Goldberg Depression Scale

Individuals who scored below the median of the sample on the MEC-35 test presented less flexibility of the lower limbs (p = 0.011) compared to the participants scoring equal to or higher than the median of the sample (Table 2). Similarly, participants who scored below the median of the sample on the RAVLT-AT test presented lower muscle strength in both upper and lower limbs (chair stand test p = 0.009; arm curl test p = 0.005), along with lower flexibility in the chair sit-and-reach test (p = 0.015), than individuals scoring equal to or higher than the median of the sample. In addition, those perceiving their QoL below the median of the sample presented lower levels of physical activity (steps/day p = 0.007; light physical activity p = 0.013) and lower levels of muscle strength (handgrip test p = 0.032; chair stand test p = 0.025; arm curl test p = 0.002) compared to those perceiving their QoL equal to or higher than the median (Table 3). Older adults with a 50% chance of having a clinically relevant disturbance of depression according to the Goldberg Depression Scale scored lower in terms of physical activity (steps/day p = 0.004; light physical activity p = 0.009) and lower-body muscle strength (chair stand test p = 0.048; arm curl test p = 0.004) than older adults with no risk of depression. There were no significant differences in physical fitness or physical activity between those individuals above and below the median in Trail Making Test A performance.

Logistic regression models

We applied univariate logistic regression models to determine associations between each dependent and independent variable (Appendices 1, 2, 3, 4, 5 and 6). Those independent variables that reached a p value < 0.05 on the univariate analysis were included in the multiple logistic regression models detailed below (Tables 4, 5, 6 and 7).
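A minimal sketch of this modelling strategy follows, assuming the median dichotomization has already been done upstream: a univariate screen at p < 0.05, backward elimination with age and gender forced into the model, and Nagelkerke's R2. The column names, the simulated data and the statsmodels-based implementation are illustrative assumptions; the original analysis was run in SPSS.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_logit(df, y, cols):
    X = sm.add_constant(df[list(cols)])
    return sm.Logit(df[y], X).fit(disp=0)

def backward(df, y, candidates, forced=("age", "gender"), alpha=0.05):
    # Backward elimination; 'forced' adjustment covariates are never dropped.
    cols = list(dict.fromkeys(list(candidates) + list(forced)))
    while True:
        res = fit_logit(df, y, cols)
        p = res.pvalues.drop("const").drop(list(forced), errors="ignore")
        if p.empty or p.max() < alpha:
            return res
        cols.remove(p.idxmax())  # drop the least significant predictor

def nagelkerke_r2(res):
    # Nagelkerke's R2 from the fitted and null log-likelihoods.
    n = res.nobs
    cox_snell = 1 - np.exp(2 * (res.llnull - res.llf) / n)
    return cox_snell / (1 - np.exp(2 * res.llnull / n))

# Hypothetical data; the binary outcome is the upstream median split.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.normal(84, 6, 114),
    "gender": rng.integers(0, 2, 114),
    "arm_curl": rng.normal(10, 3, 114),
    "steps_day": rng.normal(2000, 900, 114),
})
df["ravlt_above_median"] = (rng.random(114) < 0.5).astype(int)

# Univariate screen at p < 0.05, then the adjusted multiple model.
screened = [v for v in ("arm_curl", "steps_day")
            if fit_logit(df, "ravlt_above_median", [v]).pvalues[v] < 0.05]
final = backward(df, "ravlt_above_median", screened)
print(np.exp(final.params))   # odds ratios
print(nagelkerke_r2(final))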
Factors associated with RAVLT-AT test performance

After adjusting for age, gender and education level, the variables that were associated with a RAVLT-AT test score equal to or above the median of the sample were upper limb muscle strength and lower limb flexibility (Table 4). To address the second objective of the study, we divided the sample according to the need for an assistive device for walking and performed the same regression model in each group. After adjusting for age, gender and education level, lower limb muscle strength was associated with a RAVLT-AT score equal to or above the median in those individuals who did not need an assistive device for walking. Upper limb muscle strength and flexibility in the chair sit-and-reach test were associated with a score equal to or higher than the median on the RAVLT-AT test in those individuals needing an assistive device for walking.

Factors associated with QoL-AD test performance

Regarding QoL, the multiple regression model performed on the whole sample revealed that upper limb muscle strength was associated with a score equal to or higher than the median on the QoL-AD test (Table 5). When we stratified the sample according to the use of an assistive device for walking, light physical activity was associated with a QoL-AD score equal to or higher than the median in those individuals who did not need any help for walking, while upper limb muscle strength was the associated variable for those needing assistance.

Factors associated with Goldberg Depression Scale performance

In the regression model performed on the whole sample, the number of steps/day walked by the participants was associated with the absence of risk of depression according to the Goldberg Depression Scale (Table 6). In those individuals who did not need an assistive device for walking, the number of steps/day was again associated with the absence of risk of depression. In addition, in those individuals needing walking assistance, female gender was associated with a 50% greater risk of depression.

Factors associated with MEC-35 test performance

Finally, when the whole sample was analyzed, the chair sit-and-reach test was associated with performance on the MEC-35 test equal to or above the median of the sample (Table 7). In the stratified analyses, no independent variables were found to be associated with a higher score on the MEC-35 test.

Discussion

The results of this study showed that physical fitness and, more specifically, upper limb muscle strength were associated with the RAVLT-AT and QoL-AD tests in older adults living in LT nursing homes. Similarly, the number of steps taken by the participants per day was negatively associated with the risk of depression according to the Goldberg Depression Scale. Lower limb flexibility was also associated with a better score on the MEC-35 test. Additional analyses suggest that the factors associated with these variables differ according to the need for an assistive device for walking. In those participants who used an assistive device for walking, upper limb muscle strength remained associated with the RAVLT-AT and QoL-AD tests. In those individuals who did not need an assistive device for walking, lower limb muscle strength was associated with the RAVLT-AT test, the time spent in light physical activity proved to be associated with the QoL-AD test, and the number of steps walked by the participants remained a factor negatively associated with the risk of depression according to the Goldberg Depression Scale.
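The percentage figures quoted in this Discussion follow the conventional reading of logistic regression odds ratios: for a one-unit increase in a predictor, the odds are multiplied by OR = exp(beta), i.e. a (OR - 1) x 100 percent change in the odds of the outcome. A minimal check with a hypothetical coefficient:

import math

beta = 0.148                 # hypothetical logistic coefficient for arm curl
odds_ratio = math.exp(beta)  # ~1.16
print(f"{(odds_ratio - 1) * 100:.0f}% higher odds per extra repetition")

Strictly speaking, this is a change in the odds rather than in the probability, although the two are close for modest odds ratios.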
The results of the current study partially support our hypothesis that better physical fitness and higher levels of physical activity might be factors associated with better performance on the RAVLT-AT test, the QoL-AD test, the MEC-35 test or the Goldberg Depression Scale. However, we found that specific parameters of physical fitness (muscle strength and the level of physical activity in particular) were associated with specific cognitive variables. Other studies have recently observed this specificity in the link between physical and cognitive performance in the older adult population. An intervention study [33] reported a dose-response effect of aerobic exercise on components of visuospatial function in a group of community-living, sedentary older adults without cognitive impairment. Another prospective study [34] found a dose-response effect of resistance training on the executive cognitive functions of selective attention and conflict resolution among community-dwelling women aged 65 to 75 years. In addition, links between physical activity and processing speed have also been observed [3,35,36]. Nevertheless, to our knowledge, no study has assessed this specificity in the associations between physical and cognitive variables in nursing home residents [37,38]. Thus, this is the first study identifying muscle strength and physical activity as factors that could explain better verbal memory, better QoL and lower risk of depression in older adults living in LT nursing homes. The regression model showed that for a one-unit increase in the arm curl test (one repetition), the probability of performing at or above the median on the RAVLT-AT test increased by 16%. This is a novel finding on the potential mediating effects of muscle strength on the verbal memory capacity of the participants. This result is in agreement with other studies that have identified strength as a factor mediating cognitive adaptations in older adults [38-40]. Yet, data on the effects of resistance-based exercise programs on cognitive parameters are scarce. Including a combination of multiple exercise modalities, particularly resistance training, in long-term exercise programs is reported to enhance cognition in the older population to a greater extent than programs including only aerobic training [3]. In addition, the evidence concerning the possible association of muscle strength with QoL is more limited. Further, a one-unit increase in the arm curl test (one repetition) also increased the probability of performing at or above the median on the QoL-AD test, by 18%. Thus, the current study provides new data on the potential associations between muscle strength and the RAVLT-AT and QoL-AD tests that warrant further investigation. It could be hypothesized that encouraging older adults living in LT nursing homes to engage in exercise programs that include resistance training could benefit not only physical but also cognitive function. In addition, for an increase of 100 steps/day in the physical activity of the participants, the probability of being in the group with no risk of depression according to the Goldberg Depression Scale increased by 14%. Hence, physical activity could be proposed as a protective factor reducing the risk of suffering from depression. This result aligns with other studies finding that depression in older people living in nursing homes is correlated, among other factors, with the activities performed outside the nursing home [41].
Thus, the higher their level of physical activity, the more opportunities could arise for residents to visit personally meaningful places and to interact socially with others. In fact, the objectively measured physical activity of the participants was extremely low, which is consistent with previous studies reporting that nursing home residents' life-space (that is, the spatial extension of an individual's environment that s/he moves in during a specified time period [42]) is severely limited to private rooms and adjacent living units [43]. Thus, there is nowadays sufficient evidence to support the urgent implementation of interventions aimed at encouraging physical activity in older adults living in nursing homes. For a one-unit increase in the chair sit-and-reach test (one cm), the probability of performing at or above the median on the MEC-35 test increased by 6%. This unexpected finding of an association between flexibility and MEC-35 could be masking the difficulty patients had in understanding the chair sit-and-reach test, which we observed during the assessments. Thus, it should be interpreted cautiously. Our results also showed that the associations between muscle strength and the RAVLT-AT and QoL-AD tests differ according to the use of an assistive device for walking. In those participants needing assistance, the regression models demonstrated that a one-unit increase in the arm curl test (one repetition) increased the probability of performing at or above the median on the RAVLT-AT test by 21%, and on the QoL-AD test by 19%. Thereby, the association between upper limb strength and RAVLT-AT test performance is stronger than that found when the whole sample was analyzed (from 16% to 21%). In contrast, in those participants who did not need an assistive device for walking, lower limb muscle strength was the variable associated with the RAVLT-AT test, and time spent performing light physical activity was the variable associated with the QoL-AD test. Specifically, for a one-unit increase in the chair stand test (one repetition), the probability of performing at or above the median on the RAVLT-AT test increased by 35%. Further, for a 10-min/day increase in light physical activity, the probability of being in the group with a QoL-AD test score equal to or higher than the median increased by 13%. We can only speculate regarding these findings, but they could be related to how the participants used their upper or lower limbs to carry out the activities of daily life. For example, those older adults who need to use their upper limbs for walking, for maintaining balance or for getting up from a chair may have undergone adaptations in muscle physiology that could somehow influence the associations. Thereby, we surmise that participants with higher levels of well-being also have a more active lifestyle, and this could explain why they might have higher strength (this assumption could also work in the inverse sense). However, an alternative explanation could be that those individuals with a more active lifestyle could have higher strength and, consequently, might have higher levels of well-being (and vice versa). According to the Goldberg Depression Scale, and as seen for the whole cohort, the regression model in those participants who did not need aids for walking showed that for a 100-step/day increase in physical activity, the probability of being in the group with no risk of depression increased by 27%.
In those participants who needed aids for walking, the regression model showed that being female increased the probability of being in the group with a 50% risk of depression, according to the Goldberg Depression Scale, by 11%. This result agrees with other studies in which gender, specifically being female, has been identified as a risk factor for experiencing depression [44]. Nevertheless, an important limitation of this study when studying depression is the failure to consider other variables such as social support, comorbidity or pharmacology. The current study aimed to focus only on the associations between physical condition and depression risk; thus, these results should be interpreted cautiously. Several molecular and physiological mechanisms have been proposed to link strength and cognition, including insulin-like growth factor, brain-derived neurotrophic factor, myokines, fibroblast growth factor 2, and vascular endothelial growth factor [7,45,46]. These factors are thought to enhance neurogenesis and to play a key role in the positive effects of exercise on cognition, although the mechanisms remain to be fully investigated. There are a few limitations to this study. First, it is limited by its cross-sectional nature, precluding any ability to ascertain temporality. Second, some variables that could also be relevant, such as social support, comorbidity or pharmacology, were not assessed, and thus the results should be interpreted with caution. Third, the results cannot be directly applied to all nursing home residents; we could not ascertain whether these results would apply to those who refused participation or did not fulfill the physical and cognitive criteria. On the other hand, the strength of this study is that physical activity was objectively measured with accelerometers and that the sample size is one of the largest among studies focused on the associations between the physical, cognitive and emotional aspects of the aging processes that characterize nursing home residents.

Conclusions

The present work described the associations between physical, cognitive and emotional performance in a sample of older adults living in LT nursing homes. Specifically, muscle strength and physical activity were factors associated with better performance on the RAVLT-AT, QoL-AD and Goldberg Depression Scale. These associations appeared to differ according to the use of an assistive device for walking. Further investigation is required to understand the physiological mechanisms underlying the links between skeletal muscle physiology, cognition and well-being in this vulnerable population. The results offer further evidence to support the urgent need to implement interventions directed at increasing the strength and physical activity of individuals living in nursing homes, as they might benefit not only physically, but also in terms of cognitive and emotional functioning.
Age and sex disparities in Latin-American adults with gliomas: a systematic review and meta-analysis

This study aimed to identify whether there are ethnic differences in the age and sex distribution of gliomas in the Latino adult population. A systematic review and meta-analysis were conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 recommendations. The databases used were MEDLINE, LILACS, Web of Science, and Scopus. Studies were included if they reported the age and/or sex distribution of gliomas in Latin adults and were published in English or Spanish from January 1st, 1985, to December 1st, 2022. The quality of the studies was assessed using the Newcastle-Ottawa Quality Assessment Scale and the NIH Quality Assessment Tool. From 1096 articles, fifteen studies with information on 6,815 patients were selected for the systematic review, and thirteen were selected for the meta-analysis. The mean ages at diagnosis of glioma and glioblastoma were 50.9 years, 95% CI [47.8-53.9], and 53.33 years, 95% CI [51-55.6], respectively. The male-to-female incidence rate ratio of gliomas was 1.39. Our study found that the mean ages of glioma and glioblastoma were 6 and 10 years lower than those reported in the CBTRUS. Our study suggests disparities in the age and sex distribution of gliomas in Latin America compared to other regions. CRD42021274423.

Introduction

Gliomas are the most common malignant primary central nervous system (CNS) tumors in adults, accounting for approximately 27% of all brain tumors and 81% of malignant tumors [1]. Although they represent less than 1% of all incident cancer cases, they carry considerable morbidity and mortality. Gliomas are classified according to their histologic and molecular features, glioblastoma (GB) being the most common and aggressive subtype [1]. Less aggressive gliomas, like pilocytic astrocytoma, are more common in children [2,3], while more aggressive gliomas, like glioblastomas, are more common in adults [4]. In addition, the incidence of glioblastoma increases significantly with age. The reported mean age at presentation varies by ethnicity, with non-Hispanic whites having a higher mean age than Hispanics [4,5]. Furthermore, sex differences have been well established in many brain tumors, including glioblastoma [6].
Epidemiological studies of gliomas in the United States, Canada, Australia, and Western Europe rely on national or regional registries. However, just a few Latin American countries, like Uruguay and Costa Rica, have national registries. Some countries have population-based cancer registries (PBCR), such as Colombia, Argentina, and Brazil. Most Latin American countries do not meet the criteria for high-quality registries, and therefore the frequency of these tumors is determined from series obtained from second- or third-level care reference centers [7]. Notably, most of these registries classify the tumor type according to the International Classification of Diseases without considering the histology. Consequently, the information regarding the epidemiology of gliomas is predominantly based on the Hispanic population of developed countries. The Hispanic population is therefore frequently assessed as a single ethnic group, even though it is highly diverse [8]. Thus, the age and sex distribution of these tumors in Latin American countries is scantly known, and there is a lack of reliable and updated systematic reviews on this matter. Understanding the epidemiology of gliomas can help identify at-risk populations. This study aimed to determine whether there are ethnic differences in the age and sex distribution of gliomas in the Latino adult population.

Methods

This systematic review and meta-analysis were conducted in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines and recommendations and were registered in the PROSPERO database (CRD42021274423) [9].

Eligibility criteria

Studies were included in the systematic review if they reported the age and/or sex distribution of gliomas in Latino adults and were published in English or Spanish. Studies published from January 1985 to December 2022 were included; before this date, Magnetic Resonance Imaging (MRI), an essential tool in diagnosing brain tumors, was not commonly used [10]. Publications were excluded if they comprised abstracts only, preclinical studies, case reports, studies with fewer than 30 participants, randomized clinical trials, reviews, systematic reviews, meta-analyses, letters, editorials, studies with more than 10% pediatric patients, and studies done in non-Latin American countries. The pediatric population was defined as subjects under 18 years of age. Grey and non-electronic literature was not included. In addition, studies were included in the meta-analysis if they reported a mean or median age at diagnosis with standard deviation and sample size (or when these could be calculated).

Information sources

Studies were identified by searching the electronic databases of MEDLINE (through PubMed), the Latin American and Caribbean Center on Health Sciences Information (LILACS), Web of Science, and Scopus.

Selection and data collection process

Titles and abstracts were screened independently and in a standardized manner by two reviewers (A.S.P. and J.L.O.). Then, these reviewers independently screened the full text of the selected articles and individually retrieved the data. If necessary, a third reviewer (R.V.H.) resolved disagreements. A data extraction table was developed with the following information: first author's name, publication date, country, sample size, study design, diagnostic criteria used, tumor type, mean age, age range, number of male participants, and number of female participants.
Study risk of bias assessment

To assess the quality of the included studies, two reviewers (R.V.H. and C.R.) worked independently and evaluated the articles according to the Newcastle-Ottawa Quality Assessment Scale (NOS) for cohort studies and the National Heart, Lung, and Blood Institute (NIH) Quality Assessment Tool for case series studies (14 questions are used to rate studies as "good," "fair," or "poor" quality). Articles with a very high risk of bias were excluded. NOS scores of 1 to 3 indicate low quality, 4 to 6 moderate quality, and 7 to 9 high quality [11-13].

Effect measures

The outcomes were measured using the mean age in years and the male-to-female incidence rate ratio. If the median was reported, the mean was calculated using the formula reported by Hozo et al. [14]: x ≈ (a + 2m + b) / 4, where x is the mean, a is the smallest value (minimum), b the largest value (maximum), and m the median.

Meta-analysis

The male-to-female incidence rate ratio was calculated using a weighted average of the 15 selected articles. Inverse-variance weighting was used to summarize the effect size from each study. We compared the resulting summary statistics with the median age reported by Ostrom et al. [1,5]. Standard deviations were estimated from quartiles and age ranges. A random-effects model was fitted with the Sidik-Jonkman error variance estimator combined with the Knapp-Hartung standard error adjustment. The SPSS meta-analysis package for raw data was used (version 28) [15,16]. Missing sample sizes and standard deviations were calculated when possible. We included 13 studies reporting the mean or median age at glioma diagnosis. Significance was established using 95% confidence intervals or p < 0.05.

Statistical model

The random-effects model was used for the analysis. The studies included in the analysis were assumed to be a random sample from a universe of potential studies, and the results were used to make inferences about that universe.

Literature research

The search strategy yielded 1096 articles. After removing duplicates, the titles and abstracts of 487 articles were screened. Of these, 66 articles were assessed for eligibility, and 51 were excluded due to lack of data, inclusion of other tumor types, inclusion of pediatric patients, or insufficient sample size. Studies based on the Central Brain Tumor Registry of the United States (CBTRUS) and the Surveillance, Epidemiology, and End Results Program (SEER) were excluded, for they encompass the Hispanic population as a single ethnic group, ignoring geographical, cultural, socioeconomic, and genetic characteristics, which could be a source of bias [8]. Thus, a total of 15 studies were selected for analysis in the systematic review. Two of these studies did not report the standard deviation, so they were not included in the meta-analysis. The study selection process is presented in Fig. 1.

Characteristics of the included studies

The studies were published between 2000 and 2022. The sample size ranged from 40 to 3346, and the age interval ranged from 3 to 96 years. There were six studies from Mexico, four from Brazil, two from Chile, two from Argentina, and one from Colombia (Table 1). All the studies used histopathology to confirm the diagnosis. One study included molecular tests to make the diagnosis. Ten studies used the 2007 WHO classification of CNS tumors. One study used the 2021 WHO classification of CNS tumors. One study used the 2016 WHO classification of CNS tumors. One study used the St. Anne-Mayo grading system.
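A minimal sketch of the effect-size preparation described under Effect measures and Meta-analysis above: the Hozo et al. conversion and the inverse-variance weighting core. The study values are hypothetical placeholders, and the actual analysis used a random-effects model with the Sidik-Jonkman and Knapp-Hartung adjustments in SPSS, which this fixed-effect sketch does not reproduce.

import math

def hozo_mean(a, m, b):
    # Estimate the mean from the minimum (a), median (m) and maximum (b).
    return (a + 2 * m + b) / 4

def pooled_mean(means, ses):
    # Fixed-effect inverse-variance weighted average and its standard error.
    w = [1 / se ** 2 for se in ses]
    mu = sum(wi * m for wi, m in zip(w, means)) / sum(w)
    return mu, math.sqrt(1 / sum(w))

means = [hozo_mean(18, 52, 85), 49.0, 55.5]   # hypothetical study means
ses = [2.1, 1.4, 3.0]                         # hypothetical SD / sqrt(n)
mu, se = pooled_mean(means, ses)
print(f"pooled mean = {mu:.1f}, 95% CI +/- {1.96 * se:.1f}")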
Finally, two studies did not include the classification they used [17,18]. Two studies [19,20] were excluded from the meta-analysis because they did not report the standard deviation.

Risk of bias

The risk of bias in the cohort studies (n = 14) was measured by answering the eight questions of the NOS Assessment Scale; all studies scored ≥ 5 points. Supplementary Table 1 summarizes the NOS Assessment Scale results. The study conducted by Martinez-Muñoz et al. [21] underwent an assessment of bias using the NIH Quality Assessment Tool for case series studies; the results of all nine questions were positive, indicating high quality [11] (Supplementary information).

Age

Gliomas (mean age at the time of diagnosis): The mean age was 50.88 years, with a 95% confidence interval (95% CI) of 47.83 to 53.94 years. The I2 statistic was 97%, and Tau2 was 29.06 (Tau = 5.39). The estimated prediction interval was 38.53 to 63.24 years; the actual effect size in 95% of all comparable populations is expected to lie within this interval. Table 2 describes the studies used for this analysis. Glioblastoma (mean age at the time of diagnosis): The mean age was 53.36 years, with a 95% CI of 51.04 to 55.68 years. The I2 was 91%, and Tau2 was 11.86 (Tau = 3.44), with an estimated prediction interval of 44.96 to 61.76 years covering the actual effect size in 95% of all comparable populations. This analysis was also based on the studies described in Table 2.

Sex

Male-to-female rate ratio: The estimated incidence rate ratio from the analysis was 1.39. Table 3 describes the analyzed studies.

Geographic distribution

The age distributions for different geographic regions of the world and the Americas are presented in Fig. 2. The data used for the Latin American countries were based on our systematic review and meta-analysis. The data for non-Latin American countries were retrieved from national and regional cancer registries and from articles that reported the mean or median age at diagnosis; data for the US and Canada were collected from the CBTRUS and the Brain Tumor Registry of Canada (BTRC). Although other countries have cancer registries, many report data concerning all CNS tumors and do not report specific data for gliomas. Figures were created using MapChart (Fig 2).
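As a consistency check, the reported prediction interval for gliomas can be recovered under the standard random-effects formulation, PI = mu +/- t(k-2) * sqrt(tau^2 + SE^2); using k = 13 included studies and backing the standard error out of the reported 95% CI is an assumption that happens to match the reported values closely.

import math
from scipy import stats

mu, tau, k = 50.88, 5.39, 13                 # reported values; k = studies
se = (53.94 - 47.83) / (2 * 1.96)            # SE backed out of the 95% CI
t = stats.t.ppf(0.975, df=k - 2)             # t quantile with k - 2 df
half = t * math.sqrt(tau ** 2 + se ** 2)
print(f"{mu - half:.2f} to {mu + half:.2f}")  # ~38.53 to 63.23, as reported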
Discussion

After conducting a thorough analysis of the research on gliomas in the Latin American population, through a systematic review and meta-analysis of the available literature, we found that patients with glioblastoma and other gliomas tend to be diagnosed at a younger age compared to other regions of the world. Neuroepithelial tissue tumors are typically diagnosed at a median age of 57 years, according to the CBTRUS [1,5]. The mean age at diagnosis in Latinos found in this study was 6.1 years lower, at 50.89 years, 95% CI (47.8-53.9), suggesting that gliomas occur considerably earlier in Latinos than in non-Hispanic whites. This is crucial because it represents both a health problem and an economic problem, since it affects an economically active population. Wegman-Ostrosky et al. proposed three possible explanations for the younger age at diagnosis: 1) differences in the Latin American population pyramid compared to that of the US, 2) environmental exposure, and 3) genetic factors, such as germline mutations in TP53, MSH2, MLH1, and MSH6 [22]. Moreover, the reported incidence of glioblastoma increases significantly with age, with a median of 63 years. As stated in the CBTRUS, the median age varies by ethnicity, with non-Hispanic whites having a higher median age (64 years) than Hispanics (60 years) [4,5,23]. The mean age at diagnosis of glioblastoma in our systematic review was 53.33 years (95% CI 51.04 to 55.68), a decade earlier than the reported median age in non-Hispanic whites. Nonetheless, the difference we found is considerably larger than that described by the CBTRUS [5,23]. This has been asserted by Walsh et al. [8], in which the Hispanic population of the CBTRUS was divided into two categories: those of Mexican/Central American origin and those of Caribbean origin. The Mexican/Central American group had a lower median age than the Caribbean group (45 years vs. 52 years). The authors hypothesized that increased European admixture could explain this phenomenon. However, subsequent analysis implies that this is just a partial explanation and that other variables should be considered. Sex differences have been well established for many brain tumors, including gliomas [6]. The male-to-female incidence rate ratio (IRR) of gliomas reported by the CBTRUS is 1.47. The Hispanic male-to-female IRR reported by the CBTRUS is lower (1.35) [5]; in our review, it was 1.39. In 2021, the US had a sex ratio of 97.94 males per 100 females, Chile 97.31, Brazil 96.51, Colombia 96.46, and Mexico the lowest, 95.77 males per 100 females [24]. The US has a higher male-to-female population ratio than Latin America, which could explain the higher male-to-female incidence ratio reported by the CBTRUS. Additionally, gliomas are more common in men than in women [25]. Moreover, the wide variation in the male-to-female IRR present in individual studies can be explained by the small sample sizes included. The histological subtypes included in each study can also affect the male-to-female IRR. For instance, glioblastoma is more frequent in men, as opposed to lower-grade gliomas, which are relatively more common in women. The studies with the highest male-to-female IRR [26-28] include glioblastoma exclusively, while the only study in which the male-to-female IRR favors women [20] does not include glioblastoma. Age and sex distribution were not the only disparities we discovered in the glioma literature from Latin America. For instance, epidemiologic information is scarce. Most of the articles we found were from Brazil [20,27,29,30] and Mexico [17,19,22,31-33]. We found some from Colombia [21], Argentina [18,34], and Chile [26,28]. Nonetheless, most Latin American countries have yet to report their epidemiologic information on gliomas. Only one of the articles used the 2021 WHO classification [34]. In addition, even though many of the articles were published after 2016, only one [31] used the 2016 WHO classification of CNS tumors. The rest used the 2007 WHO classification of CNS tumors or the St. Anne-Mayo grading system.
The 2016 and 2021 WHO classifications of CNS tumors incorporate molecular parameters along with histological characteristics [35,36]. Molecular parameters have revolutionized the diagnosis and prognostic accuracy of CNS tumors; however, these tools are available to only some patients in Latin American countries due to their elevated costs [37]. The disparity in the use of molecular tests will most likely cause differences in how gliomas are diagnosed and treated. This will have important repercussions on patients' outcomes and will make adequate cross-country comparisons very difficult. Our study presented a number of limitations. The most noteworthy is that the data we retrieved are based on published articles and not on cancer registries, for such registries are lacking in several Latin American countries. The classification of CNS tumors has changed considerably, and most reviewed manuscripts did not include molecular reports. One study incorporated glioma subtypes not considered in the newest WHO classification (CNS5) [19], and only one study incorporated molecular parameters [34]. The wide variation in sex ratios can be attributed to the small sample sizes and to the preponderance of glioblastoma as the underlying histology in several of the included studies. The quality score for most studies was low because many did not select a non-exposed cohort. We limited publications to English and Spanish, so we might have missed publications written in other languages used in Latin America, i.e., Portuguese or French. Gliomas are rare cancers; therefore, the sample size of most articles is small, which is a possible explanation for the high heterogeneity described in our meta-analysis. Furthermore, the main objectives of these studies were different and not focused on age and sex distribution; this could be another explanation for the high heterogeneity in our meta-analysis. Finally, our study's results propose disparities in the age and sex distribution of gliomas in Latin America compared to other continents. There is a need for prospective registration of patients with gliomas in Latin America to consolidate the epidemiology of these CNS tumors and identify at-risk populations. This study represents the first systematic review and meta-analysis of the age and sex distribution of gliomas in Latin American people.

Fig. 1 PRISMA flow diagram. Illustrates the selection process of the literature search: 1096 articles were identified, 487 articles were screened, 15 articles were included in the review and 13 articles were included in the meta-analysis.

Table 1 Characteristics of the included studies. WHO, World Health Organization; NR, not reported; GBM, glioblastoma multiforme. *These articles were not included in the meta-analysis.

Table 2 Forest plot of the observed mean age at the time of diagnosis of gliomas and glioblastomas, in years.

Table 3 Observed male-to-female incidence rate ratio (IRR).

Fig. 2 Global mean age distribution of gliomas by country. Illustrates the mean or median age of diagnosis of gliomas worldwide.
A mitochondrial genome phylogeny of voles and lemmings (Rodentia: Arvicolinae): Evolutionary and taxonomic implications

Arvicolinae is one of the most impressive placental radiations, with over 150 extant and numerous extinct species that have emerged since the Miocene in the Northern Hemisphere. The phylogeny of Arvicolinae has been studied intensively for several decades using morphological and genetic methods. Here, we sequenced 30 new mitochondrial genomes to better understand the evolutionary relationships among the major tribes and genera within the subfamily. The phylogenetic and molecular dating analyses based on an 11,391 bp concatenated alignment of protein-coding mitochondrial genes confirmed the monophyly of the subfamily. While Bayesian analysis provided high resolution across the entire tree, Maximum Likelihood tree reconstruction showed weak support for the order of divergence and the interrelationships of tribe-level taxa within the most ancient radiation. Both the interrelationships among the tribes Lagurini, Ellobiusini and Arvicolini, comprising the largest radiation, and the position of the genus Dinaromys within it also remained unresolved. For the first time, the complex relationships between genus-level taxa within the species-rich tribe Arvicolini received full resolution. In particular, Lemmiscus was robustly placed as sister to the snow voles Chionomys in the tribe Arvicolini, in contrast to the long-held belief of its affinity with Lagurini. Molecular dating of the origin of Arvicolinae and the early divergences obtained from the mitogenome data were consistent with the fossil record. The mtDNA estimates for the putative ancestors of most genera within Arvicolini appeared to be much older than previously proposed in paleontological studies.

Introduction

The subfamily Arvicolinae Gray, 1821 (Rodentia: Cricetidae), comprising the voles, lemmings and muskrats, is a highly diverse, young and fast-evolving group within the order Rodentia. Currently, representatives of the subfamily occupy most temperate and cold-climate terrestrial habitats across the Northern Hemisphere. The modern global fauna of Arvicolinae consists of more than 150 recent species grouped into 28 genera, with new species constantly being discovered and described [1,2]. The phylogeny of Arvicolinae has been explored using both morphological and genetic methods, allowing comparisons of reconstructions from different datasets for further cross-validation. The application of molecular phylogenetic methods resulted in a series of revisions of the phylogenetic relationships and taxonomic structure of several genera and species [3-12], generally supporting the hypothesis of three successive waves of adaptive radiation in the evolutionary history of the group [7]. According to the paleontological record, the ancestors of Arvicolinae were specialised "microtoid cricetids", sharing the dental adaptation of increased hypsodonty. Rich fossil records of these lineages in late Miocene sites of Eurasia, and their absence in North America, indicate that the origin and primary evolutionary centre of arvicolids was situated in the northern parts of Asia [13,14]. Arvicolinae sensu stricto presumably emerged during the rapid species diversification in the late Miocene-early Pliocene, when the global climate became cooler and drier and woodland habitats were largely replaced by open grassy landscapes.
The members of this first radiation were the muskrats (Ondatrini), lemmings (Lemmini and Dicrostonychini) and long-clawed mole voles (Prometheomyini). The second radiation wave is characterized by the divergence of the ancestors of modern Clethrionomyini, the compact tribe of red-backed and mountain voles [15], currently abundant in boreal forests and highlands. The third radiation wave resulted in the formation of the steppe lemmings (Lagurini), mole voles (Ellobiusini) and the most species-rich group, Arvicolini [7]. The branching order within both the first and the last radiation waves has remained unresolved, since all attempts to untangle the complex phylogenetic relationships within the subfamily were made with the use of only a few mitochondrial or nuclear markers, or included an insufficient number of taxa in the analysis [3,5,7,9,10,12,16-21]. Mitochondrial DNA has many advantages for phylogenetic studies: it possesses strict maternal inheritance [38], a high mutation rate (5-10 times that of nuclear DNA) due to a limited repair system [39], and a simple, conserved structure. Owing to the high number of mitochondria per cell, an almost complete mitochondrial genome may be sequenced even from old historical samples from museum collections, so that taxa that are currently hard to obtain may be included in the dataset. Complete mitochondrial genome datasets provide a robust phylogeny with highly resolved trees having strong branch support [40]. However, mitochondrial genomes can conflict with a phylogeny estimated from the nuclear genome, owing to the replacement of the organelle genomes of one species or population with those of another, mediated by hybridization and introgression [41-44]. The use of mitochondrial genome datasets may also provide evidence of adaptive radiations, as signatures of selection have been detected during the reconstruction of mitochondrial genome phylogenies for a number of vertebrate taxa, including deep-sea fishes [45,46], marine turtles [47,48], and mammals [49-54]. Yet mitochondrial genomes remain an important tool both for resolving phylogenies and for species identification by barcoding using individual mitochondrial genes or complete genomes. Consecutive sequencing of new mitochondrial genomes has recently provided new insights into many animal groups [55-59], including several mammal phylogenies, e.g. fruit bats [58], carnivores [41], wood mice [60] and zokors [61]. Though molecular studies during the last decades have considerably extended and refined our knowledge of the pattern and timescale of Arvicolinae phylogeny, important issues remain to be elucidated. In this study, we aimed to estimate the phylogeny of voles and lemmings using complete mitogenomes generated by high-throughput sequencing. By significantly increasing the number of newly sequenced mitogenomes representing the major tribes of voles and lemmings, we implemented phylogenetic and molecular dating analyses on a dataset consisting of almost all living genera within the subfamily.
The following questions were specifically addressed during the study: (1) the order of divergence and the interrelationships of taxa within the first, most ancient radiation; (2) the interrelationships of the three tribe-level taxa Lagurini, Ellobiusini and Arvicolini; (3) the relative phylogenetic placement of the genera Dinaromys Kretzoi, 1955 and Lemmiscus Thomas, 1912; (4) the tangled interrelationships of genera and subgenera in the most speciose tribe, Arvicolini; (5) the position of Agricola agrestis Linnaeus, 1761 and Iberomys cabrerae Thomas, 1906; and (6) the timing of arvicoline divergences.

Ethics statement

Our study was performed using the tissue and voucher collection of the Zoological Institute of the Russian Academy of Sciences, and the research did not require field work or live animal experimentation. DNA samples for this study were obtained following the guidelines of the Animal Care and Research Committee of the Zoological Institute RAS, Saint Petersburg, and in accordance with the respective national and international animal care and use policies (Permission N 3-7/15-06-2021; see S1 Table). Tissues of the specimens used in the study are publicly deposited and accessible by others in a permanent repository of the Zoological Institute of the Russian Academy of Sciences, laboratory of molecular systematics.

Taxonomic sampling

Fifty-eight species of Arvicolinae, belonging to 27 genera and all tribe-level taxa, as well as six outgroup taxa, were used in this study. Complete mitochondrial genomes for 30 Arvicolinae species were sequenced in the current study, including 15 species belonging to the tribe Arvicolini, one to Dicrostonychini, two to Lagurini, three to Lemmini, six to Clethrionomyini, and three crucial species without a stable taxonomic position: Prometheomys schaposchnikowi Satunin, 1901 (Prometheomyini), Dinaromys bogdanovi Martino, 1922 and Lemmiscus curtatus Cope, 1868. For 28 species belonging to the tribes Arvicolini, Ellobiusini, Clethrionomyini, Dicrostonychini and Ondatrini, sequences were available in the NCBI nucleotide database (https://www.ncbi.nlm.nih.gov/nuccore). The detailed information, GenBank accession numbers, and voucher IDs for the new sequences are given in S1 Table.

DNA isolation, NGS library preparation and sequencing

Muscle tissue samples of fresh specimens were collected between 1996 and 2019 and stored in 96% ethanol at -20°C in the tissue and DNA collection of the Group of Molecular Systematics of Mammals (Zoological Institute RAS). A historic specimen of Lemmiscus curtatus (sampled in 1927) was obtained from the collection of the laboratory of theriology (Zoological Institute RAS); see S1 Table for details. Homogenization of tissues was performed using a Qiagen TissueLyser LT (Qiagen). For most samples, genomic DNA was extracted using the Diatom DNA Prep 200 kit (Isogen, Russia), except for the L. curtatus museum specimen. To reduce potential contamination, all manipulations with L. curtatus were carried out using a PCR workstation (LAMSYSTEMS CC, Miass, Russia) in a separate laboratory room isolated from post-PCR facilities and used exclusively for studies of historic samples from the collection of the Zoological Institute. All working surfaces, instruments and plastics were sterilized with UV light and chloramine-T. DNA from the museum skin sample (a 2 × 2 mm piece from the inner side of the lip, dissected with a sterilized surgical blade) was isolated using the phenol-chloroform extraction method according to a standard protocol [64].
Ultrasound fragmentation of the total genomic DNA was then performed using a Covaris S220 focused ultrasonicator (Covaris). The fragmented DNA was purified and concentrated using the AMPure XP paramagnetic bead chemistry (Beckman-Coulter) with the standard workflow. DNA concentration was evaluated using a Qubit fluorometer (Thermo Fisher Scientific). NGS libraries were prepared using the NEBNext Ultra II DNA Library Prep Kit for Illumina (New England Biolabs). The resulting PCR products were purified and concentrated using AMPure XP beads (Beckman-Coulter). The concentration of the samples was measured using a Qubit fluorometer, and quality control of the libraries was performed using a Bioanalyzer 2100 instrument and the DNA High Sensitivity kit (Agilent). Sequencing was performed on an Illumina HiSeq 4000 system, resulting in paired-end reads of 75 bp. DNA quality was checked with Qubit, and the final length distribution and adapter content of the libraries were checked using the Bioanalyzer 2100 (Agilent). DNA extraction (except for the museum specimen of L. curtatus), library preparation and sequencing were performed using the resources of the Skoltech Genomics Core Facility (https://www.skoltech.ru/research/en/shared-resources/gcf-2/). Standard precautions were taken to avoid contamination, following Illumina's recommendations at all stages of library preparation and sequencing [65].

Read processing, mitogenome assembly and annotation

The quality of the raw reads was evaluated using FastQC [66], and parts with a quality score below 20 were trimmed using Trimmomatic-0.32 [67]. Bowtie 2.3.5.1 [68] was used to filter out contaminating reads, with complete mitochondrial genomes of other Arvicolinae used as reference sequences. The same was done for the museum specimen to enrich the reads for mitochondrial DNA. Nucleotide misincorporation patterns, which are often observed in studies of ancient or old museum DNA as a result of post-mortem damage, were assessed in the L. curtatus reads using mapDamage 2.0 [69]. The complete mitochondrial genome was assembled using plasmidSPAdes [70] with default settings. The resulting contigs were filtered by length, and those most similar in size to mitochondrial DNA were selected (about 16 kb for mammals). Raw reads of L. curtatus were mapped onto the mitochondrial genome of Mynomes ochrogaster Wagner, 1842 with manual settings using Geneious Prime 2019.1 (Biomatters Ltd.) because of the fragmentation of the DNA extracted from the museum sample (S1 Fig). The contigs were annotated using the MITOS web server [71] (http://mitos2.bioinf.uni-leipzig.de/index.py), with default settings and the vertebrate mitochondrial genetic code. Gene boundaries were checked and refined by alignment against 28 published mitogenome sequences of Arvicolinae (see details in S1 Table). All positions of low quality or low coverage, as well as fragments that differed greatly from the reference Arvicolinae mitochondrial genomes, were replaced by N manually. The assembled sequences of protein-coding genes (PCGs) were checked manually for internal stop codons. Raw read data are available from the SRA database (PRJNA590630), and all assembled and annotated mitogenomes were deposited in NCBI GenBank (S1 Table).
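A minimal sketch of the length-based contig selection step mentioned above; the FASTA parser, the file name and the exact length window are illustrative assumptions rather than the pipeline's actual settings.

def read_fasta(path):
    # Minimal FASTA reader returning {name: sequence}.
    seqs, name, chunks = {}, None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if name is not None:
                    seqs[name] = "".join(chunks)
                name, chunks = line[1:], []
            elif line:
                chunks.append(line)
    if name is not None:
        seqs[name] = "".join(chunks)
    return seqs

contigs = read_fasta("spades_contigs.fasta")  # hypothetical file name
# Keep contigs close to the ~16 kb expected for a mammalian mitogenome.
mito_like = {n: s for n, s in contigs.items() if 15_000 <= len(s) <= 18_000}
print(sorted((len(s), n) for n, s in mito_like.items()))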
In several studies it has been convincingly shown that protein-coding sequences may have strong resolving power for inferring phylogenetic interrelationships, and that divergence time estimates derived from PCGs may be quite accurate [58,75,76]. We used this approach here; the complete mitogenomes will serve as the starting point for further analyses. For the subsequent analyses, a concatenated alignment of the 13 PCGs was produced using MAFFT version 7.222 [77]. The third codon position has previously been shown to bias phylogenetic reconstructions [78], and a phylogeny based on a smaller Arvicolinae dataset turned out to be very poorly resolved, the third codon position being the exception. We therefore masked transitions at the 3rd codon position by RY-coding (R for purines and Y for pyrimidines), as described in Abramson et al. [74]. Thus, two datasets were subsequently analyzed: the total alignment of the 13 PCGs, with all three codon positions considered (11,391 bp in length), and the RY-coded alignment, with transitions at the third codon position masked.

Analysis of base composition

The base composition was calculated in Geneious Prime 2019.1 (Biomatters Ltd.). The strand bias in nucleotide composition was studied by calculating the relative frequencies of the C and G nucleotides (CG3 skew = [C - G]/[C + G]) [58,79,80]. Both analyses were calculated using full-length mitogenomes. The PCG alignment of 64 mitochondrial genomes was used to calculate the relative frequencies of the four bases (A, C, G and T) at each of the three codon positions in MEGA X [81]. The 12 variables, each representing a base frequency at the first, second or third position, were then summarized by a Principal Component Analysis (PCA) using PAST v.4.04 [82].

Saturation tests

The presence of phylogenetic signal was assessed with a substitution saturation analysis using the Xia test [83] in the DAMBE 7.2.1 software [84] for the whole alignment of the PCG dataset and for the 13 genes separately, following the procedure described by Xia & Lemey [85]: (a) with the 1st and 2nd codon positions considered and the 3rd position masked from the alignment, and (b) with only the 3rd codon position included in the analysis. The analysis is based on the index of substitution saturation (Iss); Iss.c is the critical value at which the sequences begin to fail to recover the true tree. Once Iss.c is known for a set of sequences, the Iss value calculated from the sequences can be compared against it. If the Iss value exceeds Iss.c, the sequence dataset suffers from substitution saturation and cannot be used for further phylogenetic reconstruction. The proportion of invariant sites was specified for the tests considering the 1st and 2nd codon positions. The analysis was performed on the complete alignment with all sites considered. Additional saturation analysis for each PCG was performed using the R packages seqinr [86] and ape [87]: p-distances were plotted against K81 distances for transitions and transversions at each codon position.

Phylogenetic analyses

We used PartitionFinder 2.1.1 [88], applying AICc and the "greedy" algorithm, in which the analysis is based on a priori features of the alignment, to select the optimal partitioning scheme for each dataset. Our analysis started with partitioning by codon position within the PCG fragments, each treated as a unique partition. For the complete 13-PCG alignment, the GTR+I+G model was suggested for almost all partitions except the ND6 3rd codon position, for which the TRN+I+G model was selected. For the alignment with the RY-coded 3rd codon position, two partitions were suggested, 1st+2nd and 3rd codon positions, with GTR+I+G and GTR+G models, respectively (S2 Table).
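A minimal sketch of the RY-coding step described above, assuming an in-frame coding alignment; gaps and ambiguity codes are left untouched, and the function name is illustrative.

RY = {"A": "R", "G": "R", "C": "Y", "T": "Y"}

def ry_code_third_positions(seq):
    # Recode every 3rd codon position of an in-frame coding sequence so
    # that transitions (A<->G, C<->T) become invisible at those sites.
    out = list(seq.upper())
    for i in range(2, len(out), 3):       # 0-based index 2 = 3rd position
        out[i] = RY.get(out[i], out[i])   # gaps/ambiguities left as-is
    return "".join(out)

print(ry_code_third_positions("ATGACTGGA"))  # -> ATRACYGGR

Because R and Y are then identical within each purine or pyrimidine pair, only transversions remain informative at the recoded sites, which is exactly what masking transitions is meant to achieve.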
Maximum Likelihood (ML) analysis was performed using the IQ-TREE web server [89] with 10,000 ultrafast bootstrap replicates [90]. Bayesian Inference (BI) analysis was performed in MrBayes 3.2.6 [91] with the following settings: nst = mixed, rates = invgamma, and partitions as suggested by the PartitionFinder results (S2 Table). Each analysis started with random trees and comprised two independent runs with four independent Markov Chain Monte Carlo (MCMC) chains for 10 million generations, sampling every 1,000th generation; the standard deviations of split frequencies were below 0.01, the potential scale reduction factors were equal to 1.0, and stationarity was examined in Tracer v1.7 [92]. A consensus tree was constructed from the trees sampled after a 25% burn-in. We also conducted ML analysis for each PCG separately, with partitions by codon position and models selected automatically in IQ-TREE. The Hyperacrius fertilis True, 1894 sequence was excluded from the ND4 alignment since this gene was highly fragmented [74]. The mitogenome of Craseomys rufocanus Sundevall, 1846 (accessed from GenBank) completely lacked the ND6 sequence (S2 Table), so this species was excluded from the analysis of this gene.

Divergence dating

Divergence times were estimated on the CIPRES Science Gateway [93] with the Bayesian approach implemented in BEAST v.2.6.2 [94], using both the complete PCG dataset and the one in which the transitions at the third codon position were masked by RY-coding. Datasets were partitioned according to the recommendations of PartitionFinder. All site model parameters were chosen for separate partitions with the corrected Akaike information criterion (AICc) in jModelTest 2.1.1 [95]. Eight fossil calibrations were used (S3 Table). Lognormal prior distributions were applied to all the calibrations, with offset values and 95% HPD intervals based on first appearance data (FAD) and stratigraphic sampling downloaded from the Paleobiology Database on 01.12.2020 using the parameters "Taxon = fossil species, Timescale = FAD" (S3 Table). To check the stability of the results, we repeated the analysis with each of the eight calibrations excluded in turn (S4 Table). BEAST analyses under the birth-death process used a relaxed lognormal clock model and the program's default prior distributions of model parameters. Each analysis was run for 100 million generations and sampled every 10,000 generations. The convergence of two independent runs was examined using Tracer v1.7 [92], and the runs were combined using LogCombiner, discarding the first 25% as burn-in. Trees were then summarized with TreeAnnotator using the maximum clade credibility tree option and fixing node heights as mean heights. Divergence time bars were obtained automatically in FigTree v1.4.3 (http://tree.bio.ed.ac.uk/software/figtree/) from the output, using the 95% highest posterior density (HPD) of the ages for each node. A comparison with an empty "prior only" run showed that the data were informative for estimating the divergence dates.

Mitochondrial genome assembly and annotation

We sequenced, assembled and annotated mitochondrial genomes for the 30 new taxa of Arvicolinae. The mapDamage analysis of the raw reads of Lemmiscus curtatus (S2 Fig) showed low variation in deamination misincorporation values: C-to-T misincorporations varied from 10 to 15% and G-to-A from 10 to 12%, comparable to the results of similar studies [96].
Since the relative level of observed misincorporations was not significantly different from the other substitution variants, the mitogenome of L. curtatus was assembled using the same pipeline as for the rest of the taxa. The assembled mitogenomes, circular double-stranded DNA with the same organization as in other mammals, contained 13 PCGs, 22 transfer RNAs (tRNAs), two ribosomal RNAs (rRNAs), and a non-coding region corresponding to the control region (D-loop). Nine genes (ND6 and eight tRNAs) were oriented in the reverse direction, whereas the others were transcribed in the forward direction. All the assembled mitogenomes contained all the genes listed above, but some species demonstrated incomplete gene sequences (see S5 Table for details). The mitochondrial genome sequences were deposited in GenBank under the accession numbers indicated in S1 Table. In the subsequent analyses, the PCG dataset containing 11,391 bp was used.

Variation in base composition

Comparison of the base composition calculated using the alignment of full-length mitogenome sequences of Arvicolinae showed that the mitochondrial genomes of taxa from the tribes Clethrionomyini (28.36% C) and Ellobiusini (28.7% C) have the highest GC content. Arvicolini, Dicrostonychini and Lemmini had slightly smaller values: 27.69, 28.03 and 27.77% C, respectively. Lagurini were found to have the most AT-skewed base composition of mitogenomes, with 31.20 and 31.30% adenine (S1 Table). Lagurini and Arvicolini also demonstrated the highest GC-skew values (-0.32 in both cases). Ellobiusini and Clethrionomyini occupied an intermediate position, with -0.33 and -0.34, respectively. Dicrostonychini and Lemmini, with an equal value of -0.35, had the smallest GC-skew values. The base composition (frequency of the nucleotides A, C, G, and T) was further analyzed at the three codon positions in the concatenated alignment of PCGs for each species separately (S1 Table). The 12 variables measured for 64 taxa were summarized by a PCA; the first two components contributed 73.7% and 18.8% of the total variance, respectively (Fig 1). Most of the observed variation was related to the base composition at the third codon position (S6 Table). The first component (PC1) demonstrated a high positive correlation (0.98) with the percentage of C3 (cytosine in the third position) and a high negative correlation (-0.93) with T3 (thymine in the third position). The second component (PC2) correlated positively with G3 (0.67) and negatively with A3 (-0.88). Most of the Arvicolinae formed a compact group on the PCA plot. Three species of the genus Cricetulus, C. longicaudatus, C. migratorius and C. griseus, clustered separately from the main group of Arvicolinae along PC1. These species demonstrated an approximately 10% lower percentage of C3 and a 10% higher percentage of T3 (S6 Table). The other outgroup taxa, Urocricetus kamensis and Akodon montensis, grouped with Arvicolinae, and A. montensis showed a base composition similar to Hyperacrius fertilis. Among the Arvicolinae, the most dissimilar base composition was observed in Mynomes longicaudus Merriam, 1888, Chionomys gud Satunin, 1909 and Arvicola amphibius Linnaeus, 1758, which showed a higher-than-average percentage of T3 and a lower-than-average percentage of C3. The mitochondrial genome of Ondatra zibethicus Linnaeus, 1766 was characterized by the highest percentage of adenine in the third position (45.7%) compared to other Arvicolinae.
Ellobius lutescens Thomas, 1897 demonstrated the highest percentage of C3 (37.1%) in the complete PCG dataset (Fig 1). Since most of the variation was observed at the third codon position (S6 Table), the complete dataset was subsequently used in the following phylogenetic reconstructions.

Substitution saturation analysis

Substitution saturation decreases the phylogenetic information contained in the sequences and plagues phylogenetic analysis involving deep branches. According to the analysis implemented in the DAMBE software (S7 Table), the observed Iss saturation index was significantly (P < 0.0001) lower than the critical Iss.c value for both the symmetrical and asymmetrical topology tests, indicating a lack of saturation in the studied Arvicolinae dataset. The saturation plots for separate genes (S1 File) show the same pattern of negligible saturation. As a result, none of the 13 PCGs showed significant saturation at the 1st and 2nd codon positions, and all are suitable for phylogenetic inference. CYTB, ND1 and ND6 showed the same for the 3rd codon position, in contrast with ND2, ND3, ND4 and ND4L. For the other genes there was no significant saturation even for the 3rd codon position, considering a symmetrical topology for more than 32 OTUs (operational taxonomic units; S7 Table).

Time-calibrated mitochondrial genome phylogeny of Arvicolinae

The maximum-likelihood (ML) and Bayesian inference (BI) trees reconstructed using the complete and RY-coded alignments of PCGs had similar topologies (Fig 2, S1 File). Overall, the ML analysis demonstrated lower node supports compared to the BI analysis. In total, 70% of the nodes were highly supported by both ML and BI, with Bayesian probabilities BP > 0.95 and ML bootstrap support BS > 95 (Fig 2, nodes with a black dot). The monophyly of the subfamily Arvicolinae was strongly supported by the BI and ML analyses. However, several nodes, predominantly the internal nodes representing deeper phylogenies, which were highly supported by the Bayesian analysis, did not receive high BS values. The divergence time between Arvicolinae and Cricetinae was estimated as Late Miocene, ca. 11.31 (9.48-13.3) / 10.7 (8.42-13.31) Ma, based on the complete and RY-coded alignments, respectively (Table 1). When each of the eight calibrations was discarded in turn, the results generally remained stable (S4 Table), with the exception of the analysis excluding the calibration of the MRCA for the subfamily Arvicolinae (node "A", Fig 2); this run yielded younger dates for most of the nodes (S4 Table). The earliest radiation of the proper arvicolines (tribes Lemmini, Prometheomyini, Ondatrini and Dicrostonychini) dates back to the Late Miocene, with a mean at 7.36 (7.04-7.78) / 7.33 (7.05-7.73) Ma. Despite the high support for the nodes marking Lemmini and Dicrostonychini, the basal part of the phylogenetic tree remains unresolved and represents a polytomy, with several nodes not receiving significant BI and ML support. The analysis based on the PCG dataset in which the third codon position was not masked with RY-coding indicated high Bayesian support for node C, combining Ondatrini and Dicrostonychini (Fig 2). The time to the MRCA of node C is about 5 Ma. The tribe Clethrionomyini, representing the second radiation of Arvicolinae, received high BI and ML support, and the nodes within the clade were also highly supported. The MRCA of Clethrionomyini dates back to 4.02 (3.33-4.72) / 4.46 (3.35-5.64) Ma.
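The node ages quoted above and below are posterior means with 95% highest posterior density (HPD) limits taken from the BEAST output. As a minimal illustration of what such an interval is — the shortest interval containing 95% of the post-burn-in MCMC samples, as summarized by TreeAnnotator and Tracer — the following Python/NumPy sketch computes it for a hypothetical vector of sampled node ages; the simulated samples are purely illustrative, not posterior output from this study.

import numpy as np

def hpd_interval(samples, mass=0.95):
    """Shortest interval containing `mass` of the samples (the HPD)."""
    x = np.sort(np.asarray(samples))
    n = x.size
    k = int(np.ceil(mass * n))           # number of samples inside the interval
    widths = x[k - 1:] - x[:n - k + 1]   # widths of all candidate windows
    i = int(np.argmin(widths))           # narrowest window wins
    return float(x[i]), float(x[i + k - 1])

# Illustrative use with simulated "node age" samples (in Ma):
rng = np.random.default_rng(1)
ages = rng.lognormal(mean=np.log(7.3), sigma=0.08, size=20_000)
lo, hi = hpd_interval(ages)
print(f"mean {ages.mean():.2f} Ma, 95% HPD [{lo:.2f}, {hi:.2f}] Ma")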
The cluster containing the tribes Ellobiusini, Lagurini and Arvicolini and the genera Dinaromys, Arvicola Lacépède, 1799 and Hyperacrius Miller, 1896, i.e. the third radiation of Arvicolinae, was robustly supported by BI using both alignments and received reliable support from ML only with the RY-masked alignment (Fig 2). Within this cluster, the nodes marking the tribes were highly supported by BI and ML. At the level of terminal branches within this cluster, Dinaromys bogdanovi, Hyperacrius fertilis and Arvicola amphibius were the only taxa to lack a certain phylogenetic position. D. bogdanovi grouped with Ellobiusini, with high BI support but no ML support. The water vole, Arvicola amphibius, clustered with Lagurini (high BP and no BS support), thus rendering Arvicolini paraphyletic. The sagebrush vole Lemmiscus curtatus was sister to the snow voles, Chionomys gud and C. nivalis Martins, 1842, with robust support obtained in all analyses. The cluster of Chionomys Miller, 1908 and Lemmiscus was the earliest derivative in the highly supported group uniting all known vole genera of the Arvicolini tribe. Arvicolini sensu stricto (excluding Arvicola) was fully resolved: both node H, marking the whole tribe, and all nodes within the tribe received robust support in the ML and BI analyses. The estimated time of the largest radiation event within the subfamily, the TMRCA for the trichotomy Arvicolini-Ellobiusini-Lagurini, dates back to 6.2 (5.65-6.76) / 6.11 (5.17-6.92) Ma. All the following divergence events within this radiation took place very close to each other according to the obtained estimates, and the 95% HPDs of the branches leading to the MRCAs of the existing tribes, including the date estimate for the MRCA of Ellobiusini (Table 1), are highly overlapping. The estimate for the earliest split within the Arvicolini tribe radiation (not including Arvicola and Hyperacrius), involving all recent genera, is about 4.9 (4.33-5.47) / 5.02 (4.12-5.89) Ma, which coincides with the onset of the Pliocene, whereas the major part of the recent genera, excluding the early-diverging Chionomys, Lemmiscus and Proedromys Thomas, 1911, appear according to the obtained estimates either in the middle of the Pliocene or close to the boundary of the Late Pliocene-Early Pleistocene (Fig 2, Table 1).

Gene trees

The topology of the Arvicolinae phylogeny varied between the 13 PCG trees (S1 File). While the tribe-level nodes received good support in most of the trees, the phylogenetic relationships between the taxa remained unresolved. The ATP8- and COX2-based trees lacked resolution at both deep and shallow nodes and therefore resulted in a complete polytomy. The only node in the ATP8-based tree that retained its integrity with high support was the tribe Clethrionomyini. Notably, this Clethrionomyini node had high support and was consistent in the majority of the gene trees, except for ND3. The node containing the taxa of the Arvicolini tribe (excluding Arvicola amphibius) received high support in the COX1, ATP6, ND3, ND5, ND6, ND1 and CYTB gene trees. The ND4 gene tree yielded a highly supported node grouping the semiaquatic species, Ondatra zibethicus and Arvicola amphibius; this result was not supported by any other gene tree or by the mitogenome BI and ML phylogenetic reconstructions (Fig 2). The positions of these two species, as well as of Dinaromys bogdanovi, were very unstable across the individual gene trees: these taxa often occupied different positions and clustered with other species randomly.
Remarkably, even when the tribal support and content were consistent across the various trees and with the mitogenome tree, the interrelationships between the tribes in the individual gene trees were unresolved. The lack of resolution, especially at the deep nodes, may be related to the high saturation demonstrated by some genes, particularly ATP6, ATP8, ND1, ND2, ND3, ND5 and ND4 (maximum saturation, S7 Table), since the phylogenetic signal disappears when divergence is over 10%.

Discussion

Our phylogenetic reconstruction of the subfamily Arvicolinae is based on a PCG dataset of mitochondrial genome sequences of 58 species of voles and lemmings, with an outgroup of six hamsters. The dataset included 30 original sequences, and for 10 genera mitogenomic sequencing was performed for the first time. To date, this is the most comprehensive mitogenome dataset aimed at a revision of the Arvicolinae phylogeny, considering almost all recent genera represented by nominal species. While the monophyletic origin of Arvicolinae has always been considered indisputable, previous attempts to resolve the phylogenetic relationships within the subfamily using either morphological analysis or combinations of mitochondrial and nuclear markers yielded several hard polytomies [16] or conflicting topologies [3,5-7,9,12,21,97,98]. The more taxa and markers were considered in the analysis, the better the resolution obtained for the nodes marking the major tribes within Arvicolinae [7,20,21]. However, the diversification events within the major radiation waves remained unresolved. The phylogenetic position of the genera Prometheomys Satunin, 1901, Arvicola, Ondatra Link, 1795 and Dinaromys in reconstructions performed with mitochondrial and nuclear markers was controversial [3,5,7,20,21], and the genera Hyperacrius and Lemmiscus received little attention; their phylogenetic position remained arguable.

Mismatches between the Bayesian and Maximum Likelihood support for the tribes and the three waves of radiation within Arvicolinae

The topology of the mitochondrial genome tree of Arvicolinae obtained in this study was, in general, in good agreement with previous large-scale phylogenetic reconstructions of the group based on mitochondrial and nuclear genes [7,20,21,97,98]. Using the concatenated alignment of 13 PCGs, the present reconstruction resulted in high support for the nodes marking the tribes in both the Bayesian and Maximum Likelihood analyses. While the Bayesian analysis also provided high posterior probability support for the basal nodes, the ML approach failed to recover the relationships and order of divergence between the basal branches. Previously, these deep divergences were identified as three waves of rapid radiation [7]. The first radiation within the subfamily is represented by four tribes: Lemmini, Prometheomyini, Dicrostonychini (including Phenacomys Merriam, 1889) and Ondatrini. The order of divergence and the phylogenetic relationships between these ancient tribes remain unresolved using mitochondrial genome data. In particular, the absence of resolution is highlighted by the unstable position of Prometheomys or Lemmini as the basal lineage. The basal position of Lemmini and Prometheomys, with Ondatrini/Dicrostonychini postdating both, was also obtained earlier [20] on a very large set of Arvicolinae taxa analyzed using several mitochondrial and nuclear markers. It is important to underline that in each case this clustering has no support.
In a number of other reconstructions [3,6,21,23], Prometheomys is the earliest split within the subfamily. To check whether this inconsistency may be related to nucleotide composition bias, we withdrew from the outgroup the three Cricetulus species (C. longicaudatus, C. migratorius and C. griseus) that demonstrated an approximately 10% lower percentage of C3 and a 10% higher percentage of T3 than most Arvicolinae, left in the outgroup only Urocricetus kamensis and Akodon montensis, which showed a base composition similar to the species of Arvicolinae, and carried out four variants of the analysis: BI and ML, each on the full and the RY-coded alignments. Only Bayesian inference on the full alignment resulted in the basal position of Prometheomys, but with minimal support (S3 Fig). With both BI and ML on the RY-masked alignment, the same topology as for the full set of taxa (Fig 2), with basal Lemmini (but again without support), was obtained. The second radiation is represented exclusively by the large monophyletic tribe Clethrionomyini (Fig 2). These are predominantly forest-dwelling taxa that originated in Eurasia, with only a few species penetrating North America during the Pleistocene. According to our data, the monophyly of Clethrionomyini was supported in analyses of both the concatenated alignment and the individual mitochondrial genes, except for the short ND4L (S1 File). With all the nodes receiving high BP and ML support, the internal topology of branches within Clethrionomyini obtained in this study was similar to previous reconstructions of this tribe based on one mitochondrial and three nuclear loci [11]. The third radiation comprises three tribes: Arvicolini, Ellobiusini and Lagurini. While these tribes, as well as most genera within them, received strong support, and our reconstruction demonstrates that all the taxa of the third radiation share the same putative common ancestor, their interrelationships within this large clade were not recovered, actually representing a polytomy. According to our data, the genera Dinaromys, Hyperacrius and Lemmiscus, whose assignment to certain tribes has previously been doubtful (Fig 2), also belong to the third radiation. Their taxonomic position, as well as the position of the genus Arvicola, which suddenly appeared to be paraphyletic to the other Arvicolini, is discussed below.

Phylogenetic relationships of the genus-level taxa. Monotypic and low-diverse genera of uncertain position

The subfamily Arvicolinae includes several seriously understudied genera of unclear taxonomic position. For these genera, molecular data include either only mitochondrial CYTB sequences [5,16-18] or several additional mitochondrial and nuclear markers [3,7,9-11,21]. These are often orphan genera, i.e. represented by a single extant species. Considering such taxa is of remarkable importance for the reconstruction of high-level phylogenies, but their position on a tree can often be contradictory due to long-branch attraction [99,100]. While the resolving power of a phylogenetic reconstruction increases with the number of genes in the analysis, several studies of rapid radiations based on organellar genomes pointed out the effect of long-branch attraction [56,101,102, and references therein]. Our study, among others, considers five genera of unclear position within either the first (Prometheomys and Ondatra) or the third (Dinaromys, Hyperacrius and Lemmiscus) radiation wave.
The Balkan vole, Dinaromys bogdanovi, an endemic species of the Balkan Peninsula, was attributed to either Ondatrini [103] or Prometheomyini [104], but conventionally to Clethrionomyini [62,105,106]. Morphologically, Dinaromys is closest to the extinct Pliocene genus Pliomys Méhely, 1914 [62,107-109], which distinguishes it from the rest of the extant vole taxa. The genus Pliomys, in turn, has generally been considered the ancestral form for the whole Clethrionomyini tribe. That was the main reason [62] to distinguish a separate subtribe, Pliomyi, within the latter, consisting of two genera: the extant Dinaromys and the extinct Pliomys. Until recently, CYTB was the only studied locus for Dinaromys, and it was placed as sister to Prometheomys, another monotypic genus, with both close to the Ellobiusini-Arvicolini-Lagurini group [5]. This grouping was strongly rejected by subsequent attempts to build a molecular phylogeny of Arvicolinae, which showed the position of Prometheomys as the earliest derivative within the subfamily [3,7,9,21] and Dinaromys within the clade uniting Ellobiusini, Lagurini and Arvicola, i.e. the third radiation [9,20,21]. According to the mitochondrial genome phylogeny (Fig 2), Dinaromys does not share a putative MRCA with the monophyletic Clethrionomyini tribe and most likely belongs to the third radiation, yet the exact position of this genus within this large group remains unclear. The analysis of a partial mitochondrial CYTB sequence [11] demonstrated that the genus Hyperacrius does not seem to belong to the Clethrionomyini tribe. By analysing a set of mitogenomes of Clethrionomyini and Arvicolini, it was recently suggested that Hyperacrius has a basal position within the tribe Arvicolini [74]. Here, using broader taxonomic sampling, we confirm these previous findings, showing that Hyperacrius predates the diversification of all the main genera of Arvicolini. Reconstructions performed using individual mitochondrial genes often placed the genus Ondatra as sister to Arvicola [5,16] or to the Clethrionomyini tribe [97], with low support. In all studies involving varying sets of nuclear genes, Ondatra was among the early diverging lineages [7] and sister to Neofiber True, 1884 when that genus was included in the analysis [21,97]. Such a position corresponds better to conventional taxonomy and paleontological data. Our results placed Ondatra sister to the Dicrostonychini tribe, albeit with low support (except for BI with transitions in the 3rd position included). A similar topology was observed by Lv et al. [20]. Lemmiscus curtatus, the sagebrush vole, is the only extant representative of the genus; it inhabits semi-arid prairies in western North America. For a long time, Lemmiscus was considered closely related to the steppe voles Lagurini of the Old World and was even treated as a subgenus within Lagurus Gloger, 1841 [62,110-112]. The close affinity between Lemmiscus and the Palearctic Lagurini was then seriously criticized from the paleontological perspective. The morphological similarities between the two groups were interpreted as a result of parallel evolution in open, steppe-like landscapes, and Lemmiscus was proposed to be close to the tribe Arvicolini, particularly the genus Microtus Schrank, 1798 [113]. These data corroborated the previous grouping of Microtus and Lemmiscus in a phylogenetic reconstruction based on LINE-1 restriction fragments [114], yet the taxonomic sampling of that study did not include Lagurini and most genera of the Arvicolini tribe.
In a recent reconstruction using mitochondrial CYTB and a single nuclear gene, Lemmiscus clustered with Arvicola amphibius, yet with no support [21]. Using mitochondrial genomes to reconstruct the Arvicolinae phylogeny, we show, strikingly, that Lemmiscus appears to be sister to the snow voles, genus Chionomys. This clustering was obtained in all variants of the analysis, and the node support values were significant. The snow voles comprise three species occurring only in the Old World, particularly the mountain systems of Southwestern, Central and Southeastern Europe and Southwestern Asia. Snow voles inhabit rocky patches of the subalpine and alpine belts from 500 up to 3500 m above sea level [62,115]. Reliable pre-Pleistocene fossil remains of Chionomys are unknown, and the origin of the genus was previously attributed to the mid-Pleistocene [116]. Our data strongly contradict this conventional view, and both Lemmiscus and Chionomys are probably more ancient taxa. Moreover, Lemmiscus and Chionomys occur on different continents and occupy contrasting ecological niches; they are also very dissimilar morphologically. These findings, broadly discussed below, are important for understanding the migration events of Arvicolinae from Eurasia to North America.

Phylogenetic relationships within the tribe Arvicolini sensu stricto

Using the mitochondrial genome data, we obtained good support for the nodes within the tribe Arvicolini, except for Arvicola amphibius, which clustered with Lagurini (Fig 2). This evident artefact may be related to nucleotide composition bias (see Fig 1, S6 Table). Note that with the third position masked as RY, this clustering has weak or no support. On the other hand, the evidently wrong position of A. amphibius can be a consequence of the long-branch attraction effect, and further studies should consider including sequences of, e.g., the southern water vole, A. sapidus Miller, 1908, and nuclear genome data to better resolve the phylogenetic position of the genus. The other nodes within Arvicolini, hereafter called Arvicolini sensu stricto, marking the genus- and subgenus-level taxa, were recovered as monophyletic and clearly resolved. The phylogenetic pattern indicates two major migration waves of voles to the Nearctic. The earliest derivative from the MRCA is the branch leading to the Chionomys-Lemmiscus node, which gives a clear indication of the first dispersal of the common ancestors of the group from the Palearctic to the Nearctic. The only recent descendant of this lineage in North America is Lemmiscus. The next split of the ancestral lineage evidently took place in Asia and is represented by the poorly diversified genus Proedromys and a highly diversified cluster uniting all the remaining recent vole genera. The latter cluster splits further into a highly supported clade of Asian voles, showing sister relationships of the genus Neodon Horsfield, 1841 with the genera Alexandromys Ognev, 1914 and Lasiopodomys Lataste, 1887, and a cluster uniting two sister clades. The first of these comprises the Nearctic voles, whose subsequent fast radiation resulted in nearly 20 recent species (here named Mynomes Rafinesque, 1817 after the earliest name of the generic group level). The second splits further into a Western Palearctic clade (Microtus s.str., Terricola Fatio, 1867 and Sumeriomys Argyropulo, 1933) and one containing taxa distributed in Central Asia (Blanfordimys Argyropulo, 1933), westernmost Europe (Iberomys Chaline, 1972) and the wide-ranging Agricola Blasius, 1857 (from Western Europe to Siberia).
It is important to note that trees uniting the Nearctic "Microtus" species in one cluster were obtained in various studies [12,18,19,21], but here for the first time this cluster receives robust support, justifying genus-level status under the name Mynomes. Another significant finding is the clearer assignment of Iberomys cabrerae and Agricola agrestis, both species conventionally assigned to Microtus [62] but whose position on the molecular trees within the Arvicolini tribe was always uncertain. The tendency for clusterization of A. agrestis and Blanfordimys, though without support, was shown earlier [9,12,18,20,21]. In the paper where both I. cabrerae and A. agrestis were analysed in a comprehensive CYTB dataset [18], these species appeared in different clusters: A. agrestis with Blanfordimys, while I. cabrerae fell within the cluster of Nearctic voles; later, however, a detailed study of Asian voles [10] came up with a clustering of I. cabrerae and A. agrestis with Blanfordimys analogous to that reported here. An important contribution was recently made by Barbosa et al. [34], who used a genomic approach to resolve the phylogeny of the speciose Microtus voles. According to their results, both species appear to be monophyletic; however, that study was based on only eight species and lacked most of the genera of the group. According to our results, the cluster showing the close relationship of these species with Blanfordimys is highly supported.

Divergence time estimates for the major Arvicolinae lineages in the context of the fossil record

Dating the origin of Arvicolinae. Our data estimated the time of radiation from the MRCA of all Arvicolinae as ca. 7.3 Ma (Table 1), i.e. the Late Miocene, and the divergence time of Cricetinae and Arvicolinae from their common ancestors as around 11 Ma. These dating estimates correspond with the fossil record [117] and with molecular dating obtained by previous studies [21]. Between 11.1 and 7.75 Ma (from the Early Vallesian to the Late Turolian), many taxa conventionally referred to as microtoid cricetids appeared in Eurasia. These forms were characterised by an arvicoline-like prismatic dental pattern with variously pronounced hypsodonty [117-120] and are generally considered the ancestors of arvicolids [62,117-119,121]. There is no single opinion concerning the earliest known form assigned to the true Arvicolinae. Several authors regard Pannonicola (= Ishymomys) sp. as the first fossil representative of Arvicolinae, dated to ca. 7.3 Ma, from the Middle Turolian of Hungary [122] and Asia [123]. Fejfar and coauthors [117] considered Pannonicola the starting point of arvicolid evolution, though this is far from the generally accepted point of view. According to other, more widespread opinions, the first fossil Arvicolinae is Promimomys Kretzoi, 1955, known from the Miocene-Pliocene boundary, dated to around 5.3 Ma [14,124,125]. When calibrating the offset of Arvicolinae by Pannonicola at 7 Ma, we obtained a mean divergence date for the clade Arvicolinae of 7.36 Ma, which is unrealistically close to the FAD of Pannonicola. By excluding this calibration point from the analysis, we obtained an estimate ca. 1 Ma younger for the Arvicolinae node (S4 Table). However, during the consistent removal of one calibration point at a time, the estimate of the Arvicolinae origin remained stable at 7.36-7.43 Ma.

Ancient radiation of Arvicolinae and the first migration event from the Palearctic to the Nearctic.
It is generally agreed that Arvicolinae originated from arvicolid-like cricetids that first appeared in Eurasia and not in North America [13,14,124]. The chronology of immigration events supported by fossil evidence was considered in detail earlier [14,126,127, and references therein]. Below we examine how the divergence dating obtained in this study is consistent with the fossil evidence. The divergence between Dicrostonychini and Ondatrini took place ca. 6 Ma according to our data, indicating that the ancestors of this group were very closely related to the first arvicolines. Fejfar et al. [117] pointed out that the molar pattern of Pannonicola Kretzoi, 1965 (in the authors' opinion, the oldest known fossil arvicoline) shows similarity to Dolomys Nehring, 1898, the putative ancestor of Ondatrini and possibly Dicrostonychini, indicating their closer relationship. Our data corroborate this grouping. The conventional idea of Promimomys as the first true arvicoline taxon indicates a first immigration timing of around 5.5 Ma (Martin, 2010, 2015), as there is no record of this taxon in older sediments. From the mitochondrial genome data, the date for the MRCA of Lemmini was estimated as 4.81-4.37 Ma. This molecular dating makes the ancestors of Lemmini almost a million years older than the fossil remains reliably attributed to Lemmini in Europe [128,129] and Asia [130], dated to the Early Villanyian (Mammal Neogene zone MN16, 3.2 Ma), while the North American fossils of Lemmini were dated at ca. 3.9 Ma, or Late Blancan, according to Ruez & Gensler [131]. However, these lemming fossils are characterized by very advanced unrooted teeth and masticatory patterns, close to the recent forms of lemmings. Among the potential ancestors, Tobenia kretzoi Fejfar & Repenning, 1998, a species with rooted molars known from the early Pliocene of Wölfersheim, Germany [132], can be mentioned. This record refers to MN15, i.e. ca. 4 Ma, close to our molecular dating. The molecular estimate for the two major radiation waves of Arvicolinae, leading to the tribes Clethrionomyini, Arvicolini, Ellobiusini and Lagurini, dates back to 6 Ma, the Late Miocene, MN13 (Late Turolian). These findings correspond to the paleontological data and confirm the estimates received previously in a study based on nuclear genes [7]. Slightly later, first in North America [14] and then in Europe [117,125] and Western Asia [133], the most plesiomorphic arvicoline genus, Promimomys Kretzoi, 1955, appeared. This form is considered ancestral to the numerous species that emerged in the Early Pliocene and are conventionally assigned to the highly mixed and species-rich genus Mimomys Forsyth-Major, 1902. According to the generally accepted view, different forms of this complex "Mimomys" group represented the starting point for all subsequent lineages of arvicolines. The concept of common ancestry for these forms within this geological period does not contradict the data obtained in the present study or the hypothesis proposed by paleontologists [117]. The radiation of the common ancestors of all Clethrionomyini species started later, in the Late Ruscinian (MN15), around 4 Ma.

The origin of the tribe Arvicolini sensu stricto: Second trans-Beringian dispersal. The molecular estimate for the MRCA of node H, Arvicolini s.str. (Fig 2), is ca. 5 Ma (4.33-5.47). Considering that the most primitive forms of the genus Mimomys are among the MRCA candidates for all the main genera within the tribe Arvicolini s.str., the obtained time estimate, i.e.
the very beginning of the Pliocene, may be considered consistent with the fossil record. However, there is a certain probability that the age of this node was overestimated during the divergence dating analysis. One of the earliest records of Mimomys in North America was dated to 4.75 Ma [134], while fossil remains from Asia are slightly older [135]. This is the time of the second dispersal of arvicolids from Asia to the Nearctic. The only recent descendant of these immigrants in North America is Lemmiscus curtatus. According to our data, the starting point of the evolutionary history of this lineage is around 4 Ma. The earliest remains assigned to the genus are known from the end of the Early Pleistocene from the SAM Cave in New Mexico [136], from sediments that, according to paleomagnetic and faunistic data, may be dated to 1.8 Ma. Repenning [136] derived Lemmiscus from the plesiomorphic Allophaiomys Kormos, 1932. Remains assigned to the latter taxon are widely distributed among Early Pleistocene sites dated between 2.2 and 1.6 Ma in both the Palearctic and the Nearctic. Yet Allophaiomys is a rather lumped taxon [137], presumably accepted as ancestral to most Microtus species and associated genera. Tesakov and Kolfschoten [113] suggested the hypothesis of a Mimomys-Lemmiscus phyletic lineage. However, their hypothesis also presumes that an ancestral Mimomys (Cromeromys Zazhigin, 1980), a form having rooted molars and inhabiting vast areas from Western Europe to Beringia, dispersed southwards across North America in the late Early Pleistocene and evolved there into the rootless Lemmiscus. Thus, our dating conflicts with both views and supports the idea of dispersal and further evolution from the "Mimomys" stage [113,136] in the middle of the Pliocene, ca. 4 Ma. The fossil remains of Chionomys are known only from Pleistocene sediments [116]. According to our mtDNA-based dating, the diversification of the ancestral lineage may have started in the Western Palearctic as early as the Middle Pliocene, i.e. either the late Zanclean or the early Piacenzian.

Diversification within Arvicolini sensu stricto: Late Pliocene exchange between the Palearctic and Nearctic faunas. The other genera within Arvicolini were monophyletic according to the Bayesian and ML analyses, with high node support. According to the conventional view, this group originated from Allophaiomys [62,136], a highly complex taxon common in the Early Pleistocene (ca. 2 Ma) faunas of the Nearctic and Palearctic. Our results on divergence dating raise another hypothesis, placing the starting point for the group at the level of the "Mimomys" stage, i.e. in the Pliocene. The genus Proedromys is the first derivative from this common stem, most likely in the middle of the Pliocene (approx. 4 Ma). The standalone position of this genus among the other genera of Arvicolini that plausibly derived from Allophaiomys has been underlined earlier by Gromov and Polyakov [62] and Repenning [136]. A further split within Arvicolini took place in the Late Pliocene and resulted in the entirely Asian lineage currently represented by the genera Neodon, Alexandromys and Lasiopodomys. The other, sister lineage emerged in the Early Pliocene, around 3.8-4 Ma, also from the pre-Allophaiomys stage, and diverged into two branches. Ancestors of the first branch (Mynomes) penetrated the Nearctic during the third Nearctic immigration event, where they diversified into 20 species. Descendants of the other, Palearctic branch further produced two lineages.
Among them, the first apparently originated in Central Asia and dispersed westwards during the Late Pliocene. Iberomys cabrerae, inhabiting the Iberian Peninsula and the foothills of the Pyrenees, is a relict descendant of this line [138, and references therein]. Our mitogenomic data support the hypothesis of the long independent evolution of the Iberomys cabrerae lineage, previously corroborated by several unique morphological, biological and ecological traits [138]. The first fossil remains of Iberomys were found in Early Pleistocene sediments in Spain [139], predating the Jaramillo reversal event (approx. 1.2 Ma). According to the scenario set out by Cuenca-Bescós et al. [138], the genus Iberomys is a basal sister group of Arvicolini that evolved since its origin in the Early Pleistocene in the western Mediterranean region. The most probable origin and vicariant speciation, according to these authors, was linked to the stock of Allophaiomys taxa with plesiomorphic character states. The results reported here are, on the whole, in good agreement with this scenario, although they indicate an earlier time of origin and speciation from a stock predating the Allophaiomys stage and going back to the Mimomys stage of the Late Pliocene. The other descendants of the same stock in the recent fauna are represented by Agricola and Blanfordimys. The range of Agricola covers the whole of Europe and stretches east to Lake Baikal and the watershed between the Yenisey and Lena Rivers [62,63]. Recent studies showed that Agricola is represented by three highly divergent lineages, possibly species-level taxa [140]. The three species of Blanfordimys occur in the high mountain forests and steppes of Central Asia and are characterized by a very primitive molar pattern, similar to Allophaiomys. The idea that Agricola and Iberomys represent relicts of a very early colonization of Western Europe by Arvicolini was suggested earlier by Martinkova and Moravec [19] and agrees well with the data presented here. The second lineage of the Palearctic branch evolved in the Western Palearctic and is represented in the modern fauna by the species-rich genus Microtus (with the subgenera Microtus s.str. and Sumeriomys) and Terricola (around 14 species, mainly found in Southern and Southwestern Europe). The divergence between these lineages corresponds to the Late Pliocene; however, the speciation events coincided with the Early Pleistocene for the genus Terricola and the early Middle Pleistocene for the subgenera Sumeriomys and Microtus. The latter dating matches the known fossil record [109,141]. Summing up the comparison between the molecular estimates of divergence times reported here and the known paleontological data, it is curious to note that while the dates for the MRCAs of most genera within Arvicolini s.str. are significantly older than previously supposed [109,117,142], the dating of speciation events within the genera (Lasiopodomys, Alexandromys, Terricola) is consistent with the fossil record [109,134,136,141-143, and others]. Although our results are consistent both with the fossil record and with previous molecular dating, it is worth mentioning that the lack of calibration points for terminal nodes in our analysis may be a source of underestimation of the basal node ages, while the ages of several of the youngest nodes in Arvicolini could be overestimated.
Systematic remarks

While the systematic relationships of the higher taxa within Arvicolinae undoubtedly require further studies involving genomic approaches, some amendments to the current taxonomic system can be made already at this step of the research. Our study provides significant input for a potential review of the taxonomic structure and composition of the tribe Arvicolini (S2 File). Our data show that the position of the genus Arvicola is still unresolved. In contrast, the genera Lemmiscus and Hyperacrius certainly should be considered members of the tribe Arvicolini. The further grouping of species into genera and subgenera within this highly diverse tribe has always been very subjective and debatable. Most arguable was the composition of the genus Microtus. The current system [62], in which Blanfordimys, Neodon and Lasiopodomys have generic status while Alexandromys, Stenocranius and Terricola are treated as subgenera within the genus Microtus, is strongly outdated and contradicts the data of recent phylogenetic studies. The last checklist [2] partly modified this scheme and, following Abramson and Lissovsky [63], elevated Alexandromys to full genus, treated Stenocranius as a subgenus of Lasiopodomys, and recognized Neodon as a genus. However, despite the accumulated evidence from several previous papers [9,19,144], in this reference book the status of Blanfordimys was downgraded without substantiation [145], while three well-differentiated lineages (Blanfordimys, all Nearctic microtines, Terricola, Microtus and Sumeriomys) were illogically united in one genus, Microtus. These well-differentiated lineages together form the sister branch to one with a similar branching pattern in which three genera are recognized: Alexandromys, Lasiopodomys and Neodon. It is widely known that the better the phylogenetic resolution of any species-rich group, the harder it is to fit into the conventional hierarchical categories of the Linnean system. Trying, on the one hand, to keep nomenclature as stable as possible by retaining the commonly used names that correspond to certain lineages and, on the other, to reflect robust phylogenetic nodes in a formal classification, we suggest the system of generic group taxa within the tribe Arvicolini sensu stricto given in S2 File.

Conclusions

Our phylogenetic analysis based on complete mitochondrial genomes confirmed the monophyly of the subfamily and the monophyly of most tribes that originated during the three successive radiation events. While the order of divergence between the ancient genera belonging to the first radiation was not uniformly supported by the Bayesian and Maximum Likelihood analyses, our study reports high statistical node support for the groups of genera within the highly diverse tribe Arvicolini. The mitogenome phylogeny resolved several previously reported polytomies and also revealed unexpected relationships between taxa. The robust placement of Lemmiscus as sister to the snow voles, Chionomys, in the tribe Arvicolini, in contrast with the long-held belief of its affinity with Lagurini, is an essential novelty of our phylogenetic analyses. Our results resolve some of the ambiguous issues in the phylogeny of Arvicolinae, but some phylogenetic relationships require further genomic studies, e.g. the evaluation of the precise positions of Arvicola, Dinaromys and Hyperacrius.
Here, we provide evidence of the high informativeness of mitogenomic data for phylogenetic reconstruction and divergence time estimation within Arvicolinae, and suggest that mitogenomes can be highly informative when the numbers of extant and extinct forms are comparable (the case of Arvicolini) and insufficient when the extant forms represent single lineages of a once-rich taxon (most cases of early radiation in the subfamily). The accuracy and precision of previous divergence time estimates derived from multigene nDNA and nDNA-mtDNA datasets are here refined and improved. The estimates for the subfamily origin and early divergence are consistent with the fossil record; however, the mtDNA estimates for the putative ancestors of most genera within Arvicolini appeared to be much older than supposed from paleontological studies.

S7 Table. Test of substitution saturation. Analysis performed on all sites for the 1st&2nd and 3rd codon positions separately. Iss - index of substitution saturation; IssSym is Iss.c assuming a symmetrical topology; IssAsym is Iss.c assuming an asymmetrical topology; NumOTU - number of operational taxonomic units. Red color indicates P-value < 0.05. (XLSX)

S1 File. A molecular phylogeny for the subfamily Arvicolinae reconstructed using the complete PCG dataset and each of the 13 individual genes, with the corresponding saturation plots indicated. Major Arvicolinae tribes are indicated by color coding (Arvicolini - light blue, Lagurini - blue, Ellobiusini - purple, Clethrionomyini - magenta, Dicrostonychini - dark green, Ondatrini - light green, Prometheomyini - yellow, Lemmini - red, nomen nudum species - black). The Bayesian topology was used to plot the tree for the complete 13-PCG dataset. Node labels display the following supports: BI complete / BI RY-coded 3rd codon position / ML complete / ML RY-coded 3rd codon position. Black circles show nodes with 0.95-1.0 BI and 95-100 ML support. For each of the PCGs, the maximum likelihood topology is given; node labels display ultrafast ML bootstrap above 50%. Saturation plots are shown in the side insets, where colors mark the following partitions: 1st codon position transitions (ts) - brown, 1st transversions (tv) - red, 2nd ts - blue, 2nd tv - green, 3rd ts - pink, 3rd tv - black. (PDF)

S2 File. The proposed system of generic group taxa within the tribe Arvicolini sensu stricto. (DOCX)
Expression of Sport Experiences between Cadet and Junior Basketball Players

Background. The hypothesis of this study is formulated as follows: the experience of junior basketball players in competitions will be more valuable than that of cadet basketball players. The aim of our study was to examine the peculiarities of cadet and junior basketball players' sport experiences. Methods. A total of 104 basketball players, 47 cadets and 57 juniors, participated in the study. A survey questionnaire was used for the study. The following methodologies were used: the Athlete's Personal Experience Survey (Athletic Coping Skills Inventory, ACSI-28) and the Sport Experiences Questionnaire (SEQ). Results. The results of the study revealed statistically significant differences (p < .05) in personal sport experiences (athletic coping skills) among basketball players of different age groups according to the following indicators: the coach's influence on basketball players, concentration, athletes' self-confidence and resilience. The study of athletes' competitive experiences revealed statistically significant differences (p < .05) between cadet and junior basketball players in competition experience. No statistically significant differences were found in terms of the risk and progress parameters. Conclusions. The coach's influence was greater for the cadet basketball players. Concentration, self-confidence and resilience were better among junior basketball players. This shows that, when competing, junior basketball players have higher levels of concentration compared to the cadet group, as they are more confident and can better cope with tension. In addition, it was found that the experience of junior basketball players in competitions is richer than that of cadet basketball players.

INTRODUCTION

"Experience is a system of personally significant meanings that captures knowledge, abilities, skills and values" (Jackūnas, 2008, p. 22). The knowledge of recurring phenomena, their frequency, characteristics, connections and regularities is well reflected in an athlete's sport experience (Jackūnas, 2008). Engagement in sports activities in childhood and adolescence is associated with positive physical, cognitive, and psychosocial development, as well as with early experiences in sport activities (Brown, 2006). This situation is worrisome. Over the past three decades, many scholars have stressed the impact that enjoyment has on understanding people's dedication to participating in sports and how it helps with developing sport initiatives. According to Self-Determination Theory (Deci & Ryan, 2000), participation in sport is driven by personal interest, and enjoyment is the most beneficial factor for participants' well-being and long-term motivation in sports and physical activity (Hagger & Chatzisarantis, 2008; Owen, Smith, Lubans, Ng & Lonsdale, 2014; Ryan & Deci, 2007; Texeira, Carraca, Markland, Moussa et al., 2012). Conversely, it has also been stated that engagement in sports for external rewards brings on more negative outcomes. Public health researchers have recently argued that health promotion initiatives could be effective in improving and developing empirical and theoretical knowledge on enjoyable experiences in the context of physical activity (Jallinoja, Pajari & Absetz, 2010; Phoenix & Orr, 2014; Ekkekakis, 2017).
In order to understand the motivation behind participation in sport, it is necessary to find out the resulting experiences and to answer the question of why people take part in the first place. However, for a long time, questionnaires assessing the most valued sport experiences were non-existent (Luiggi, Maïano & Griffet, 2019). In research on incentives in sport, there are a number of surveys that question the motivation for sport, such as the Participation Motivation Questionnaire (Gill, Gross & Huddleston, 1983), the Behavioral Regulation Questionnaire (Lonsdale, Hodge & Rose, 2008), the Revised Sport Motivation Scale (Pelletier, Rocchi, Vallerand, Deci & Ryan, 2013) and the Physical Activity and Leisure Motivation Scale (Molanorouzi, Khoo & Morris, 2015). However, factual knowledge of what constitutes an enjoyable experience in sport has been gathered largely in qualitative and experimental studies. In qualitative research, some of the previous findings have shown that the competitive environment, stress, and competition pressure are keys to understanding participation in sport (Belanger, 2011; Craike, Symons & Zimmerman, 2009; Uijtdewilligen et al., 2011). However, these results were contradicted by other studies revealing that sporting experience was sometimes an important reason for non-participation (refusal to exercise) (Allender, Cowburn & Foster, 2006; Brooks & Magnusson, 2007; Coleman, Cox & Roker, 2007; Craike et al., 2009; Cote & Vierimaa, 2014; Knowles, Niven & Fawkner, 2011; Yungblut, Schinke & McGannon, 2012). There have also been findings demonstrating the importance of the social surroundings of the participants. For instance, finding oneself in a hostile environment during play has been cited as a reason for dissatisfaction and withdrawal from sports (Yungblut et al., 2012; Belanger et al., 2011). In experimental studies, special attention was paid to sports programs in which the intensity of exercise determines future participation in sports activities. Previous findings have shown that people did not experience any pleasure beyond the intensity threshold and thus did not repeat the sport experience (Ekkekakis, Parfitt & Petruzello, 2011). In sports, great attention is paid to the confrontation with risks and dangers (injuries, painful defeats). Following this logic, risk culture in sport has been extensively studied, and many authors have stated that sport culture increases risk-taking behavior and presents it as normality that contributes to progress and victory (Nixon, 1992; Saragiotto et al., 2014; Schnell, Mayer, Diehl, Zipfel & Thiel, 2014). Nixon (1992) has shown that athletes' rivalry, risk-taking, suffering, pain, and trauma are part of the sport, because exercise can cause suffering and this should be considered normal. As Howe (2004) noted, "the initial worries about risk in sport begin when people are forced to risk it all and win at any cost". Lately, the possibility of injury has become the main risk (Saragiotto et al., 2014; Schnell et al., 2014). Recent research has shown that past experiences are related to the likelihood and frequency of behavior recurrence (Kiviniemi, Voss-Humke & Seifert, 2007; Van Cappellen, Rice, Catalino & Fredrickson, 2017; Wang, 2011; Wirtz, Kruger, Scollon & Diener, 2003). Hence, it was believed that a questionnaire of recalled past experiences could help determine the experiences teens would look for in their chosen sport.
Previous questionnaires were derived from motivation theories and focused on why people participate in sport (Gill et al., 1983; Lonsdale et al., 2008; Pelletier et al., 2013; Molanorouzi et al., 2015). Thus, the participants were asked to share their reasons for participation. The Sport Experiences Questionnaire (SEQ, Luiggi et al., 2019) asked participants to report their enjoyment and experience in specific situations. The responses are expected to help understand the reasons behind people's participation. Previous findings have demonstrated how experienced pleasure directly affects the ability to understand people's future behaviors. Thus, adolescents who strongly agreed that they value past risk experiences (for example) are likely to seek such experiences in the future through participation in sports (Luiggi et al., 2019). The results of the SEQ show that the experience, progress, and risk of competition are perceived differently by adolescent athletes (Luiggi et al., 2019). This implies that the knowledge of different personal experiences could be utilized by sports organisers and coaches to stimulate the enjoyment of participating in sport. Moreover, when analyzing said experience, it is necessary to keep the coach's characteristics in mind. For example, in previous studies that analyzed the experiences of elite athletes (Becker, 2009), the participants described their coach as one having positive qualities. One special feature that all players discussed was their coach's sense of humor: "He's just a funny person. He could make one laugh for days". Additionally, the players also described their coach as knowledgeable, passionate and energetic: "He knows what he's talking about", "He eats, sleeps and lives basketball" (Becker, 2012, p. 49). The players' understanding of the coach's philosophy, system and style of play was also influenced by his beliefs about basketball and the way he was coached. The players described how "He came up with a set system and he didn't think about it twice", "He said, 'I've been doing this my whole career and I'll continue it'. It was a success, and that was really impressive" (Becker, 2012, p. 50). It appeared that the coach was very successful in changing the players' perception of the game. These findings highlight the importance of a strong coaching philosophy and of the desire to remain true to that philosophy. Until now, most research on coaching effectiveness has focused on the study of coaching behavior, despite the important role of coaching philosophy in players' athletic experiences (Becker & Wrisberg, 2008; Smith & Cushion, 2006). It should be noted that the analysis of experiences revealed a dimension of coaching style in which players described the coach as "more of a players' coach". This terminology is more commonly used within the sports community; "players' coach" represents player-centered leadership that has not been systematically studied in the leadership literature, because research has primarily focused on democratic and autocratic styles (Gilbert & Trudel, 2004). Under this line of research, coaches who adopt a democratic style allow their athletes to set team goals, work at their own pace, express their opinions, and share in the decision making (Chelladurai & Saleh, 1980). A "players' coach" represents a style that can involve both autocratic and democratic behaviors.
This style is characterized by being player-oriented, meaning that the coach's behavior depends on what is best for the players or the team at a given moment. The key feature of this coaching style is the players (Becker, 2012). Many recent studies have looked into ensuring fair behavior in teams, and research shows that many coaches allow more playing time for the players they consider to be more talented (Solomon, Striegel, Eliot, Heon, Maas, & Wayda, 1996; Solomon et al., 1996). This can often result in negative experiences for the players. Negative experiences related to coaches often result in players trying to avoid player-coach contact: "I hated having to analyze the games, I just did not want to be near [the coach]. I haven't even gone to take additional shots". While some coaches may not be fully aware of the negative experience (De Marco, Mancini & West, 1997; Krane, Eklund, & McDermott, 1991), others may ignore it. It has been pointed out that both positive and negative coach behaviors can affect players' personal and competitive experiences (e.g., Kenow & Williams, 1999). In conclusion, the topic of the peculiarities of the sport experiences of cadet and junior basketball players is relevant because the parameters of both personal sport and competitive experiences, such as concentration, overcoming failure, and resilience, are among the key factors determining the success of sporting activities. In addition, studies do support the importance of said factors (Fraser-Thomas & Côté, 2009; Gencer & Öztürk, 2018). According to research data (Fraser-Thomas & Côté, 2009), all the best athletes state that overcoming failures, resistance to pressure, and resilience are the keys to success, so the relevance of the study of the peculiarities of the sport experiences of both cadet and junior basketball players is undeniable. There is a lack of research in the literature on the sporting experiences of adolescent basketball players. A problem remains to be solved: what is the difference between cadet and junior players' sport experiences? The formulation of the problem allowed us to generate the hypothesis of this research: the experience of junior basketball players in competitions will be richer than that of cadet basketball players. The hypothesis is formulated on the basis of the research data of Malinauskas and Zablockis (2020). The aim of the study is to determine the peculiarities of cadet and junior players' sport experiences. Research tasks: 1. To study and compare the personal sport experiences (athletic coping skills) of cadet and junior basketball players. 2. To determine the experience of cadet and junior basketball players in competitions. Research participants. A targeted selection procedure was used. A total of 104 basketball players participated in this research, 47 cadets and 57 junior players from Kaunas, Tauragė, Raseiniai, Šiauliai and Šilutė. Participants were asked to fill in the questionnaire prior to training. The questionnaire was anonymous and confidential; only generic information was used. This research was permitted by the Social Research Ethical Committee of the University, 2020 02 10, No. SMTEK-9. Measures. The research was completed using a questionnaire. Two methods were used. The first method was the Description of Sportsmen's Personal Experience (Athletic Coping Skills Inventory, ACSI-28; Smith, 1995). The questionnaire consisted of 28 statements, each scored from 1 - never, 2 - seldom, 3 - often, to 4 - almost always.
The ACSI-28 questionnaire measured goal setting, the coach's influence, concentration, overcoming failures, self-esteem, and performance under pressure: all these aspects are reflected in sport experience. Second method: Sport Experiences Questionnaire (SEQ) (Luiggi et al., 2019). The questionnaire included 14 statements, e.g., "Do I do things I have never done", "Do I risk even if I can lose it all", "Am I amongst the best". Participants had to mark their answers from 1 (totally disagree) to 7 (absolutely agree).

Statistical analysis. SPSS for Windows version 21.0 software was used to calculate the results of this survey. Means (M) and standard deviations (SD) of the indicators were calculated. To determine the reliability of the mean difference between the age groups, Student's t-test for independent samples was applied.

RESULTS

Based on the ACSI-28 methodology, basketball players' personal sport experience can be subdivided into 7 categories: overcoming failure, coach's influence, concentration, self-esteem, goal setting, pressure resilience and resistance to anxiety. The analysis of the results is presented below. For the parameters of overcoming failures, goal setting, and resistance to anxiety, no statistically significant differences were found (Table 1). In concentration, self-confidence, and resilience to pressure, junior basketball players performed better than cadet basketball players. The research showed that the role of the coach, according to the averages, is significantly more important for cadets (2.33 ± 0.55 points) than for the junior age group (2.01 ± 0.56 points). A statistically significant difference, t(102) = 2.89, was observed between the two age groups; p < 0.01 (p = 0.005). Thus, it can be said that the coach's influence on cadet basketball players is greater than on junior players. The analysis of the concentration results revealed that the average concentration of cadet players was 1.80 ± 0.48 points, while that of juniors was higher, at 2.01 ± 0.54 points. Statistically significant differences were found according to Student's t-test, because t(102) = -2.03; p < 0.05 (p = 0.045), so it can be stated that the concentration levels of junior basketball players during competition are higher than those of cadet basketball players, and they are able to concentrate better. The analysis of the self-confidence results showed that the average self-confidence of cadet basketball players was 2.10 ± 0.51 points, and that of junior basketball players 2.31 ± 0.38 points. Using Student's t-test for independent samples, statistically significant differences were found between the groups, t(102) = -2.45; p < 0.05 (p = 0.016). The data of the pressure-resilience study showed that the resilience to pressure of cadet basketball players corresponded to 1.57 ± 0.61 points, and that of junior basketball players to 1.81 ± 0.49 points. Using Student's t-test, statistically significant differences between the groups were revealed, t(102) = -2.20; p < 0.05 (p = 0.030). Thus, it can be concluded that junior basketball players cope much better with the tension of competition than cadet basketball players. The results of the Sport Experiences Questionnaire (SEQ), following the research methodology, are presented for three subscales: risk indicators, competing indicators, and progress indicators.
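For reference, the statistic reported throughout this section can be reproduced with a few lines of Python. This is a minimal sketch of the independent-samples Student's t-test with df = 47 + 57 - 2 = 102; the synthetic arrays below only mimic the reported group sizes and scale, they are not the study data:

```python
import numpy as np
from scipy import stats

# Synthetic subscale scores (1-4 scale) for illustration only;
# the means/SDs roughly echo the concentration subscale reported above.
rng = np.random.default_rng(0)
cadets = rng.normal(1.80, 0.48, size=47)   # hypothetical cadet scores
juniors = rng.normal(2.01, 0.54, size=57)  # hypothetical junior scores

# Independent-samples Student's t-test (equal variances assumed),
# giving df = n1 + n2 - 2 = 102 as in the t(102) values reported above.
t, p = stats.ttest_ind(cadets, juniors, equal_var=True)
print(f"M1 = {cadets.mean():.2f} (SD {cadets.std(ddof=1):.2f}), "
      f"M2 = {juniors.mean():.2f} (SD {juniors.std(ddof=1):.2f})")
print(f"t(102) = {t:.2f}, p = {p:.3f}")
```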
The risk analysis showed that the average risk score for cadet players was 4.93 ± 1.04 points, and for junior basketball players 5.03 ± 1.01 points. There were no statistically significant differences between the age groups, because t(102) = 0.47; p > 0.05 (Figure 1). It was revealed that the competition experience of junior basketball players is greater than that of cadet basketball players, because the average score of junior basketball players is 5.13 ± 0.85 points and that of cadet basketball players 4.71 ± 0.74 points, and this difference is statistically significant: using Student's t-test, t(102) = -2.67; p < 0.01 (Figure 2). The analysis found that the average progress score of cadet basketball players was 5.44 ± 0.78 points, and that of junior basketball players 5.58 ± 0.74 points. No statistically significant differences were found between the two groups, t(102) = -0.95; p > 0.05 (Figure 3).

DISCUSSION

The main aim of this research was to identify the peculiarities of cadet and junior basketball players' sport experiences, with the hypothesis that junior players' sport experience would be richer compared to that of cadet players. This hypothesis was based on the research results of Malinauskas and Zablockis (2020). The hypothesis was confirmed. The analysis of the questionnaire results showed that coaches have a much bigger influence on cadets than on juniors. Becker (2012) states that the role of the coach is one of the most important factors in the success of an athlete. The coach is usually the person responsible for the team's optimism or pessimism. Coaches who positively engage their team are usually the ones who win. The surveyed basketball players showed significant differences in concentration, self-esteem and pressure resilience. Junior basketball players showed better concentration, self-esteem and pressure resilience compared to cadet players. The conclusion can be drawn that junior basketball players are considerably more self-confident, more focused during the game and more resilient to pressure. The analysis of competitive experience according to the risk, competition and progress factors revealed statistically significant differences between the groups: junior basketball players' competition experience is greater than that of cadet basketball players, because this indicator is higher for junior basketball players. For the risk and progress experiences, no statistically significant differences were found. According to Luiggi et al. (2019), adolescent athletes perceive competition experience, progress, and risk differently. This means that each of these experiences could be used by sports organizers or coaches to increase the enjoyment of participation in sport. Sport covers aspects like sharing, pressure, stress, and feelings of unfairness, due to which people have different views of these experiences (Stalsberg & Pedersen, 2010; Luiggi, Travert & Griffet, 2018). The findings of Belanger et al. (2011) showed that the competitive environment, stress and pressure to compete are some of the experiences to take into consideration in order to understand why youth want to participate in sports and compete. Other research identified these factors as among the key factors causing people not to participate in sports (Yungblut et al., 2012). Hagger and Chatzisarantis (2008) and Owen et al. (2014) found that people get involved in sports out of curiosity, while pleasant experiences become the long-term motivation to do sports.
To sum up, one could say that sportsmen's experiences in any sport are an essential part of their journey to success. Several authors (Nixon, 1992, 1993, 1996; Saragiotto et al., 2014) also emphasize the importance of experience in sport from the start to the end of a sporting career. Good, positive experiences can become the main drive to reach the top, while bad experiences can destroy sportsmen both physically and mentally and at times can be the reason to quit. Schnell, Mayer, Diehl, Zipfel & Thiel (2014) state that competitiveness, risk taking, physical pain and injuries are part of sport. For sportsmen to have good experiences that serve them well, the role of the coach is very important, according to Horn (2008) and Becker (2012). According to Clifford & Randolph (2020), coaches who positively engage with their team are more effective and victorious, as they understand the importance of athlete excellence.

CONCLUSIONS AND PERSPECTIVES

The results of the study of personal sports experiences (athletic coping skills) revealed statistically significant differences between basketball players of different age groups according to the following indicators: the coach's influence on basketball players, concentration, athletes' self-confidence and resilience to pressure. For cadet players the coach's influence was greater. Concentration, self-confidence and pressure resilience were better among junior basketball players. This demonstrates that during competition junior basketball players have higher levels of concentration than the cadet group, as they are more confident and can better cope with tension. In addition, it was found that the competition experience of junior basketball players is greater than that of cadet basketball players. No statistically significant differences were found for the risk and progress parameters. When discussing the prospects for further research, it may be interesting to find out what the most enjoyable experience for teens in each sport is. From a psychological standpoint, it would also be interesting to know whether the reported pleasure depends on social status or gender (these would be counted as factors related to participation in sport). For example, it is well known that girls with low social status are the least likely to play sports (Stalsberg & Pedersen, 2010; Luiggi et al., 2018). Better knowledge of the experiences that adolescents want could help create appropriate programs that meet their expectations and encourage participation in sports activities.
Minimizing couplings in renormalization by preserving short-range mutual information

The connections between renormalization in statistical mechanics and information theory are intuitively evident, but a satisfactory theoretical treatment remains elusive. Recently, Koch-Janusz and Ringel proposed selecting a real-space renormalization map for classical lattice systems by minimizing the loss of long-range mutual information [Nat. Phys. 14, 578 (2018)]. The success of this technique has been related in part to the minimization of long-range couplings in the renormalized Hamiltonian [Lenggenhager et al., Phys. Rev. X 10, 011037 (2020)]. We show that to minimize these couplings the renormalization map should, somewhat counterintuitively, instead be chosen to minimize the loss of short-range mutual information between a block and its boundary. Moreover, the previous minimization is a relaxation of this approach, which indicates that the aims of preserving long-range physics and eliminating short-range couplings are related in a nontrivial way.

Despite neither being able to experimentally probe nor theoretically precisely describe the microscopic details of the physical systems that surround us, via renormalization we are still able to make predictions and verify them to remarkable degrees of accuracy. A renormalization process progressively removes degrees of freedom from a physical system, mapping it to an effective system having the same physics at large scales [1,2]. One may regard the renormalization map as removing unimportant short-range information while leaving long-range information intact, and therefore possible connections to information theory have been explored in several different approaches [3-9]. One difficulty in the renormalization enterprise is finding an appropriate renormalization map. In real-space renormalization [10], for example, there is no unique way to remove degrees of freedom, and several maps can plausibly be used. Some work noticeably better than others [11], but there is no clear criterion for choosing the best map. Recently, Koch-Janusz and Ringel [12] proposed choosing real-space renormalization maps based on an information-theoretic criterion, as follows. Consider a spin model on a lattice $\Lambda$, and divide the lattice into non-overlapping blocks $A_j$. Let $R$ be a renormalization map on a single block, specifically a stochastic transformation on the random variables describing the spins in the block, and call its output on the $j$th block $A'_j$.
In the renormalization procedure $R$ is applied to each $A_j$, but here we need only focus on a single block $A$ with output $A' = R(A)$. In particular, dividing the lattice into the block in question $A$, its neighbors within some distance $B$, and the remainder of the spins $C$, as illustrated in Figure 1a, Koch-Janusz and Ringel propose choosing
$$R_{\rm KJR} = \mathop{\rm arg\,max}_R \, I(A' : C)_{R(P)}, \qquad (1)$$
where $P = \frac{1}{Z} e^{-\beta H}$ is the Gibbs distribution of the spin system and $I(A : C)_P$ is the mutual information of random variables $A$ and $C$ under the distribution $P$. Due to the data processing inequality, it follows that $I(A : C)_P \ge I(A' : C)_{R(P)}$, and hence $R_{\rm KJR}$ retains the most mutual information between the block and the long-range parts of the lattice. Koch-Janusz and Ringel argue that it therefore extracts the relevant degrees of freedom and that it results in a renormalized Hamiltonian with short-range couplings. They also propose a machine-learning algorithm to determine $R_{\rm KJR}$ on a parametrized subset of all possible maps. The resulting Real Space Mutual Information (RSMI) algorithm produces good results when benchmarked on various physical models. Lenggenhager et al. [13] further showed that $R_{\rm KJR}$ does not create any long-range couplings within $C$ when $I(A : C)_P = I(A' : C)_{R(P)}$. Their theoretical work was expanded to field theory [14] and their algorithm improved by using deep-learning techniques [15]. In this Letter we argue that, contrary to the above intuition, to minimize long-range couplings one should instead choose the renormalization map to retain short-range mutual information:
$$R_\star = \mathop{\rm arg\,max}_R \, I(A' : B)_{R(P)}. \qquad (2)$$
As we show in detail below, in fact no map $R$ can result in long-range couplings within $C$ or from $A$ to $C$, and $R_\star$ additionally minimizes coupling within the boundary $B$. This approach has several other advantages. For one, the optimization is considerably simpler, as it only involves the block in question and its boundary. Moreover, it is the case that $I(A' : B)_{R(P)} \ge I(A' : C)_{R(P)}$ for every map $R$, and hence the optimization in (1) is a relaxation of the optimization in (2). We emphasize here that these two optimizations are born out of two different motivations: (1) identifies the degrees of freedom that are most relevant to the long-range physics, while (2) aims to control the proliferation of couplings. It is not expected that these two motivations yield the same optimization problem, and the relaxation described above relates the two. Finally, the optimizer of (2) (as well as of (1)) is a deterministic map, which makes brute-force optimization feasible for small blocks by searching the entire map space directly on the probability distribution, rather than by using sampling techniques. We illustrate how the optimization can be performed for 2 × 2 maps using tensor network representations for the 2D Ising model.

FIG. 1(b). The random variables in the black region are conditionally independent of those in the white region given the gray region, as the gray region shields the former from the latter in the Markov network. The regions need not be connected.
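To make the quantities in (1) and (2) concrete, the following Python sketch computes $I(A:C)_P$ and $I(A':C)_{R(P)}$ for a coarse-graining map and exhibits the data-processing inequality. Everything here is a toy: the joint table is random, not an Ising Gibbs state, and the particular map $R$ is a hypothetical choice for illustration:

```python
import numpy as np

def mutual_information(joint):
    """I(X:Y) in nats for a 2D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / (px @ py)[mask])))

# Toy joint distribution P(a, c): a is a 2-spin block (4 states),
# c a distant spin.  Correlated, but NOT an actual Gibbs state.
rng = np.random.default_rng(1)
P = rng.random((4, 2)) ** 3
P /= P.sum()

# A deterministic coarse-graining R: block state a -> single spin a'
# (here: majority of the two block spins, ties broken towards +1).
R = np.array([0, 1, 1, 1])  # hypothetical choice, for illustration

P_coarse = np.zeros((2, 2))
for a in range(4):
    P_coarse[R[a]] += P[a]

print("I(A:C)  =", mutual_information(P))
print("I(A':C) =", mutual_information(P_coarse))  # <= I(A:C) by data processing
```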
Gibbs states as Markov networks.-To prove our claims we make use of the Hammersley-Clifford theorem of probability theory, which states that every Gibbs state of a local Hamiltonian is a Markov network. A Markov network is a (probability distribution on a) collection of random variables with conditional independence relations that are captured by an undirected graph. Consider a collection of random variables $V = (V_1, \ldots, V_n)$ associated to the vertices of a graph $G$ and having a joint probability distribution $P(V)$. Vertices $V_j$ and $V_k$ connected by an edge in $G$ correspond to dependent random variables, for which $I(V_j : V_k) \neq 0$. Given three regions of the graph $A$, $B$, and $C$, corresponding to disjoint collections of the random variables, $B$ is said to shield $A$ from $C$ if all paths connecting $A$ to $C$ pass through $B$. The regions themselves need not be connected, as depicted in Figure 1b. Then $(G, P)$ is a Markov network if every two regions shielded by a third are conditionally independent, i.e. $A$ and $C$ are independent given the value of $B$. Put yet differently, the correlations between $A$ and $C$ are mediated entirely by $B$. Conditional independence can be succinctly expressed using the conditional mutual information (CMI) as $I(A : C|B)_P = 0$, where
$$I(A : C|B)_P = I(A : BC)_P - I(A : B)_P.$$
The Hammersley-Clifford theorem [16,17] then states that $(G, P)$ is a Markov network if and only if $P(V) = e^{h(V)}$ for some local function $h$, meaning $h = \sum_{c \in \mathcal{C}} h_c$, where $\mathcal{C}$ is the set of cliques of the graph (the fully-connected subgraphs) and each $h_c$ is a function only of the variables involved in the clique $c$. The renormalization procedure begins with the Gibbs state of a local Hamiltonian $P \propto e^{H}$. Renormalizing a block $A$ with map $R$ results in a new probability $P' = R(P) = e^{h'}$, where we define $h' = \log P'$. Renormalizing all blocks results in some distribution $P''$, and the corresponding $h''$ is just the renormalized Hamiltonian, up to the inverse temperature $\beta$ and normalization constant factors. By the Hammersley-Clifford theorem, $h''$ will not contain any couplings between random variables which are conditionally independent, and this property can be established by showing that the CMI vanishes. And by data processing, it is sufficient to consider just $h'$ to determine where new couplings may arise. Ruling out couplings.-The presence of the boundary $B$ around the block $A$ ensures that $R$ creates no couplings within $C$ nor from $A'$ to $C$. Consider two parts $C_1$ and $C_2$ of $C$ which are not already coupled. Thus they are conditionally independent given the remainder $R$ of the random variables comprising the system. Region $A$ is a part of $R$, and the rest we can call $D$ so that $R = AD$. Since $B$ bounds $A$, it must be the case that $D$ shields $C_1$ from $C_2$ and therefore $I(C_1 : C_2|D)_P = 0$. This does not change under application of any map $R$, $I(C_1 : C_2|D)_{R(P)} = 0$, and therefore $C_1$ and $C_2$ are not coupled in $h'$. To show the same thing, the authors of [13] prove instead that $I(C_1 : C_2|A') = 0$ by assuming that long-range mutual information is preserved, i.e. $I(A : C)_P = I(A' : C)_{R(P)}$. That $A'$ will not become coupled to anything in $C$ follows because all the correlations are mediated by $B$. Using the positivity of CMI and data processing, we have
$$0 \le I(A' : C|B)_{R(P)} \le I(A : C|B)_P = 0.$$
Hence, the main concern is couplings between parts of $B$ which may be induced by $R$.
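The shielding arguments above all reduce to checking that a conditional mutual information vanishes, which is easy to do numerically. A minimal sketch, assuming a toy chain built so that $B$ shields $A$ from $C$ (the state-space sizes are arbitrary choices):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def cmi(joint_abc):
    """I(A:C|B) = H(AB) + H(BC) - H(B) - H(ABC) for a 3D joint table."""
    H_abc = entropy(joint_abc.ravel())
    H_ab = entropy(joint_abc.sum(axis=2).ravel())
    H_bc = entropy(joint_abc.sum(axis=0).ravel())
    H_b = entropy(joint_abc.sum(axis=(0, 2)).ravel())
    return H_ab + H_bc - H_b - H_abc

# Build a Markov chain A - B - C: P(a,b,c) = P(a) P(b|a) P(c|b),
# so B shields A from C and I(A:C|B) should vanish.
rng = np.random.default_rng(2)
Pa = rng.dirichlet(np.ones(4))        # block: 4 states
Pb_a = rng.dirichlet(np.ones(3), 4)   # boundary: 3 states
Pc_b = rng.dirichlet(np.ones(2), 3)   # exterior: 2 states
P = np.einsum('a,ab,bc->abc', Pa, Pb_a, Pc_b)

print("I(A:C|B) =", cmi(P))  # ~0 up to floating point
```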
In one-dimensional systems, as depicted in Figure 2, it turns out that the coupling between $B_L$ and $B_R$ is related to the change in mutual information between the block $A$ and the boundary $B = B_L B_R$. If the mutual information is unchanged after $R$, then $B_L$ and $B_R$ are uncoupled in $h'$. This is a consequence of the following more general statement (Theorem 1): for any map $R$, the residual coupling satisfies
$$I(B_L : B_R | A')_{R(P)} \le I(A : B)_P - I(A' : B)_{R(P)}.$$
Typically, no nontrivial map $R$ will precisely preserve the mutual information, for reasons we shall explain in a moment. Nevertheless, minimizing the change in mutual information, by maximizing $I(A' : B)_{R(P)}$ as in (2), minimizes the coupling between $B_L$ and $B_R$. This is because the smaller the CMI, the closer the distribution $R(P)$ is to some $P'$ in which $B_L$ and $B_R$ are conditionally independent, as measured by the total variational distance between distributions (see [18, Lemma 1]). Hence smaller CMI leads to an associated $h'$ with weaker couplings. Somewhat counterintuitively, then, to minimize couplings it is more important to preserve mutual information between a block and its boundary rather than between a block and distant spins. For isotropic systems, we can translate the 1D argument to multiple dimensions by treating a $D$-dimensional isotropic lattice as a 1D system in every direction, as proposed by Lenggenhager et al. [13]. The lattice can be separated into disconnected regions by hyperplanes, creating effectively a 1D system (Figure 3), and the argument of Theorem 1 carries over, so that no couplings will appear between the spins in the boundary strips $B_L$ and $B_R$. Couplings might still appear inside the central strip, but if the system is isotropic we can repeat the same argument with hyperplanes separating the renormalized block from the rest in a different dimension, and expect that if a map maximized $I(A' : B)$ in one dimension, it will do so also in the other dimension. This argument breaks down for non-isotropic systems, as the different directions may have different optimal maps. Before proceeding to examine the two optimizations in more detail, let us remark that a renormalization map which precisely preserves the mutual information can actually be undone by a suitable stochastic map. This accords with the idea that no information is lost along the renormalization flow in this case by assumption, but one does not typically expect renormalization to be reversible. Starting from $I(A' : B)_{R(P)} = I(A : B)_P$ and using the fact that $I(A : C|B)_P = I(A' : C|B)_{R(P)} = 0$, it follows that the total mutual information is preserved, $I(A : BC)_P = I(A' : BC)_{R(P)}$. Then we can appeal to Lemma 1 of [18], which ensures that the so-called "transpose" map or Petz recovery map $\hat{R}$ is such that $\hat{R} \circ R(P) = P$ [19]. The transpose map depends on $R$ and the marginal distribution of $A$ under $P$, but we shall not go into further details here. Optimization.-Computing $I(A : B)$ does not require handling the whole probability distribution, but only the marginal distribution on the $AB$ subsystem. This simplifies the optimization relative to Koch-Janusz and Ringel's proposal, where the distribution on the entire spin system must be treated somehow.

FIG. 3. The dark and light gray strips indicate the blocks that are used when treating the system as one dimensional in each direction, while the square indicates a block to be renormalized. If the renormalization map is optimal, the light gray strips are uncoupled. If the system is isotropic, the optimal maps for the two directions are the same.

As mentioned above, (1) is a relaxation of (2) in that $I(A' : B)_{R(P)} \ge I(A' : C)_{R(P)}$ for every map $R$. In both (1) and (2) the optimal map $R_\star$ is necessarily deterministic, i.e. all its transition probabilities are either zero or one. This follows because the objective function, the mutual information, is a convex function of the optimization variable, the map $R$, and the extreme points of the set of stochastic maps are deterministic maps. Proposition 2. Let $\mathcal{C}$ be the space of channels from $A$ to $A'$. For a fixed probability distribution $P_{AB}$ the function $\mathcal{C} \to \mathbb{R}_+$, $W \mapsto I(A' : B)_{W(P)}$, is convex. Proof. Consider a collection of channels $\{W_z\}_{z \in Z}$ indexed by the values of a finite random variable $Z$ with distribution $Q$.
The average channel $\overline{W}_Z$ is just $\overline{W}_Z(P_{AB}) = \sum_{z \in Z} Q(z)\, W_z(P_{AB})$ for any $P_{AB}$, leading to mutual information $I(A' : B)_{\overline{W}_Z(P)}$. For simplicity, denote $\overline{W}_Z(P)$ just by $P'$. Meanwhile, the average mutual information is given by the CMI $I(A' : B|Z)_{P'}$, since
$$I(A' : B|Z)_{P'} = I(A' : B)_{P'} + I(Z : B|A')_{P'} \ge I(A' : B)_{P'}$$
by the chain rule and $I(Z : B)_{P'} = 0$, and therefore the mapping is convex. When maximizing a convex function over a convex set, the optimum will occur at one of the extreme points [20, Theorem 32.2], which in this case are the deterministic maps [21, Theorem 1]. This simplifies the optimization by making the search space finite. While brute force might still be out of reach for interesting systems, more sophisticated methods such as machine learning techniques can be informed by this fact. The Ising model.-Consider renormalization maps on $2 \times 2$ blocks in the 2D square-lattice Ising model. To investigate which maps are optimal according to (2), we use the Corner Transfer Matrix algorithm [22] to extract the marginal distribution of a $4 \times 4$ block, and we measure the change in mutual information between the central $2 \times 2$ block and its boundary after each of the possible $2^{16}$ deterministic maps taking this block to a single spin. We then compute the change in mutual information for each map over the range of temperatures $\beta \in [0.1\beta_c, 1.9\beta_c]$ and find the optimal map at each temperature. In Figure 4 we show the change in mutual information compared with the minimum value for some common maps: 1. Decimation: the value of the renormalized spin is simply the value of one of the 4 spins in the block. 2. Majority vote: the renormalized spin is assigned the value +1 if the majority of the spins in the block are +1, and vice versa. Ties must be broken for a $2 \times 2$ block; we do this in 4 possible ways: using a predetermined fixed value (i.e. the ties are always resolved with +1 or -1), using one of the spins in the block (hence the map becomes decimation in case of ties), or choosing a value at random. Some of these maps are not symmetric under spin flips, namely the majority vote with fixed-value tie breaker and the biased maps. Which version is optimal depends on the symmetry-breaking low-temperature state that has been selected during the simulation. We call the tie breaker or the biased map "aligned" (denoted ⇈ in the figure) if the relevant fixed value for the renormalized spin is aligned with the magnetization in the symmetry-breaking state, and "antialigned" (⇆) otherwise. At high temperature ($\beta/\beta_c \lesssim 0.3554$), the optimal map is decimation; afterwards, for $0.3554 \lesssim \beta/\beta_c \lesssim 0.6109$, it is majority vote with ties decided by decimation. From that point up to the critical temperature, both versions of the fixed-tie-breaker majority vote are optimal; the aligned version remains so up to $\beta/\beta_c \approx 1.0509$, after which the low-temperature symmetry breaking prevails and the best map is the aligned biased map. Interestingly, majority vote with a random tie breaker is rather far from optimal (it cannot be optimal as it is not deterministic) and fares worse than all other tie breakers except the antialigned one at low temperature.
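A schematic Python version of this brute-force search over the $2^{16}$ deterministic maps is given below. It assumes the block-boundary marginal $P(A,B)$ has already been extracted (e.g. by the corner transfer matrix); the random stand-in with a coarsely binned boundary makes the numbers purely illustrative:

```python
import numpy as np

def mi(joint):
    """Mutual information (nats) of a 2D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    m = joint > 0
    return float(np.sum(joint[m] * np.log(joint[m] / (px @ py)[m])))

# Stand-in for the CTM marginal P(a, b): a = 2x2 block (16 spin states),
# b = boundary configuration (coarsely binned to keep this toy small).
rng = np.random.default_rng(3)
P = rng.random((16, 8)); P /= P.sum()

def coarse_mi(mask):
    """I(A':B) when bit a of `mask` is the output spin for block state a."""
    Pc = np.zeros((2, P.shape[1]))
    for a in range(16):
        Pc[(mask >> a) & 1] += P[a]
    return mi(Pc)

best = max(range(2 ** 16), key=coarse_mi)  # all deterministic maps (slow, pure Python)
print(f"I(A:B) = {mi(P):.4f}, best I(A':B) = {coarse_mi(best):.4f}")

# Common maps for comparison, encoding block state a as 4 bits, one per spin:
decimation = sum(((a >> 0) & 1) << a for a in range(16))
majority_p = sum((bin(a).count('1') >= 2) << a for a in range(16))  # +1 ties
for name, m in [("decimation", decimation), ("majority, +1 ties", majority_p)]:
    print(f"{name}: I(A':B) = {coarse_mi(m):.4f}")
```

In a real calculation the boundary index would range over its full configuration space and the marginal would come from the transfer-matrix contraction; only the enumeration logic is meant to carry over.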
FIG. 4. Difference of the mutual information change for each map above the optimal change, as a function of inverse temperature. Each shaded region indicates which map is optimal in the corresponding interval. Note that while both majority vote maps which break ties aligned (MV-⇈) and antialigned (MV-⇆) with the overall magnetization are optimal in the interval (0.6109, 1), the random tie-breaker map (MV-rnd) is far from optimal.

It can also be seen that decimation performs poorly, especially around the critical point. This is consistent with the observations of [11]. Conclusions.-In this Letter, we argued that maximizing the short-range mutual information between a block and its boundary yields a renormalized system with reduced long-range couplings. In particular, couplings are never introduced beyond the boundary region of the renormalization map, and are suppressed when more of the short-range mutual information is preserved. This gives an information-theoretic account of some aspects of renormalization. The optimization suggested by this approach leads to a simple brute-force algorithm for finding the optimal renormalization map which requires only the probability distribution of the input region of the map and its boundary. It is efficient enough for small systems, as demonstrated in the 2D Ising model. Further work is required to explore the robustness of this result when information is only approximately preserved, perhaps by using an approximate generalization of the Hammersley-Clifford theorem. Our approach contrasts with the focus of [12] and [13], which maximizes the long-range mutual information with the dual goals of capturing the relevant degrees of freedom and reducing long-range couplings. The fact that their long-range mutual information optimization is a relaxation of our short-range optimization implies some connection between these goals: if we view extracting the relevant information as the primary justification for the long-range optimization (an intuitively very plausible statement), then it will necessarily do this by minimizing long-range couplings in the renormalized Hamiltonian to some extent. The open question is how much. It would therefore be interesting to investigate under what conditions or in which models the optimal renormalization maps of the two approaches actually coincide. To this end it would also be interesting to modify the RSMI algorithm to focus on short-range mutual information, as exact optimization is computationally difficult for more complicated models. In either scenario one may also be able to take into account the fact that the optimal renormalization map is necessarily deterministic. Finally, it should be noted that the focus on short-range versus long-range information here is reminiscent of the relation between the Tensor Renormalization Group (TRG) [23] and the Tensor Network Renormalization (TNR) [24] algorithms. The latter is a refinement of the former in which the additional steps are meant to remove short-range correlations, improving the algorithm near the critical point. Here the setting is block-spin renormalization, i.e. maps on the physical degrees of freedom and not on the tensors in the tensor-network description, but again the focus is on the short-range couplings. It would be interesting to investigate whether information-theoretic methods can be used to devise tensor network algorithms.
Building Inclusive Development for People with Disability in Post-Pandemic Era to Pursue ASEAN Community 2025: Learning from the Asia-Pacific Development Center on Disability (APCD)

This qualitative research, conducted with the case study method, tries to point out the impact of COVID-19 on the employment conditions of disabled persons and what ASEAN might consider adopting from the Asia-Pacific Development Center on Disability (APCD) as it develops disability policy. ASEAN itself has many legal commitment papers on establishing an inclusive community, particularly for people with disabilities. COVID-19 has hit multiple sectors of development in ASEAN, which now faces bigger challenges in establishing an inclusive environment to achieve the ASEAN Community 2025 due to the economic slump and mass unemployment. The study found that the pandemic's negative effect doubled when it hit people with disabilities and required a rapid response. ASEAN established the Comprehensive Recovery Framework (ACRF), a set of principles and guidelines, to respond to the challenge. However, it lacks procedures for implementing the principles and guidelines at the regional level. ASEAN may learn from the APCD program, the 60+ Plus Project, as policy implementation guidelines for the ASEAN Member States.

I. Introduction

Persons with disabilities are one of the groups most likely to be left behind, encountering a range of barriers including limited opportunities for health, education, and employment. Approximately 1 billion people, or 15% of the world's population, live with some form of disability, and 80% of them live in developing countries. It is estimated that 1 in every 6 people in Asia and the Pacific, about 690 million people, live with a disability. These 690 million people include individuals with physical disabilities; those who are blind or have low vision, are deaf, or are hard of hearing; those with learning disabilities, cognitive or developmental disabilities, or psychosocial disabilities, or who are deaf-blind; and those with multiple disabilities (Crosta & Sanders, 2021). The qualitative case study is used to help explore the impact of COVID-19 on the employment conditions of the disabled group. Various data sources, including ASEAN legal policy documents related to the current guidelines for the post-pandemic period, statistics, and official websites of various world actors concerned with disability inclusion, have been used to provide multiple perspectives. In the end, as the outcome of the policy and practice analysis between ASEAN and APCD, learning points can be recognized. The commitment of Asia and Pacific countries to the disability agenda after the adoption of the CRPD (Convention on the Rights of Persons with Disabilities) became more tangible with the Incheon Strategy to "Make the Rights Real" for persons with disabilities in ASEAN (M. Lusli, 2010). All 10 ASEAN countries have signed and ratified the UNCRPD and are now responsible for its implementation in their countries (Cogburn & Reuter, 2017). In 2013, the ASEAN Declaration on Strengthening Social Protection stated that people with disabilities should have equitable access to social protection. The Declaration calls upon the ASEAN Member States (AMS) to support this principle by adopting it into their national policies, strategies, and mechanisms to ensure the implementation of social protection programs, as well as a tangible targeting system to ensure that social protection services reach those who need them most.
Furthermore, in 2015, the Kuala Lumpur Declaration on a People-Oriented, People-Centered ASEAN promoted the protection of the rights of people with disabilities and the facilitation of their interests and welfare in ASEAN's future agenda (ASEAN Summit 33rd, 2018). The plan's purpose is to mainstream the rights of people with disability across the three pillars by providing a framework for the integration of persons with disabilities across sectors (Singh, 2022). The commitment to include people with disability in the ASEAN Community agenda emerged as the response of ASEAN, as the key regional actor in Southeast Asia, and as a contribution to global development under the 2030 Agenda on Sustainable Development. COVID-19 was declared a pandemic in March 2020, and since then world conditions have been disrupted and filled with uncertainty and crisis. Neither developed nor developing countries were secure from the rapid changes caused by COVID-19, which generated a global multi-dimensional crisis: a health, economic, and social disaster.

II. Conditions of People with Disabilities in ASEAN

The Southeast Asia region has encountered experiences similar to those of the rest of the world. Figure 1 shows that, after expanding by an average of 5.3% over the last decade, the ASEAN region was projected to contract by 3.8% in 2020, the first decline in economic growth in 22 years. Economically, in 2020, trade and investment in ASEAN were impacted by the pandemic. ASEAN's trade fell by 12.4% and FDI inflows by 32.9% compared to the previous year (ASEAN, 2020). Furthermore, an economic catastrophe was unavoidable as the world economy fell off due to the pandemic. The COVID-19 crisis led to mass layoffs and contributed to rising poverty and inequality around the globe. The International Labour Organization (ILO) estimated that global working hours fell by around 5.4% in the first quarter of 2020 compared to the last quarter of 2019, and the number worsened to 14% in the second quarter. The situation in ASEAN is even more delicate given the high levels of informality, uneven social security nets, and structural dependence on highly labor-intensive sectors in some AMS. Overall, job losses appear in rising unemployment rates across ASEAN in the second quarter of 2020 (ASEAN, 2020). In detail, Figure 2 shows unemployment in ASEAN rising from 2.5% in 2019 to 3.1% in 2021, stalling the 20-year achievement of the region in terms of labor force participation (ASEAN, 2021). Unfortunately, little data on the disability unemployment rate before and during the COVID-19 period can be found to pinpoint the impact exactly. It is assumed that the numbers shown in Figure 2 include the unemployment status of people with disability. Even so, the unemployment rate among people with disability may be higher than the data show, because people with disabilities often do not register as either employed or unemployed, which means they are often invisible in labor market statistics and likely to be overlooked in policy initiatives (UN ESCAP, 2020). Moreover, the employment prospects of individuals with disabilities tend to be poor: they are likely to be in low-paying jobs in the informal economy without social protection, to be involved in corporate social responsibility programs, or to be self-employed. It is estimated that three quarters of employed persons with disabilities work in the informal economy, with informal workers accounting for 28 to 92 percent of the labor force across developing countries.
This positioned people with disabilities at a more critical point and broadened inequality at the worst possible moment. It is estimated that 1 in every 6 people in Asia and the Pacific, about 690 million people, live with a disability. The emergence of the COVID-19 pandemic in 2020 significantly affected multiple dimensions of social life for the majority of society and made vulnerable groups suffer more due to existing gaps in the status quo. People with disabilities, in general, have experienced poorer health outcomes, lower access to education, reduced services and support, and increased violence and abuse throughout the pandemic. Furthermore, persons with disabilities are not fairly represented in the workforce in Asia and the Pacific. As an outcome, people with disabilities have less overall consumption and contribute less toward economic growth. Persons with disabilities are systematically excluded from equal access to work across the region. According to UNESCAP, persons with disabilities work less or earn less because of the barriers they face. The domino effect of being excluded from the system has triggered a long record of social injustice. People with disabilities most likely experience discrimination in their daily lives in many sectors even in normal conditions, and this was aggravated by the pandemic. Evidence shows that people with disabilities faced more threats due to the pandemic, such as higher rates of infection and death from COVID-19, less access to healthcare information, worsened mental health, lack of involvement in response planning, loss of income and poor assistance, reduced access to disability support and services, increased gender-based violence, and inaccessible remote learning (Crosta & Sanders, 2021). The prevalence of specific disabilities varies among the AMS, yet the obstacles people with disabilities face are similar. Despite the different variables among the AMS, they have similar weaknesses, such as inadequate legislation, unequal employment, and inadequate physical access, which is correlated with education accessibility. Meanwhile, studies show that if persons with disabilities were paid on an equal basis with their colleagues without disabilities, the GDP of many Asian and Pacific countries could increase by 1 to 7% (Crosta & Sanders, 2021). Based on this finding, the contribution of people with disabilities might be one of the keys to supporting the region's economic recovery from the pandemic if they were included in the mainstream, with more chances and access. ASEAN has designed the ASEAN Comprehensive Recovery Framework (ACRF) to provide a consolidated exit strategy for ASEAN to emerge resilient and strong from the COVID-19 crisis. The ACRF observes 6 key principles: focused, balanced, impactful, pragmatic, inclusive, and measurable (ASEAN, 2020). ASEAN applied the principles in 3 phases: re-opening, recovery, and resilience. First, in brief, re-opening concerns the smooth transition from lockdown and rigid social restrictions into "new normal" conditions while maintaining the health procedures needed to prevent further COVID-19 waves. Secondly, recovery concerns supporting the sectors' return to their pre-COVID-19 potential, focusing on assisting sectors and groups that have been affected by the pandemic, such as tourism, micro, small and medium enterprises, and vulnerable groups (ASEAN, 2020).
The third phase concerns building resilience in society towards unprecedented crises, addressing fundamental vulnerabilities within economies and societies. Indirectly, inclusive development for vulnerable groups has been embedded in the 5 strategies of the ACRF shown in Figure 3. In the short term of the ASEAN Comprehensive Recovery Framework, embedded in the further strengthening and broadening of social protection and social welfare for vulnerable groups, social assistance programs need to be continued and scaled up. The proposal is necessary not only to mitigate the socioeconomic risk of the pandemic at the individual level, but also to keep domestic consumption going. One key challenge is to ensure that social assistance is accessible to those without social security or unemployment benefits. Accessibility to social care services should also be ensured, especially for those facing higher risks during lockdown and containment measures owing to their age, gender, disability, economic status, and other factors (ASEAN, 2020). ASEAN recovery efforts must follow inclusive principles and cover the intersection of age, disability, and gender in designing measures. Prioritizing human rights and the protection of vulnerable groups and sectors cannot be compromised. To sum up, COVID-19 has positioned ASEAN in an uncomfortable situation along multiple dimensions, from economic instability to a wave of mass unemployment, and it affects the region's path to establishing an equal environment for vulnerable groups. However, insufficient disaggregated data hampers a deeper analysis of the socioeconomic impact of the pandemic on people with disabilities. ASEAN has, directly and indirectly, stated inclusive development for vulnerable groups in many legal papers, yet full and integrated programs remain the main issue. Thus, the framework offers a very general overview of the inclusive development principle but lacks implementation policy development. Considering the hidden economic potential for domestic growth, building accessibility and integration programs for people with disabilities is a vital foundation. Not only to tackle inequality but also to achieve rapid recovery, a double effort will be needed due to the pandemic's development setbacks and stagnation.

IV. Asia-Pacific Development Center: The 60+ Plus Project

APCD was established in Bangkok, Thailand on 31 July 2001 as a legacy of the Asian and Pacific Decade of Disabled Persons, 1993-2002. APCD was endorsed by the United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP) as a regional collaboration between Thailand and JICA. ESCAP also identified APCD as the regional center on disability and development for the Incheon Strategy to Make the Right Real, 2013-2022 (APCD Foundation, 2021). The main mission of APCD is to nurture the capacities of persons with disabilities and to establish Community-Based Inclusive Development, Disabled Persons' Organizations, and Disability-Inclusive Business as agents of change, while its vision is to promote an inclusive, barrier-free, and rights-based society for persons with disabilities and their organizations through empowerment programs.
After operating for 21 years, APCD has trained more than 7,000 persons with disabilities and stakeholders in the Asia-Pacific region. APCD provides capacity building and training not only for people with disabilities but also for parents who have disabled family members and for staff who interact with disabled persons. Moreover, APCD cooperates with more than 30 countries on disability and development. To achieve an inclusive community, people with disabilities must be independent and capable of empowering themselves and leading community-based development to support the entire agenda of inclusive development. Networking and collaboration among disability organizations and stakeholders are among APCD's main missions. Armed with a sophisticated network and various collaborations, people with disability will have more access to be actively involved in and contribute to the community. Considering how the issue has been left behind among other inequality concerns, even without the pandemic, it is essential to build networks, initiate collaboration, and share experiences to mainstream the issue of inequality among people with disabilities. Working closely with people with disability and considering them resourceful individuals is the principle of APCD's capacity development (CD) project, since it is important to build strong self-esteem in people with disabilities, considering the common societal prejudice against them. Once persons with disabilities got involved in their communities and began to work towards a barrier-free society, they came into closer contact with non-disabled persons, including their families and community members, local government officials, and even policymakers at the central government level. In this way, the APCD Project also created a comprehensive and multilayered CD impact (JICA & IFIC, 2008). Through effective training and capacity building, people with disabilities and stakeholders will be empowered with skills, knowledge, and a positive attitude toward disability and community development (APCD Foundation, 2021). The 60+ Plus Bakery & Chocolate Café Project is APCD's main activity. One of the settled projects is a disability-inclusive business in the food industry, which aims to develop the inclusive business skills of persons with disabilities in society, as well as to provide sustainable on-the-job training and an inclusive environment for them. The project supports them in becoming professional bakers and chocolatiers, shopkeepers, and entrepreneurs based on Disability-Inclusive Business and Inclusive Entrepreneurship. This initiative is implemented by APCD, with support from the United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP) and other partners, as part of the Incheon Strategy to 'Make the Right Real' for Persons with Disabilities in Asia and the Pacific. Through collaboration between the Ministry of Social Development and Human Security of Thailand, the Embassy of Japan, Thai-Yamazaki Co., Ltd., and APCD, the 60+Plus Bakery has been established. The project aimed for youths with disabilities to produce and sell baked goods at the shop. Another project with a food-based workshop is 60+Plus Chocolatier by MarkRin, initiated by APCD and MarkRin Co., Ltd. (Home | Asia-Pacific Development Center on Disability, n.d.). Almost 100 trainees with various disabilities (visual, hearing, physical, psychosocial, intellectual, learning, and autism) have participated in the Thai Yamazaki and Chocolate training since 2015.
Trainees have been employed by Thai Yamazaki in various branches in Bangkok, while others have been hired by cafés (i.e., Café Amazon, Black Canyon), hotels (i.e., Chaophaya Hotel), hospitals (i.e., Siriraj Hospital), and schools (i.e., Anglo Singapore International School). Trainees are also hired by the 60+Plus Bakery & Chocolate Café or start their entrepreneurship careers in food businesses. As the follow-up to the 60+Plus Chocolatier and 60+Plus Bakery workshop programs, APCD established the 60+Plus Bakery & Chocolate Café. The original 60+Plus Bakery & Chocolate Café, in the compound of the Rajvithi Home for Girls where APCD is located, was established at the end of December 2018. The café, run by persons with disabilities, has become a prominent and down-to-earth role model for how to include people with disability in the community, in Thailand and internationally. As a result of this success, APCD was invited to open another branch of the 60+ Café in the Government House. The opening of the new branch in the Government House will increase the number of employment opportunities and real-life training facilities for persons with disabilities (Home | Asia-Pacific Development Center on Disability, n.d.).

Fig. 4. End-to-End Inclusive Program Mapping

The end-to-end development principle applied by APCD in the 60+ Project has become the key to building inclusive development for people with disabilities. Overall, the end-to-end program requires many actors to work together on the same track, from start to finish, to empower people with disabilities through human development, to provide inclusive access and/or facilities, and to promote the program. The cooperation between APCD, the Ministry of Social Development and Human Security as the representative of the Thai government, the Embassy of Japan and UNESCAP as international actors, and the business sector represented by Thai-Yamazaki Co., Ltd. and MarkRin Co., Ltd. has succeeded in establishing an end-to-end program for the inclusive development of people with disability. The program started with a workshop not only for people with disabilities but also for the disability staff who work within the community, and it was supported by government commitment and collaboration with the business sector. The 60+ Project equips people with disabilities with specific skill sets in the F&B sector. People with disabilities obtain intensive training in F&B services, from making to marketing. F&B, as one of the sectors closest to human services, is considered a strategic sector to begin with, giving people with disability more exposure and raising public awareness of their equal capability to provide services. Furthermore, due to unequal access to many sectors since birth, people with disability cannot properly develop their potential and most likely have low self-esteem as well. Training them with useful skills will increase their specialization and self-esteem as the first step to competing in the labor market (Figure 4). The effort to actively include people with disability in the business sector is acknowledged as an effort at economic redistribution. The core problem with fully including people with disability in the economic sector, which later becomes one of the main reasons to increase their independence and self-esteem, is that employers are less likely to hire people with disabilities and prefer persons without disabilities. Therefore, it is essential to provide a place or facility where people with disabilities can show what they are capable of after finishing the workshop (Figure 4).
If there is no follow-up program after the workshop is finished, the resolution to achieve equality between persons with and without disabilities will end up in vain. Bear in mind that unequal access and the absence of fruitful networks are the main obstacles preventing disabled persons from exploring their potential. Positive discrimination in favor of people with disabilities' access to the workforce is needed due to the current condition of inaccessibility. The prejudice that people with a disability are unable to work as efficiently as people without disability due to their disabling conditions has to be eradicated. It is vital for non-disabled persons in general to recognize that the status of "disabled" is no barrier to a person with disability providing equal service. If the majority of employers find it difficult to hire people with disabilities, then a place and/or facility should be created where people with disability can perform fully within their conditions, while at the same time promoting cooperation with companies that may employ people with disabilities. Humans are reluctant to believe in invisible changes, so it is important to show them that humans with disabilities have equal abilities, while having people with disabilities in a workplace full of people without disabilities will promote an inclusive environment as part of mainstreaming the issue. Following the efforts above, the next step is promoting the program as a community development endeavor. A successful project, such as the 60+ Project, will most likely become a favorable case study. At this stage, media exposure plays an important role in gaining mass public awareness and building community esteem among people with disabilities. The process of spreading good news will contribute to reconstructing the definition of self-sufficiency widely, given that the current notion of self-sufficiency bears a large share of the responsibility for discrimination against people with disability in workforce selection procedures. Employers prefer to hire a non-disabled person based on the argument that efficiency in the workplace can be achieved if the workers have proper self-sufficiency (the absence of physical and intellectual disability). The argument fails to capture the root problem. Even people without disabilities need a set of facilities to support their potential to become fully efficient workers. Therefore, instead of a set requirement to meet a certain level of self-sufficiency during recruitment, the real question is how employers provide facilities to support their workers' efficiency. APCD actively works on promoting the 60+ Project, as a role model for establishing an inclusive development model, through seminars, TV shows, and meetings with various actors both domestically and internationally. To sum up, persons with disabilities in ASEAN still face difficulties in accessing fair employment and assistance, whether in their respective fields or in entrepreneurship. This is strengthened by the ESCAP report that the gap in inclusive development in ASEAN almost always leads back to the lack of financial investment in accessibility, as well as a dearth of innovative investment forms outside of monetary values, ranging from high-level commitment and institutional buy-in, or the creation of strong legal accessibility frameworks, to the development of human resources, as well as to the development of strong partnerships among governments and policymakers, organizations and other stakeholders (Sano, 2021).
ASEAN may adopt the end-to-end development model to improve the region's capability to achieve the ASEAN Inclusive Community Masterplan in 2025, given that APCD has a tangible project that answers the challenge of building and providing equal access, as well as a network, for people with disabilities. Indeed, the final policy product and its implementation will, in the end, be the responsibility of the AMS. An end-to-end program such as the 60+Plus Project by APCD will automatically answer the world agenda for inclusive development for vulnerable groups. As explained above, the hidden economic potential of people with disability is still underexplored and has even shrunk due to the economic crisis caused by the pandemic. Therefore, offering a clear path to developing an inclusive environment for people with disabilities, which positively impacts economic inclusivity, is considered a necessary option. A development framework with sharp and clear methods will be easier to adopt and further adjust by the AMS, especially in the current conditions where recovery from the pandemic has become a priority for vulnerable groups.
Single shot, three-dimensional fluorescence microscopy with a spatially rotating point spread function: A wide-field fluorescence microscope with a double-helix point spread function (PSF) is constructed to obtain the specimen's three-dimensional distribution with a single snapshot. Spiral-phase-based computer-generated holograms (CGHs) are adopted to make the depth-of-field of the microscope adjustable. The impact of system aberrations on the double-helix PSF at high numerical aperture is analyzed to reveal the necessity of aberration correction. A modified cepstrum-based reconstruction scheme is proposed in accordance with the properties of the new double-helix PSF. The extended depth-of-field images and the corresponding depth maps for both a simulated sample and a tilted section slice of bovine pulmonary artery endothelial (BPAE) cells are recovered, respectively, verifying that the depth-of-field is properly extended and that the depth of the specimen can be estimated at a precision of 23.4nm. This three-dimensional fluorescence microscope, with a time resolution on the order of the camera frame rate, is suitable for studying fast-developing processes of thin and sparsely distributed micron-scale cells over an extended depth-of-field. Introduction Three-dimensional (3D) optical microscopy capable of distinguishing depth discrepancies of the concerned components is drawing increasing interest in modern bio-medical imaging [1][2][3][4]. Conventional 3D imaging solutions are mostly implemented by successively scanning the focus of the imaging system through the axial region of interest in the specimen. The simplest and most representative technique is the focal-stack (FS) method [5,6], which needs no hardware modifications in a routine microscope. The extended depth-of-field (DOF) image and its depth information at every transverse position can be recovered with focus-recognition algorithms. To obtain higher sectioning power and signal-to-noise ratio (SNR), a variety of novel technologies have been developed, including laser scanning confocal microscopy (LSCM) [7,8], structured illumination microscopy (SIM) [9][10][11], and light-sheet fluorescence microscopy (LSFM) [3,12], etc. These technologies differentiate the in-focus and out-of-focus components by introducing special illumination schemes, which eases the recovery process and eliminates algorithm-related errors when extracting the all-in-focus image and its depth map. However, the setups of these methods are complex and generally require elaborate calibration. In addition, the requirement of axial scanning in these systems severely decreases the time resolution, limiting their application in dynamic situations. In contrast to active-illumination methods that scan the focal plane through the specimen, point spread function (PSF) engineering methods attempt to obtain an all-in-focus image within a single snapshot by producing an elongated PSF, an approach first implemented by Dowski et al. [13]. By generating a depth-invariant PSF, the extended DOF image of the object can be directly obtained via image deblurring algorithms. However, the recovered image suffers from abundant artifacts and contains subtle transverse translations [14] for 3D structures, and the object's depth information is discarded. Although several improved reconstruction algorithms [15,16] have been reported to suppress the deconvolution artifacts, none of them solves the problem fundamentally. Several years ago, Zammit et al.
[17] proposed an ingenious complementary kernel matching imaging method to recover the depth information from the transverse translation and to cancel the transverse translation in return. Unfortunately, this method requires switching between two complementary phase masks, which sacrifices either the imaging speed or the system simplicity [18]. Recently, a single shot 3D imaging system was developed by Berlich et al. [19], in which a single thin, custom-built phase element was used to generate a spatially rotating PSF (termed the double-helix PSF, DH-PSF), which, combined with the cepstrum-based reconstruction algorithm they proposed, was able to realize fast 3D imaging. This setup was built on a macro-scale imaging system with limited magnification, and the depth-of-field of the system was fixed. In this paper, a wide-field fluorescence microscope with high numerical aperture (100×, NA = 1.25) is constructed, extending the single shot, three-dimensional imaging concept to high numerical aperture fluorescence microscopy. A variable computer-generated hologram (CGH) is employed to generate the double-helix PSF, making the depth-of-field of the system adjustable. The impact of system aberrations on the double-helix PSF at high numerical aperture is analyzed to reveal the necessity of aberration correction. A modified cepstrum-based reconstruction scheme is also presented to accommodate the new double-helix PSFs, significantly improving the precision of the depth estimation while simultaneously restraining the artifacts of deconvolution. The imaging performance is verified by respectively observing fluorescent beads and bovine pulmonary artery endothelial (BPAE) cells. Figure 1 illustrates the optical configuration of the single shot, three-dimensional fluorescence microscope. The illumination beam from a solid-state laser (λ = 491nm, Calypso 491, Cobolt AB Inc., Sweden) is expanded by a factor of twenty-five with a telescope system consisting of Lens1 (f = 10mm) and Lens2 (f = 250mm). The collimated beam is then focused towards the back focal plane of the infinity-corrected objective lens (100×, NA = 1.25, Nikon Inc., Japan) to provide epi-illumination at the focal plane. The fluorescence emission light from the illuminated specimen is collected by the same objective. The pupil phase of the emission light can be modulated by simply loading specific CGHs onto the spatial light modulator (SLM, 1920 × 1080 pixels, Pluto II, HoloEye Photonics AG, Germany), the back focal plane of the objective being relayed onto the SLM with a 1:1 4f system composed of Lens4 and Lens5. In addition, a linear polarizer is placed before the 4f system to block the vertical polarization component rejected by the SLM. The modulated light is then filtered by a bandpass emission filter (520 ± 22nm, Semrock Inc., USA) and collected by the zoom lens, and the specimen is eventually imaged onto the cooled charge-coupled device (CCD) camera (3296 × 2472 pixels, 14 bit, 8051M-GE-TE, Thorlabs Inc., USA). Fig. 1. Schematic of the single shot, three-dimensional fluorescence microscope. The expanded and collimated laser beam is projected in parallel onto the specimen as an epi-illumination source. The wavefront of the fluorescence beam emitted from the specimen is modulated by the spatial light modulator, which is later focused onto the CCD camera. The inset shows an example of the loaded CGH on the spatial light modulator. DM: longpass dichroic mirror (λc = 500nm), SLM: spatial light modulator, M1-M8: mirrors.
Generation of the double-helix PSF PSF engineering, achieved by placing a phase mask at the pupil plane of the imaging lens to encode the wavefront emerging from an imaging system, has been a routine technology in extended DOF imaging during the past decades [20][21][22][23]. The performance of the PSF-engineered system is mostly dominated by the adopted CGH and the aberrations in the optical system. Since Greengard et al. [20] introduced the rotating PSF in 2006, the double-helix PSF has enjoyed great attention in multiple imaging technologies [24,25] due to its high performance in depth estimation. The mainstream CGHs to generate the double-helix PSFs in these applications are generally derived from Gaussian-Laguerre (GL) modes, which were proved to be energy-efficient [26]. Unfortunately, the extended range of the PSF generated with this method is limited and cannot be fully controlled by varying the superimposed GL modes. Superior to the GL-mode-based approach, a novel approach based on the spiral phase (SP) for generating rotating PSFs [27][28][29] was claimed to be capable of providing more compact main lobes, while the rate of rotation can also be easily adjusted by changing the number of Fresnel zones. The outstanding feature of this approach makes it possible to obtain higher lateral resolution as well as an adjustable range and sensitivity of depth estimation by simply changing the loaded CGHs on the SLM. The simplified form of the CGHs to generate double-helix PSFs is shown in the equation below: P_N(ρ,φ) = lφ (mod 2π), for √((l−1)/N) ≤ ρ/R < √(l/N), l = 1, 2, …, N, where P_N(ρ,φ) denotes the phase modulation at the polar coordinates (ρ,φ), R presents the maximal radius of the selected aperture, and N is the total number of the Fresnel zones. The phase mask is composed of a sequence of radial samplings of the spiral phase into the Fresnel zones. The rotation rate of the rotating PSF can be easily controlled by tuning the number of azimuthal phase sections [29]. To demonstrate the effects of CGH type on the double-helix PSF under high numerical aperture, the intensity distributions of the 3D PSFs were numerically calculated using Fresnel diffraction theory. Without loss of generality, we selected a tailored CGH superimposed by five GL modes with respective indices of (2,2), (4,6), (6,10), (8,14) and (10,18) as an example of the GL-mode-based approach (GL CGH), similar to that used in Ref. [19]. To display the changing tendency of the PSFs generated by the spiral-phase-based approach (SP CGH), total numbers of Fresnel zones of 4, 6 and 8 are adopted. In keeping with the parameters of the single shot, three-dimensional fluorescence microscope shown in Fig. 1, the numerical aperture of the objective is set as 1.25 and the central wavelength is selected as 515 nm, which approximates the emission wavelength. The simulation results are presented in Fig. 2. As expected, the rotation rate of the double-helix PSF with varied amounts of defocus decreases with the number of Fresnel zones, which leads to an opposite change of the PSF's extended range (Fig. 2(a)). Quantitative analysis of two key features of the PSFs, namely the rotation rate and the distance between the two main lobes, further demonstrates the superiority of the spiral-phase-based CGHs (Fig. 2(b) and 2(c)). When the rotation rate is analyzed as a function of axial position, the GL-mode-based PSFs deviate from linearity at high and low positions.
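As a concrete illustration of this zone-sampled spiral phase, the following minimal Python sketch builds such a mask on a unit-radius pupil. The equal-area zone boundaries at ρ/R = √(l/N) and the grid size are illustrative assumptions, not necessarily the exact CGH layout loaded onto the SLM here.

```python
import numpy as np

def spiral_phase_cgh(n_pixels=512, n_zones=6):
    """Zone-sampled spiral phase mask P_N(rho, phi) on a unit-radius pupil.

    Assumed construction: N equal-area Fresnel zones bounded at
    rho = sqrt(l/N), the l-th zone carrying an l-fold spiral phase
    l*phi (mod 2*pi).
    """
    y, x = np.mgrid[-1:1:1j * n_pixels, -1:1:1j * n_pixels]
    rho = np.hypot(x, y)
    phi = np.arctan2(y, x)
    phase = np.zeros_like(rho)
    for l in range(1, n_zones + 1):
        zone = (rho >= np.sqrt((l - 1) / n_zones)) & (rho < np.sqrt(l / n_zones))
        phase[zone] = (l * phi[zone]) % (2 * np.pi)
    phase[rho >= 1.0] = 0.0  # no modulation outside the pupil aperture
    return phase

# e.g. the N = 6 mask used in most of the simulations below
mask = spiral_phase_cgh(n_zones=6)
```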
In contrast to the GL-mode-based PSFs, the rotation rate of the spiral-phase-based PSFs is linear at all axial positions, allowing a more convenient and precise conversion between the rotation angle and the depth. In addition, the inter-lobe distance of the spiral-phase-based PSF is unaffected by defocusing, whereas for the GL-mode-based PSFs this distance varies by as much as 400nm, especially at high and low positions (Fig. 2(c)). It is also worth mentioning that although the extended range of the PSF increases with the number of Fresnel zones, the sizes of the main lobes enlarge, which leads to a resolution loss of the imaging system. This is mainly due to the linear increase in radial FWHM with increasing number of Fresnel zones, even though the angular FWHM remains invariant (Fig. 2(d)). As a consequence, it is more reasonable to approximate the main lobes of the PSF with an elliptical Gaussian function than with a circular Gaussian function in the restoration process, as discussed below. Image formation and restoration For the designed wide-field microscope, the image captured by the camera can be modeled as i(ξ,η) = ∫ o(x,y;z) * h(x,y;ξ,η;z) dz + n(ξ,η), where i(ξ,η) is the collected image, o(x,y;z) presents the object's luminance distribution in three dimensions, h(x,y;ξ,η;z) indicates the PSF of the system at the object point (x,y,z), and n(ξ,η) denotes the additive noise. The symbol * denotes the lateral convolution of the object and the corresponding PSF. If the system satisfies the isoplanatic condition, h(x,y;ξ,η;z) remains invariant with (ξ,η). For the modulated double-helix PSF in our system, the shape and the separation of the main lobes can be treated as constant. Thus, the intensity distribution generated by a point-like object at depth z can be approximated by h(x,y;z) ≈ rot(h_0(x, y − Δy/2; 0) + h_0(x, y + Δy/2; 0), k_θ(z)·z), where h_0(x,y;0) presents one of the main lobes of the PSF at the focal plane, k_θ(z) describes the linear dependency between the defocus distance z and the rotation angle θ, Δy indicates the distance between the two main lobes, and rot(h,θ) is the function that rotates the PSF h by an angle θ around the center. Under the elliptical Gaussian approximation of the main lobes, h_0(x,y;0) can be further estimated as h_0(x,y;0) ≈ exp(−x²/(2σ_a²) − y²/(2σ_r²)), where σ_a and σ_r are dimensional parameters that can respectively be derived from the angular FWHM and the radial FWHM of the main lobes. Different from the image obtained with a standard Gaussian PSF, which contains only a single image, the coded image obtained with the double-helix PSF consists of twin blurred images whose relative orientation depends on depth. By decoding the orientation of the twin images at each transverse position, the object can be recovered using deconvolution algorithms with the corresponding PSF estimated from the decoded depth. The image restoration process in this paper is based on the framework proposed by Berlich et al. [19]. Some modifications are introduced to accommodate the new double-helix PSFs. The flowchart of the process is illustrated in Fig. 3. Before restoration, the key parameters of the corresponding double-helix PSF are experimentally measured, including the rotation rate, the inter-lobe distance, and the radial and angular dimensions of the main lobes. In addition, preprocessing steps such as background removal and denoising are taken to minimize the impact of white noise. To guarantee a seamless recombined image and minimize the ring artifacts caused by deconvolution of sharp edges, sliding two-dimensional Hann windows are adopted to divide the field-of-view into sub-windows.
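Before turning to the sub-window processing, the twin-lobe PSF model above can be made concrete in code. The sketch below assembles the double-helix PSF from two elliptical Gaussian lobes separated by Δy and rotated by θ = k_θ(z)·z; it is a minimal illustration under the stated Gaussian-lobe assumption, since the measured lobes are only approximately Gaussian.

```python
import numpy as np

def dh_psf_model(x, y, z, k_theta, dy, sigma_a, sigma_r):
    """Twin elliptical-Gaussian model of the DH-PSF at defocus z.

    x, y:             coordinate grids in the image plane
    sigma_a, sigma_r: widths from the measured angular and radial FWHMs
                      (sigma = FWHM / (2 * sqrt(2 * ln 2)))
    """
    theta = k_theta * z
    # rotating the coordinates by -theta rotates the PSF by +theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)

    def lobe(yc):
        # angular width along x, radial width along the lobe-separation axis y
        return np.exp(-xr**2 / (2 * sigma_a**2) - (yr - yc)**2 / (2 * sigma_r**2))

    psf = lobe(+dy / 2) + lobe(-dy / 2)
    return psf / psf.sum()  # normalize to unit total energy
```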
It is assumed that all objects within a sub-window are located at a uniform depth. Each sub-window is processed separately, and the final image is recovered by putting all the sub-windows together. The restoration process of each sub-window includes depth estimation and image reconstruction. Fig. 3. Flowchart of the extended depth-of-field image recovery and the depth estimation process. The captured raw image is divided into a series of sub-windows which are processed in parallel. The restoration process of each sub-window includes depth estimation and image reconstruction. The depth information is estimated by determining the polar angle corresponding to the most densely populated cepstrum in a specific ring-shaped window. Meanwhile, the PSF corresponding to this sub-window can be estimated from the depth. The image is recovered by deconvolution with the estimated PSF. The depth map and image of the whole field-of-view can be recovered by ordered recombination of the results from all sub-windows. As demonstrated in Ref. [19], the cepstrum-based algorithm is suitable for decoding the orientation of the depth-related-oriented twin images. The calculated cepstrum of the windowed image shows two centrosymmetric peaks within a ring-shaped window specified by the polar radius [0.8Δy, 1.2Δy], and the polar angle of the two peaks exactly equals the orientation of the double-helix PSF in the corresponding sub-window. However, these two peaks are often submerged by noise if the current sub-window is short of object features, which significantly impacts the recognition precision of the polar angle. To address this issue, a threshold (half of the maximum) is employed to wipe out the majority of the noise, which mostly remains at a low level in the cepstrum. Afterwards, the cepstrum's distribution density as a function of the polar angle is calculated to portray the concentration of the cepstrum. Supposing the high-level noise is distributed randomly, the polar angle of the two peaks can be found at the maximum of the distribution density. The depth of the object is then easily calculated by dividing the obtained polar angle θ_max by the pre-measured parameter k_θ(z). Once the depth is determined, the corresponding PSF of the sub-window can be obtained using the measured parameters of the PSF, as described in Eq. (3) and Eq. (4). Afterwards, the image of each sub-window can be recovered by deconvolution. It is worth noting that an improper deconvolution algorithm will lead to significant artifacts in the recovered image. In our experiment, we found that the Richardson-Lucy algorithm with a suitable number of iterations performs much better than the Wiener-type filter used in Ref. [19] at suppressing artifacts, owing to its non-negativity prior. A simple performance comparison of these two algorithms is shown in the experimental results. In the end, after all the sub-windows are processed, a depth map and an extended DOF image can be recovered by recombining the results in order. In the following, we simulate the procedure of image formation and restoration to reveal the effect of the proposed algorithm. The simulated object is shown in Fig. 4(a). The six lines of text are located at different depths so that they become increasingly out-of-focus toward the bottom. The simulated images with the conventional Gaussian PSF and the double-helix PSF (N = 6) are respectively illustrated in Fig. 4(b) and Fig. 4(c).
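The depth-decoding step at the heart of this procedure can be sketched compactly. The snippet below computes the cepstrum of one Hann-windowed sub-image, applies the half-maximum threshold, restricts attention to the ring [0.8Δy, 1.2Δy], and returns the depth from the densest polar angle; the angular binning (180 bins) and the use of cepstrum values as histogram weights are implementation assumptions.

```python
import numpy as np

def estimate_depth(window, dy_px, k_theta, n_angle_bins=180):
    """Cepstrum-based depth estimate for one windowed sub-image."""
    spec = np.fft.fft2(window)
    ceps = np.abs(np.fft.fftshift(np.fft.ifft2(np.log(np.abs(spec) ** 2 + 1e-12))))
    ceps[ceps < 0.5 * ceps.max()] = 0.0            # half-maximum noise threshold

    ny, nx = ceps.shape
    y, x = np.mgrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    r = np.hypot(x, y)
    ring = (r >= 0.8 * dy_px) & (r <= 1.2 * dy_px)

    ang = np.arctan2(y, x) % np.pi                 # the two peaks are centrosymmetric
    bins = np.linspace(0.0, np.pi, n_angle_bins + 1)
    density, _ = np.histogram(ang[ring], bins=bins, weights=ceps[ring])
    theta_max = 0.5 * (bins[:-1] + bins[1:])[np.argmax(density)]
    return theta_max / k_theta                     # depth from z = theta / k_theta
```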
In contrast with the conventional Gaussian-PSF image of Fig. 4(b), the defocused components of the DH-PSF-blurred image in Fig. 4(c) still remain bright. The extended depth-of-field image and its depth map are both recovered (see Fig. 4(d) and 4(e)) by applying the proposed algorithm to the simulated image blurred with the double-helix PSF. As expected, the content of the recovered image is consistent with that of the original object in Fig. 4(a), and the recovered depth map also agrees with the pre-set depths. Maintaining an ideal shape of the engineered PSF in the image formation procedure is of great importance. Unfortunately, the defects of the system often produce aberrations, which distort the PSF, especially when high numerical aperture objectives are used. To ascertain the influence of the different types of aberrations [30,31] on the double-helix PSF, we calculated the intensity distribution of the double-helix PSF with N = 6 after imposing different types of aberrations, as shown in Fig. 5. The results show that spherical aberration causes a slight focal plane shift and generates a non-uniform rotation rate above and below the focal plane, while coma unbalances the energy of the main lobes, and astigmatism mainly brings about a varying inter-lobe distance at different depths. All these deformations affect the accuracy of depth estimation and, moreover, deteriorate the deconvolution results. Influence of the aberrations Generally, provided that the objective is aberration-free, coma and astigmatism of the system can be eliminated by careful alignment of the optical path, and spherical aberration can be minimized by matching the refractive index of the immersion medium (oil or water) and coverslip to that of the specimen. Nevertheless, due to defects in the SLM's production process, there often exists some surface curvature on the reflective panel, which inevitably causes a deviation of the wavefront and leads to unwanted distortions of the PSF [32]. Therefore, whatever its source, aberration correction is an essential procedure, which is discussed below. Experimental results and discussion Before examining a specimen, the engineered PSF of the microscope system was measured. A single commercial fluorescent nanoparticle (F8803, Thermo Fisher Scientific Inc., USA) with a diameter of 100 nm is used as a probe. This particle is labeled with multiple, randomly oriented fluorescent dye molecules, which gives rise to its insensitivity to the polarization state of the excitation beam. To minimize the spherical aberration caused by refractive index mismatch, the particle is first dried on the coverslip and then submerged in immersion oil. To correct the aberrations of the system (mainly introduced by the SLM), a simple but efficient method is employed: an additional phase mask opposite to the phase error caused by the surface curvature of the SLM is loaded. The method applies a single image of a focused doughnut PSF created by the SLM to calculate the corresponding distortion phase hologram using the Gerchberg-Saxton (GS) algorithm [32]. Here, the doughnut PSF generated with a helical charge l = 2 is adopted for its high sensitivity to aberrations and the sufficient sampling rate of the hologram at the Fourier plane (Fig. 6(a)). The results show that the original doughnut PSF is distorted into two lobes (Fig. 6(b)), while the corrected one more closely resembles the simulation results (Fig. 6(c)).
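A bare-bones version of the Gerchberg-Saxton loop used for this correction is sketched below, under simplified scalar Fourier-optics assumptions (a single FFT between pupil and focal plane, known pupil amplitude). Subtracting the ideal doughnut CGH phase from the retrieved pupil phase would leave the distortion term to be compensated on the SLM.

```python
import numpy as np

def gerchberg_saxton(target_amp, pupil_amp, n_iter=100, seed=0):
    """Estimate a pupil phase whose focal intensity matches |target_amp|**2.

    target_amp: measured focal-plane amplitude (sqrt of the doughnut image)
    pupil_amp:  known pupil amplitude, e.g. a binary circular aperture
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2 * np.pi, pupil_amp.shape)
    for _ in range(n_iter):
        field = pupil_amp * np.exp(1j * phase)             # pupil plane
        focal = np.fft.fftshift(np.fft.fft2(field))        # propagate to focus
        focal = target_amp * np.exp(1j * np.angle(focal))  # impose measured amplitude
        back = np.fft.ifft2(np.fft.ifftshift(focal))       # propagate back
        phase = np.angle(back)                             # keep phase, reimpose pupil amplitude
    return phase
```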
Analysis of the calculated hologram opposite to the phase error caused by the SLM shows that, as expected, the flatness deviation of the SLM panel contributes mostly to astigmatism (inset in Fig. 6(c)). To demonstrate the effects of aberration correction, the double-helix PSF with N = 8 was measured at different depths. The unstable inter-lobe distance in the uncorrected PSF reveals the astigmatism in the system (Fig. 6(e)), which would significantly hinder the image recovery process. With our correction, the symmetry and the orderliness of the PSF are significantly improved (Fig. 6(f)). The depth-of-field provided by the double-helix PSF with N = 8 is measured to be 3.6μm. Similarly, the depth-of-fields provided by the double-helix PSFs with N = 4, N = 6 and N = 10 are 2.1μm, 2.9μm and 4.6μm, respectively, verifying a 4-10 times extension of the depth-of-field compared with a standard microscope (typically 500nm). As illustrated in Fig. 6(f), the diffraction zero-order remains near the focal plane (Δz = 0), which is mostly caused by the discrete structure of the SLM. For our system, we chose not to isolate the zero-order from the first-order with the frequently used blazed grating phase. One reason is that the relatively wide spectrum of the fluorescence would give rise to serious chromatic aberration in the collected image if a blazed grating phase were appended. In addition, even if we ignore the chromatic aberration, blocking the zero order also sacrifices the field-of-view of the system. Fortunately, the zero-order contribution to the PSF is minor overall, as shown in Fig. 6(f), and has little influence on the image reconstruction. To demonstrate the extended depth-of-field provided by the double-helix PSF, we first observed a tilted slide densely covered with fluorescent beads of a diameter of 100nm (F8803, Thermo Fisher Scientific Inc., USA). In contrast to the image acquired with the conventional Gaussian PSF (Fig. 7(a)), the image recovered with the double-helix PSF (Fig. 7(b)) provides a noticeably brighter and clearer image of the defocused beads, revealing the extended depth-of-field of the system. In addition, to test the effect of the deconvolution algorithm on the result, we recovered the image of Fig. 7(b) with both the Wiener-type filter (Fig. 7(c)) and the Richardson-Lucy algorithm (Fig. 7(d)). The only difference between the two recovery processes is the type of deconvolution algorithm. The image recovered with the Richardson-Lucy algorithm clearly contains far fewer artifacts in our system than that recovered with the Wiener-type filter. Because the Richardson-Lucy algorithm requires an iterative process, unlike the Wiener-type filter, it sacrifices some computing speed in the recovery process. However, if the number of iterations is set to 8-12, the time cost of image recovery only increases by a factor of 1-2 once the computing expense of the other processes is considered, which is acceptable for most cases. Next, a section of bovine pulmonary artery endothelial (BPAE) cells (F36924, Thermo Fisher Scientific Inc., USA) was tested, tilted to provide an appreciable range of defocus. The F-actin of the cells, labeled with Alexa Fluor 488 phalloidin, is efficiently excited by the laser to emit green fluorescence and imaged by the camera. For the conventional Gaussian PSF, only the in-focus part of the specimen can be imaged clearly, while the out-of-focus regions appear dark and blurry (Fig. 8(a) and 8(b)).
In contrast, the image obtained with the double-helix PSF (N = 6) is homogeneously bright over the entire FOV (Fig. 8(c)). Furthermore, the recovered object in Fig. 8(d) offers a sharp and bright view of the entire tilted specimen. When the Gaussian- and double-helix-PSF images are compared in the magnified views, the results are even more striking. For the sub-regions (ROI1 and ROI2 in Fig. 8(a)), the specimen is only detectable in the in-focus Gaussian PSF images (Fig. 8(a1) and 8(b2)), whereas in the double-helix images, the specimens in both sub-regions are visible and the F-actin filaments are clearly discernible (Fig. 8(d1) and 8(d2), respectively). Collectively, these results show that the recovered object contains more useful information than any single image with a Gaussian PSF and, in addition, that the depth-of-field of the imaging system is extended properly. Fig. 9. Estimated depth maps corresponding to the FOV of Fig. 8. (a) and (b) are estimated depth maps of an identical FOV, where map (b) is obtained after moving the specimen 300nm downward axially relative to that in (a). The patches marked with the darkest blue in the depth map represent regions with no in-focus specimen. (c) presents the statistical histogram of the difference of the above two depth maps. A Gaussian fit of the statistical data is calculated, implying a population standard deviation of 33nm. The precision of a single measurement is thus calculated to be 23.4nm. As described in Fig. 3, achieving a high-quality recovered image relies mostly on precise estimation of the PSF. In our system, the PSF is calculated with the estimated depth of the sub-window. Thus, the precision of the depth estimation is a decisive factor for the image quality. The corresponding depth map of the BPAE specimen is shown in Fig. 9. As expected, the whole slide is tilted with a specific slope. To calibrate the reliability of the depth estimation, the depth map of the identical FOV was obtained again after axially moving the specimen downward by 300nm (see Fig. 9(b)). The difference between the two measured maps is plotted as a histogram in Fig. 9(c), which presents a normal distribution. A Gaussian fit of the statistical data gives a population standard deviation of 33nm. Considering that the variances of the two maps add under the subtraction operation, the precision, namely the standard deviation of a single measurement, is estimated to be (33/√2)nm ≈ 23.4nm. According to the linear equation z = θ_max/k_θ(z), the precision of the estimated depth primarily depends on the linear factor k_θ(z) and the recognition precision of the polar angle. The linear factor k_θ(z) is uniquely dominated by the total number of Fresnel zones. Supposing the recognition precision of the polar angle is independent of the total number of Fresnel zones, increasing that number will on one hand increase the range of the extended depth-of-field, yet on the other hand decrease the precision of the estimated depth. Systematically, the recognition precision of the polar angle is sensitive to a number of factors, including the amount of the specimen's features, the SNR of the captured image and the size of the sub-window. The influence of the former two is readily comprehensible, for both of them directly affect the SNR of the cepstrum. The more features the specimen contains, the more prominent the two centrosymmetric peaks appear in the cepstrum. The system is therefore better suited to specimens with rich, sharp structures.
Meanwhile, the SNR of the captured image is affected by the illumination power and the performance of the detector. Thus, the devices used in the system should be carefully selected. For the third factor, things are more complicated. On one hand, compressing the dimension of the sub-window improves the lateral resolution of the depth estimation. On the other hand, an excessively small sub-window will be short of object features and result in low fidelity of the cepstrum algorithm. To balance the lateral resolution and the estimation fidelity, we choose the size of the sub-window to be 5-15 times the PSF distribution area. In our experiment, the PSF spreads over nearly 15 pixels, so choosing a sub-window size of 150 pixels is reasonable. Meanwhile, the step size of the sliding sub-window is set to 50 pixels. Even when all the relevant parameters are carefully chosen, a small number of artifacts still occur in the recovered results, as shown in Fig. 8. This is mainly attributed to the initial assumption that the object in a single sub-window lies at a uniform depth. In the future, we will adjust the recovery framework by optimizing the segmentation strategy and introducing feedback mechanisms between the depth estimation and the deconvolution to solve this problem. As demonstrated above, this imaging method is capable of obtaining the three-dimensional information of the specimen in a single snapshot, and the depth range can be adjusted by simply changing the loaded CGH on the SLM. With the modified algorithm, an extended DOF image and its corresponding depth map can be obtained from a single snapshot, which implies that a time resolution equal to the camera's maximal frame rate can be achieved while preserving the three-dimensional information of the specimen. Summary We have built a wide-field fluorescence microscope capable of obtaining three-dimensional information of the specimen in a single snapshot. The depth range can be adjusted by simply changing the loaded CGH on the SLM. To provide a guideline for aberration correction, the impact of the aberrations on the double-helix PSF at high numerical aperture was analyzed. By employing a modified cepstrum-based reconstruction scheme, we have achieved extended depth-of-field images and estimated depth maps, demonstrated by both simulation and imaging of a tilted section of BPAE cells. This three-dimensional microscope, with a time resolution on the order of the camera frame rate, is suitable for studying fast-developing processes of thin and sparsely distributed micron-scale cells over an extended depth-of-field. Funding This research was supported by the Natural Science Foundation of China (NSFC) (61522511, 81427802, 11474352, 61377008), the Natural Science Basic Research Plan in Shaanxi Province of China (Program No: 2016JZ020), and the National Institutes of Health Grant GM100156 to P. R. B. Disclosures The authors declare that there are no conflicts of interest related to this article.
2018-04-03T03:25:20.723Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "6867cb8355afd1c0363f64c06a42b33d6273a311", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/boe.8.005493", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "a1ec6eca2a310c34a736ae8e5094066f69f3f331", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
237335753
pes2o/s2orc
v3-fos-license
A comparison of laparoscopic and open surgery for early stage endometrial cancer with analysis of prognostic factors: a propensity score matching study Objective: We aimed to compare the short- and long-term outcomes of a laparoscopic approach with those of laparotomy for early stage endometrial cancer and attempted to identify factors predicting survival. Methods: Between 2007 and 2014, patients with clinical early stage endometrial cancer and a uterine size less than 10 cm receiving surgical treatment were reviewed. Kaplan-Meier and multivariate Cox regression models were used for survival analysis. Short- and long-term outcomes were compared between the two groups before and after 1:1 propensity score matching (PSM). Results: In total, 255 patients were enrolled: 177 received laparotomy and 78 received laparoscopic surgery. The patients receiving laparoscopic surgery had significantly less blood loss and shorter hospital stay, but longer operative time. Before PSM, the 5-year disease-free survival (DFS) and overall survival (OS) rates were in favor of the laparoscopic group (94.4 vs. 84.1%, p = 0.022; 97 vs. 90.5%, p = 0.060). Cox regression analysis showed that high-grade lesion (HR 11.35, 95% CI 4.06-31.07), non-endometrioid histology (HR 3.99, 95% CI 1.52-10.44), and age >60 (HR 3.35, 95% CI 1.60-7.00) were independent factors predicting recurrence, while high-grade lesion (HR 10.38, 95% CI 2.44-44.15) and CA125 >35 (HR 3.02, 95% CI 1.07-8.55) were independent factors predicting death. After PSM, two comparable groups of 59 patients each were obtained. There were no significant differences in 5-year DFS and OS between the two groups. Conclusion: Our results showed that, compared with laparotomy, laparoscopic surgery improved short-term outcomes, with similar survival results. Factors predicting survival were high-grade tumor, non-endometrioid histology, age >60, and CA125 >35. Introduction From 1991 to 2010, the total number of uterine corpus cancer cases increased 5.7-fold in Taiwan. In addition, the annual age-specific rate nearly doubled during 2001 to 2010 compared with 1991 to 2000 [1]. In Taiwan, there has been a noticeable increase in the number of women adopting a Western-style diet and not having children in recent years, and thus changes in reproductive behavior and an increased rate of obesity may be partially responsible for the increase in endometrial cancer. The standard treatment for endometrial cancer is staging surgery with total hysterectomy, bilateral salpingo-oophorectomy and pelvic/para-aortic lymph node dissection, followed by tailored adjuvant therapy. Surgery is traditionally performed via laparotomy. Since Childers and Surwit first described laparoscopic surgical staging for early endometrial cancer in 1992, many subsequent studies have shown that this approach is an effective alternative to open surgery, with a much faster recovery and fewer complications [2][3][4][5]. However, these studies lack well-designed randomization, and most of them are retrospective in nature. In 2009, Walker et al. [6] published the initial results of a large randomized controlled trial (LAP2) by the Gynecologic Oncology Group (GOG). With a longer follow-up period, they concluded that laparoscopic staging surgery is an acceptable alternative for patients with presumed early-stage endometrial cancer, owing to better short-term benefits including shorter hospital stay, fewer moderate-to-severe postoperative adverse events, and improved body image.
They also demonstrated that this approach improved the patients' quality of life and, more importantly, did not compromise overall survival (OS) compared with those treated with laparotomy [7]. In a subsequent randomized controlled trial (Laparoscopic Approach to Cancer of the Endometrium, LACE), the results also demonstrated equivalent disease-free survival at 4.5 years and no difference in OS [8]. In Taiwan, only one previous study has described a head-to-head comparison between laparoscopic and laparotomic surgery for endometrial cancer, with a limited number of cases. We began treating some patients with early endometrial cancer laparoscopically at our department in 2007. Therefore, in this study, we aimed to evaluate whether this surgical approach could be the preferred procedure for these patients compared with conventional open surgery at our institute. We used propensity score matching (PSM) analysis to eliminate the imbalance between groups and reduce the effects of confounding, approximating the effect of randomization in this observational study. Furthermore, we attempted to analyze factors predicting prognosis. Patients We conducted this retrospective review to identify all cases of uterine cancer between January 2007 and December 2014 at our hospital. Four hundred and thirty-two patients were identified during this period. All of the patients received imaging studies with computed tomography or magnetic resonance imaging for preoperative evaluation of disease burden and extent once the diagnosis had been established. In order to keep the uterus intact and prevent cancer cell spillage during its removal through the vagina, the selection criteria for laparoscopic surgery in our clinical practice were clinical stage I disease with a uterine size less than 10 cm in maximal diameter based on imaging findings. In order to match the patient background properly in the laparotomic group, we excluded patients who received open surgery with a clinical stage of II or higher, and/or a uterine size of more than 10 cm. Patients who did not receive surgery as the initial treatment and those with sarcoma histology were also excluded. Finally, 255 patients fulfilled the criteria and were enrolled in this study. Method of operation Surgical procedures including peritoneal washing cytology, total hysterectomy, bilateral salpingo-oophorectomy, and pelvic/para-aortic lymphadenectomy were performed via a laparotomic or laparoscopic route. The choice of surgical route was determined according to the patients' or physicians' preference. However, laparoscopic surgery was performed by only two well-trained laparoscopic oncologists (HL and YCO). Postoperative adjuvant therapy was arranged according to clinical guidelines based on surgical pathological findings. This study was approved by the Institutional Review Board of Chang Gung Memorial Hospital. Basic information of the patient Age, gravidity, parity, body mass index (BMI), levels of the pretreatment tumor marker cancer antigen-125 (CA125), and co-morbid medical conditions were recorded for each patient. Perioperative outcome The intraoperative complications included vascular injuries, intestinal injuries, bladder or urologic injuries, and conversion to open laparotomy. Peri- and post-operative data collected included operative time (defined as Veress needle insertion/skin incision to skin closure), estimated blood loss, pre- and post-operative hemoglobin values, need for transfusion, length of hospital stay, and re-operation or re-admission.
Pathologic data collected included the total number of lymph nodes retrieved, the FIGO stage of the tumor, and the histology and grade of the tumor. Disease-free survival (DFS) and OS were estimated as the interval from the date of diagnosis to the first evidence of recurrence or death, respectively. Recurrent disease was defined by proof from a biopsy, imaging findings, and/or persistent elevation of tumor markers. Statistical analysis Comparisons of median and mean values were performed using the two-sample t-test. Frequency distributions between categorical variables were compared using the chi-square test. A Cox regression model was used for multivariate analyses with DFS and OS as end points. DFS and OS curves were estimated using the Kaplan-Meier method and compared using the log-rank test. Because some baseline characteristics were statistically different between patients who received laparoscopic and laparotomic surgery, one-to-one PSM was performed with nearest-available-neighbor matching and a 0.2 caliper to eliminate the imbalance. Propensity scores were calculated using a multivariable logistic regression model to estimate the conditional probability of a patient receiving a given surgical approach. The degree of covariate imbalance in the unmatched and matched samples was measured using the standardized difference. A standardized mean difference (SMD) of less than 0.1 indicates very small differences; values between 0.1 and 0.3 indicate small differences; values between 0.3 and 0.5 indicate moderate differences; values above 0.5 indicate considerable differences. Data management and analysis were performed using MedCalc and SPSS software for Windows version 22 (SPSS Inc., Chicago, IL, USA). A p value less than 0.05 was taken to indicate statistical significance, and a p value between 0.05 and 0.1 was taken to indicate a statistical trend. Basic characteristics of the patients A total of 255 patients were finally enrolled in this retrospective study. The basic characteristics of the patients are listed in Table 1. The median age at diagnosis was 57 years old (interquartile range (IQR) 50-61 years), and the median follow-up time was 56.0 months (IQR 42-71 months). Among the 255 patients analyzed, 30 had recurrent disease and 17 died. The 5-year DFS and OS rates were 87.3% and 92.5%, respectively (not shown). Eighty-three (32.6%) patients received post-operative adjuvant therapy, including 45 (17.7%) with radiotherapy, 28 (11.0%) with chemotherapy, and 10 (3.9%) with radiation and chemotherapy. Of the 255 patients, 177 (69.4%) received open laparotomy and 78 (30.6%) received laparoscopy. None of the patients selected for laparoscopic surgery were converted to laparotomy. There were no significant differences between the two groups in terms of median age at diagnosis, number of gravidity and parity, percentage of menopause, hypertension and diabetes mellitus, and FIGO stage distribution. However, moderate to considerable differences (SMD >0.3) were observed between the two groups in the percentage of patients older than 60 years, mean BMI and CA125 level, histologic type and grade, and adjuvant therapy (Table 2). Comparison of perioperative outcomes The peri- and post-operative events of both groups are shown in Table 3.
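As an aside on the matching method described above, the sketch below implements one-to-one nearest-neighbor propensity score matching without replacement in Python. The study itself used SPSS and MedCalc; the use of scikit-learn's logistic regression, and applying the 0.2 caliper to the standard deviation of the logit of the propensity score, are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def psm_match(X, treated, caliper=0.2):
    """1:1 nearest-neighbor PSM without replacement.

    X:       covariate matrix (here: age, BMI, CA125, FIGO stage,
             pathologic type, grade, adjuvant therapy)
    treated: 1 for laparoscopy, 0 for laparotomy
    """
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    logit = np.log(ps / (1 - ps))
    width = caliper * logit.std()          # caliper on the logit scale (assumption)

    controls = list(np.where(treated == 0)[0])
    pairs = []
    for i in np.where(treated == 1)[0]:
        if not controls:
            break
        d = np.abs(logit[controls] - logit[i])
        j = int(np.argmin(d))
        if d[j] <= width:                  # accept only matches within the caliper
            pairs.append((i, controls.pop(j)))
    return pairs
```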
The patients who received laparoscopic surgery had significantly less blood loss (150.0 versus 180.0 cc, p = 0.015) and a shorter hospital stay (7.0 versus 8.0 days, p < 0.001), but a longer operative time (278.5 versus 220.0 minutes, p < 0.001), compared with the laparotomic group. The median lymph node yield was similar in both groups. Pelvic lymphadenectomy was performed in all of the patients, but para-aortic lymphadenectomy was omitted in 71 (27.8%) patients (54 (30.5%) in the laparotomy group and 17 (21.8%) in the laparoscopic group). The complication rates were low in both groups (9.1% and 5.1% in the laparotomy and laparoscopic group, respectively) and only two major complications occurred, including one patient who died on postoperative day 6 due to acute myocardial infarction, and one patient with a great vessel injury during surgery which resulted in massive blood loss. Both of these patients received open surgery. Detailed descriptions of the complications are shown in Table 3. Prognostic factors analysis and survival outcomes in all patients A significantly better 5-year DFS (94.4 vs. 84.1%, p = 0.022) and a trend towards better 5-year OS (97 vs. 90.5%, p = 0.060) were observed for the patients receiving laparoscopic surgery (Fig. 1A,B). Although more patients in the laparotomic group had recurrent disease, the recurrence pattern was similar for both groups. The first site of recurrence in our study was mostly local, including the vagina and pelvis, and the distant lung was the second most common site (Table 3). To clarify the impact of the type of surgery on DFS and OS, we used multivariate Cox regression analysis to identify independent factors that were probably associated with DFS and OS. After adjusting for multiple prognostic covariates, grade III tumor (HR 11.35, 95% CI 4.06-31.70, p < 0.001), non-endometrioid histology (HR 3.99, 95% CI 1.52-10.44, p = 0.005), and age older than 60 years (HR 3.35, 95% CI 1.60-7.00, p = 0.001) were the independent factors estimating the relative risk of recurrence. Grade III tumor (HR 10.38, 95% CI 2.44-44.15, p = 0.002) and CA125 > 35 U/mL (HR 3.02, 95% CI 1.07-8.55, p = 0.037) were the independent factors estimating the relative risk of death (Tables 4 and 5). Survival outcomes analysis after propensity score matching Patients treated with the different surgical approaches were matched one-to-one using PSM to eliminate confounding factors. Seven covariates were entered into the propensity model: age, BMI, CA125, FIGO stage, pathologic type and grade, and adjuvant therapy. In total, 59 pairs of patients were matched between the two groups. There were only small differences (SMD <0.2) in the clinicopathological variables between the two matched groups, indicating a good matching outcome of the propensity model (Table 2). Comparisons of the DFS and OS curves between the two groups after PSM are shown in Fig. 2A,B. There were no significant differences in 5-year DFS (95 vs. 92.5%) and OS (100 vs. 97.5%) between the two groups. Discussion In the present study, we demonstrated the safety and feasibility of laparoscopic surgery for the management of presumed early-stage endometrial cancer. In addition, this surgical approach provided better peri- and post-operative outcomes and, most importantly, did not compromise survival outcomes. The major factors predicting poor survival for clinical early stage disease were high-grade tumor, non-endometrioid histology, age >60, and CA125 >35 U/mL, but not the surgical type.
Although our results are similar to previous reports, some issues need to be addressed. The question of whether to perform pelvic lymphadenectomy routinely has been debated. Similar to the LAP2 trial, we performed pelvic lymphadenectomy in all patients during the study period. However, in the LACE trial, only half of the patients had lymphadenectomy. When we looked at the pathological findings, 23% of our patients and 12% of LAP2 patients had deep myometrial invasion, while none did in the LACE trial [6,8]. Furthermore, as many as 70% of patients in the LACE trial had disease limited to the endometrium [8], which is why the surgeons in the LACE trial aborted lymphadenectomy in some patients. Caution is still warranted because it is difficult to preoperatively identify these low-risk patients based on gross observation and frozen section results. Studies have demonstrated that grade and depth of myometrial invasion can change unpredictably on final pathology [9]. Regarding para-aortic lymph node (PAN) retrieval, we performed this procedure in about 70% of patients, while over 90% of patients in the LAP2 trial had PAN lymphadenectomy. Previous studies have reported that para-aortic lymphadenectomy did not improve clinical outcomes, because the presence of PAN metastasis indicates systemic disease [10][11][12]. In a study from Northern Taiwan reported by Chu et al. [5], PAN lymphadenectomy was performed in only 2.8% and 13.6% of patients in the laparoscopic and laparotomic groups, respectively. In another single-arm study from Taiwan, reported by Lee et al. [13], 14.3% of patients received PAN lymphadenectomy. However, the 5-year DFS and OS rates in both studies were not inferior to the LAP2 results. The potential risks of routine para-aortic lymphadenectomy include a considerably longer operative time, greater blood loss, and a higher rate of post-operative ileus [14,15]. Therefore, the NCCN (National Comprehensive Cancer Network) panel has changed its recommendations on PAN lymphadenectomy since 2014. The panel recommends such a procedure for selective high-risk situations, including patients with positive pelvic nodes [16] or high-risk histologic features [17]. Another interesting issue is the survival outcomes between the laparoscopic and laparotomic groups. Most studies have shown equivalent DFS and OS rates between patients undergoing the different surgical approaches. Although we initially found better DFS and OS rates in the laparoscopic group, the survival benefits disappeared after a PSM analysis balanced the baseline clinicopathological characteristics such as age, CA125, and histologic type and grade. Unfavorable histologies, such as serous or clear cell and high-grade endometrioid carcinoma, tend to enhance extra-uterine spread in the early stage of the disease [18,19]. The better survival shown in the study by Chu et al. compared with ours could be explained by their exclusion of non-endometrioid histology. In addition, we previously found that a pretreatment CA125 level of more than 40 U/mL was a risk factor for lymph node metastasis [20]. For these reasons, we prefer not to perform laparoscopic surgery for these high-risk patients at our hospital, and this may have led to the higher mean level of CA125, the higher proportion of unfavorable histology and grade, and the higher rate of postoperative adjuvant therapy in the laparotomy group. Our multivariate analysis confirmed that the type of surgery did not have any independent impact on DFS and OS after adjusting for histology, grade, CA125 level, FIGO stage and age.
It is generally believed that operating on obese patients can be challenging, especially when a new surgical approach is introduced. Furthermore, the results of the LAP2 trial showed that the risk of conversion to laparotomy increased with increasing BMI. Based on these findings, we strictly selected our candidates for the laparoscopic approach in our study, resulting in a low mean BMI of 24.9 kg/m², and fortunately none required conversion to laparotomy. Even though the mean BMI was higher in our laparotomic group, the value was still much lower than those of the LAP2 and LACE trials (28-33 kg/m²). The exact reason is not clear but may be due to the difference in obesity prevalence between Asian and White ethnicities [21]. In the LAP2 and LACE trials, almost 90% of the enrolled patients were White. A Japanese study investigating different surgical approaches for early endometrial cancer showed a median BMI of 23-24 kg/m² in 120 cases, which was much closer to our patients [22]. Last but not least, recurrence in our study occurred mostly at local sites, including the vagina and pelvis, with distant lung metastasis in second place for both the laparoscopic and laparotomic groups. These results were similar to the LAP2 study, but different from those of Chu et al., which showed more lung metastases in the laparoscopic group. According to the explanation of Chu et al., this difference might be related to inconsistent criteria for receiving postoperative adjuvant brachytherapy between the two types of surgery. Because of the fear of vaginal stump recurrence in the laparoscopic group, more patients received brachytherapy and none received chemotherapy, although the difference was not significant. The real cause of the different recurrence pattern in the Chu et al. study may not be clear, but one proposed mechanism is that increased intra-abdominal pressure during laparoscopic surgery may push tumor cells into the lymphovascular space and subsequently cause distant spread of the tumor [23]. In our study, we applied the same criteria for postoperative adjuvant brachytherapy to both groups. However, more multiple recurrence sites were noted in the laparotomic group, which could be explained by more unfavorable histology or high-grade lesions. Although the patients enrolled were not assigned randomly to the different surgical approaches, which was a limitation of our study, we performed PSM and successfully eliminated certain confounding factors. When a randomized trial is limited by objective conditions, PSM analysis can be applied to reduce selection bias. Another limitation of our study is its single-institution design; however, having only 2 surgeons perform the laparoscopic surgery carried the strengths of uniform operative procedures and sustained experience during the period of patient accrual. Conclusions This study demonstrated the feasibility of laparoscopic surgery in patients with clinical early stage endometrial cancer, with the benefits of a shorter hospital stay and less blood loss. The survival outcomes were comparable to those of a laparotomic approach. Factors associated with survival were high-grade tumor, non-endometrioid histology, age >60, and CA125 >35 U/mL. In selected patients and in the hands of experienced surgeons, laparoscopic surgery can be performed safely with respect to both short-term and long-term outcomes.
Author contributions HL, FTK, and YCO designed the study; HCF, CHW, CCC, and CCT performed the analysis and interpretation; FTK, CHW, YJC, CCT, and CCC performed the literature search; HL and YJC wrote the manuscript; HL, HCF, and YCO provided critical review. All authors read and approved the final manuscript. Ethics approval and consent to participate All clinical investigations were conducted according to the principles of the Declaration of Helsinki. The present study was approved by the Institutional Review Board of Chang Gung Memorial Hospital (approval number: 105-4364C), and the requirement for written informed consent was waived.
2021-08-27T16:44:31.665Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "5de48f056a9cd39795abef3aeef9cb5c3687b2f0", "oa_license": null, "oa_url": "https://ejgo.imrpress.com/EN/article/downloadArticleFile.do?attachType=PDF&id=6239", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e98414c2df3a13ba45b27fe5c3a37c882e3a43a1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
89504858
pes2o/s2orc
v3-fos-license
Antioxidant enzymes responses of polychaete Perinereis aibuhitensis following chronic exposure to 17β-estradiol Abstract The occurrence of 17β-estradiol (E2) in the aquatic environment can lead to damage to the reproductive system, along with other adverse effects including oxidative stress, in aquatic organisms. In the present study, Perinereis aibuhitensis were treated with E2 at 0.1, 1, 10, 100 and 1000 μg/L for 205 d, after which the activities of superoxide dismutase (SOD), catalase (CAT) and glutathione-S-transferase (GST), and the concentrations of glutathione, were studied as antioxidant biomarker responses. Weight gain and specific growth rate of P. aibuhitensis were not significantly affected by E2 treatment. Although no significant differences in mortality were observed, the group receiving the highest dose of E2 (1000 μg/L) experienced the highest rate of mortality. Treatment with E2 enhanced the levels of total glutathione (T-GSH), but levels of reduced glutathione (GSH) were significantly decreased by 17.02, 20.55, 23.45, 31.91 and 56.08%, with a concomitant increase in the levels of glutathione disulphide (GSSG) by 148.58, 213.05, 232.62, 294.63 and 306.45%, at 0.1, 1, 10, 100 and 1000 μg/L E2, respectively. The redox ratio of GSH/GSSG was significantly decreased (p < 0.01). The results of this study suggest that long-term exposure of P. aibuhitensis to E2 may inhibit antioxidant enzyme activities, thereby reducing the capacity of its detoxification system. Introduction 17β-estradiol (E2) is the major oestrogen secreted by humans, and is an emerging contaminant that can reach the aquatic environment via wastewater release, livestock waste and agricultural runoff. The occurrence of E2 in wastewater and surface water has been investigated in numerous studies (Servos et al. 2005; Coleman et al. 2010; Yan et al. 2012), thus presenting a potential hazard for aquatic species (Rotchell & Ostrander 2003; Caldwell et al. 2012). Reports of the effects of E2 on aquatic species have mainly focused on its endocrine-disrupting effects and the mechanisms of action (Osada et al. 2003; Maria et al. 2008; Ciocan et al. 2010). However, some studies have demonstrated that E2 can disrupt non-reproductive endocrine events, namely the stress responses in fish (Teles et al. 2005; Maria et al. 2008; Moura Costa et al. 2010). The immune system of molluscs also represents a significant target for the action of environmental estrogens, as E2 in the nM range (1-100 nM) induces lysosomal destabilisation in Mytilus (Canesi et al. 2007). Exposure to E2 can also disturb the intra-cellular environment, increasing the production of reactive oxygen species (ROS) and thereby impairing the antioxidant defence system (Maria et al. 2008). The activities of antioxidant enzymes such as superoxide dismutase (SOD, which converts the superoxide radical O2•− to H2O2) and catalase (CAT, which converts H2O2 to water) are important biomarkers for investigating the cellular redox system. Glutathione is a major endogenous antioxidant, which plays a crucial role in protecting cells from exogenous and endogenous toxins, and exists in the reduced (GSH) and disulphide (GSSG) forms. GSH is the most important endogenous antioxidant for the detoxification and elimination of environmental toxins and free radicals such as ROS, which disrupt cellular functions by oxidizing lipids, proteins, and DNA.
Similar to SOD and CAT, glutathione-S-transferase (GST) protects cells against chemical insults: as a phase II detoxifying enzyme, it catalyzes the conjugation of reduced glutathione, via its cysteine thiol, with a variety of xenobiotics. The post-translational redox modifications of proteins probably depend on the magnitude of the oxidant stimulus and the duration of exposure. A high dose or chronic exposure may lead to irreversible post-translational modification and protein degradation. Such events could lead to damage to biomolecules, cell death and altered homeostasis. Cancer, diabetes, atherosclerosis, neurodegenerative disorders and the aging process have been associated with oxidative stress (OS) due to elevated ROS or to insufficient ROS detoxification in humans (Limón-Pacheco & Gonsebatt 2009). Polychaetes are usually the most abundant taxon in marine benthic communities, are an important source of food for many shore birds and oceanic fishes, and have also been frequently utilised as indicator species in environmental assessments (Dean 2008). The activities of GST, CAT, and SOD have been used as biomarkers in environmental field surveys employing polychaetes, including species such as Nereis diversicolor (Durou et al. 2007; Bouraoui et al. 2010), Perinereis nuntia and Neanthes succinea. Many polychaete species demonstrate a relatively high capacity to regulate organic contaminants and heavy metals, and are regarded as good bioindicators of metal and organic contamination in estuaries (Dean 2008). Despite the frequent use of Nereidid worms for environmental assessment and an understanding of their antioxidant defense systems, knowledge concerning the genotoxic and lipid peroxidative effects of E2 in these animals is scarce. In the present research, the clam worm P. aibuhitensis was selected as a model organism to evaluate the toxic effects of chronic exposure to E2 and to verify the dose-dependent physiological effects of this hormone using biomarkers that cover the cellular redox system. These biomarkers include CAT, SOD and GST enzyme activities, the concentrations of GSH and GSSG, and the redox ratio of GSH to GSSG. Experimental design Juvenile P. aibuhitensis (8.8 ± 1.2 mg), collected from a breeding colony in our laboratory, were divided into 21 groups of 15 individuals and placed into glass tanks (20 cm × 20 cm × 40 cm) with a 10 cm pebble bed, containing 1000 mL of artificial seawater enriched with different E2 concentrations. Five nominal concentrations of E2 were used in the experiment: 0.1, 1, 10, 100 and 1000 μg/L. A negative control and a solvent control (0.01% [v/v] acetone, the same as in the 1000 μg/L E2 exposure) were also included. There were three replicate tanks for each experimental group. Every evening, all seawater was drained from the tanks and the worms were fed fish powder (purchased from a feed factory). The tanks were supplied with 1000 mL of freshly prepared seawater containing E2 the next morning. Interstitial water quality parameters and ranges for the toxicity tests were as follows: water temperature 4-23 °C, pH 7.1 ± 0.2 and salinity 24 ppt. The experimental period was 205 d, starting in October and lasting until April of the following year. Weight gain ratio [(final body weight − initial body weight)/initial body weight] and specific growth rate [((ln(final body weight) − ln(initial body weight))/205 d) × 100] were calculated at the end of the experiment.
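The two growth endpoints just defined are simple to compute; a minimal sketch follows, where the final weight in the example is a hypothetical value for illustration.

```python
import math

def growth_metrics(w_initial_mg, w_final_mg, days=205):
    """Growth endpoints as defined in the text.

    Weight gain ratio:            (final - initial) / initial
    Specific growth rate (%/day): 100 * (ln(final) - ln(initial)) / days
    """
    wgr = (w_final_mg - w_initial_mg) / w_initial_mg
    sgr = 100.0 * (math.log(w_final_mg) - math.log(w_initial_mg)) / days
    return wgr, sgr

# example: a worm growing from the mean initial weight of 8.8 mg to an
# assumed 25 mg over the 205-d exposure
print(growth_metrics(8.8, 25.0))
```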
Biochemical assays
The concentrations of glutathione (μmol/L) and total protein, and the activities of GST (U/mg protein), CAT (U/mg protein) and SOD (U/mg protein), were determined using Diagnostic Reagent Kits purchased from Nanjing Jiancheng Bioengineering Institute (PR China). The level of GSH was calculated as T-GSH − 2 × GSSG, and the redox ratio of GSH/GSSG was calculated as (GSH/GSSG) × 100.

Statistical analysis
All data are presented as mean ± SE. The concentrations of glutathione and the activities of SOD, CAT and GST were evaluated by one-way analysis of variance followed by least significant difference multiple comparisons, using the statistical package SPSS 16.0 (SPSS Inc., Chicago, IL). Results were considered significantly different when p < 0.05.

Results and discussion
Effects of 17β-estradiol on growth, superoxide dismutase, catalase and glutathione-S-transferase activities
After 205 d, no clam worm mortality was observed in the 1 or 100 μg/L E2 treatment groups. The highest rate of mortality (seven worms, 13.33%) was observed in the 1000 μg/L E2 group. The weight gain and specific growth rate of P. aibuhitensis were not significantly affected by E2 (Table 1). Antioxidant enzyme activities (CAT, SOD and GST) of P. aibuhitensis were, however, significantly affected by E2 exposure. CAT activity was significantly lower in all E2-treated P. aibuhitensis than in the control (Figure 1(A)). The CAT activities of P. aibuhitensis treated with 100 μg/L and 1000 μg/L E2 were lower than that of the control by 56.50% and 67.05%, respectively (p < 0.05), and the 1000 μg/L E2 treatment group was also significantly lower than the other E2 treatment groups (p < 0.05). The activity of SOD was also significantly decreased at all concentrations of E2 compared with the control (p < 0.05), and the 100 μg/L and 1000 μg/L E2 treatment groups were significantly lower than the other E2 treatment groups (p < 0.05) (Figure 1(B)). A significant decrease was also observed in GST activity in all E2 treatment groups compared with the control (p < 0.05), and the 1000 μg/L E2 treatment group was lower than the other E2 treatment groups (p < 0.05) (Figure 1(C)).

ROS are continually produced as undesirable toxic by-products of various endogenous metabolic processes in aerobic organisms (Livingstone 2003). Some studies suggest that exposure to natural estrogens and xenoestrogens increases intracellular ROS levels, which induces damage to nucleic acids, proteins, carbohydrates and lipids, thereby altering the functions of these macromolecules in cells (Filby et al. 2007; Rempel-Hester et al. 2009; Koutsogiannaki et al. 2014). Treatment of Mytilus galloprovincialis hemocytes with 25 nM E2 for 30 min caused a significant increase in ROS production, which led to oxidative damage exemplified by a significant increase in DNA damage, protein carbonylation and lipid peroxidation (Koutsogiannaki et al. 2014). DNA damage was also increased in the gonads of male hornyhead turbot after a 48 h exposure to 15 mg/L E2 (Rempel-Hester et al. 2009). Similar DNA damage was also observed in the tissue of fish exposed to estrogenic wastewaters for 21 d (Filby et al. 2007). The ROS produced in biological systems are detoxified and purportedly held in check by antioxidant defences. The antioxidant enzymes, including CAT, SOD and GST, are widely found in aquatic organisms and are part of the defense system that prevents cellular damage by ROS.
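Returning to the glutathione bookkeeping defined under Biochemical assays above, a minimal Python sketch follows; the concentrations are placeholders, not measured values:

```python
# Glutathione bookkeeping as defined in the Biochemical assays section.
# Input concentrations are illustrative placeholders (umol/L).

def gsh_level(t_gsh: float, gssg: float) -> float:
    """Reduced glutathione: T-GSH - 2*GSSG (each GSSG molecule
    contains two glutathione equivalents)."""
    return t_gsh - 2.0 * gssg

def redox_ratio(t_gsh: float, gssg: float) -> float:
    """Redox ratio as reported here: (GSH / GSSG) * 100."""
    return gsh_level(t_gsh, gssg) / gssg * 100.0

for label, t_gsh, gssg in [("control", 10.0, 0.5),
                           ("high-dose E2", 11.0, 2.0)]:
    print(label, gsh_level(t_gsh, gssg), redox_ratio(t_gsh, gssg))
# Raising GSSG at near-constant T-GSH depletes GSH and collapses the
# ratio, which is the qualitative pattern reported in this study.
```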
A time- and dose-dependent inhibition of GST activity has been reported in fish (Dicentrarchus labrax) exposed to E2 (Vaccaro et al. 2005). Hepatic activities of GST and CAT were also significantly decreased in sea bream (Sparus aurata) after E2 administration (Carrera et al. 2007). The activities of CAT and SOD were likewise slightly decreased in EE2-injected common carp (Cyprinus carpio) (Solé et al. 2000). In the present study, the weight gain of P. aibuhitensis was not significantly affected by exposure to E2. The rate of mortality was increased in the 1000 μg/L E2 treatment group, although this increase was not significantly different from the control. The decline in the activities of P. aibuhitensis antioxidant enzymes, including CAT, SOD and GST, suggests an impairment of the antioxidant defence system following exposure to E2. Under these conditions, the production of ROS may exceed the capacity of cellular antioxidant defences to remove these toxic species, resulting in oxidative stress.

Effects of 17β-estradiol on total glutathione and glutathione disulphide levels
Treatment with E2 enhanced the levels of total glutathione (T-GSH) in comparison with the control. However, GSH levels were significantly decreased by 17.02, 20.55, 23.45, 31.91 and 56.08% (Figure 2(A)), with a concomitant increase in the levels of GSSG by 1.49, 2.13, 2.32, 2.95 and 3.06 times (Figure 2(B)), at 0.1, 1, 10, 100 and 1000 μg/L E2, respectively, as compared with the control group. Treatment with 100 and 1000 μg/L E2 also significantly increased the levels of GSSG compared with the other E2 treatment groups. The redox ratio of GSH/GSSG was significantly decreased (p < 0.05) (Figure 2(C)).

The effects of estrogens on the immune systems of nereidid worms are still unclear. Some studies suggest that E2 can affect immune function in fish, resulting in changes that include leukocyte production and activity (Yamaguchi et al. 2001; Moura Costa et al. 2010). As a major intracellular antioxidant, GSH plays a crucial role in the maintenance and regulation of the thiol-redox status of the cell, preventing the oxidation of protein thiol groups either directly, by reacting with reactive species, or indirectly, through glutathione transferases. Low intracellular GSH levels would decrease the cellular antioxidant capacity. Several reports have suggested that a decrease in GSH levels is also associated with immune system dysfunction and inflammation in humans (Ballatori et al. 2009; Ghezzi 2011). In the present study, the depletion of GSH and the elevation of GSSG could result in damage to the immune system, which would decrease resistance to disease. The death of P. aibuhitensis might also be the result of immune system dysfunction following long-term exposure to E2. Nonylphenol (NP) exposure resulted in a depletion of GSH in Atlantic cod while increasing GST activities, leading the authors to suggest that the GSH depletion may be due to a phase II-mediated conjugation of nonylphenol with GSH by GST (Sturve et al. 2006). Induction of GST activity was also observed following NP treatment in the polychaete Nereis succinea (Ayoola et al. 2011). In studies in which GST enzyme activity was used as a biomarker in polychaetes exposed to various heavy metals and/or metal-polluted environments, enzyme activity either increased or decreased in different patterns according to the elements studied or the exposure conditions (Mosleh et al. 2006; Won et al. 2011).
In the present study, GSH decreased after exposure to E2, while GST activity did not increase. Further studies are needed to understand the synthesis, consumption, and/or regeneration of GSH in P. aibuhitensis. In addition, the decrease in the activities of SOD, GST and CAT, together with the depletion of GSH after E2 administration, indicates a reduced xenobiotic transformation metabolism in P. aibuhitensis. This is similar to what has been observed in Sparus aurata injected with E2, in which EROD, GST and CAT activities were depleted (Carrera et al. 2007).

Conclusions
Chronic exposure to E2 did not affect the weight gain or specific growth rate of P. aibuhitensis. However, the activities of P. aibuhitensis antioxidant enzymes (CAT, SOD and GST) were decreased after exposure to E2, which may indicate a decreased capacity of the cellular antioxidant system. Treatment with E2 also resulted in decreased levels of GSH and of the GSH/GSSG ratio, which could potentially lead to damage to the immune system and a reduced resistance to disease.
APHTHOUS ULCERATION – RISK FACTORS AMONG DENTAL STUDENTS AT KHYBER COLLEGE OF DENTISTRY, PESHAWAR

OBJECTIVES: The aim of this study was to assess the risk factors of aphthous ulceration (AU) among dental students.
METHODOLOGY: This cross-sectional study was conducted at Khyber College of Dentistry, Peshawar in March 2021 among dental students of all four professional years. It was a questionnaire-based study. The questionnaire comprised demographics and questions about aphthous positivity and risk factors. Risk factors were stress, family history, menstruation (hormonal changes), food allergy, gastrointestinal diseases and medication. Most questions were closed-ended. The Hospital Anxiety and Depression Scale (HADS) was used to assess anxiety and depression.
RESULTS: A total of 245 dental students responded to the questionnaire. Of these, 117 (47.8%) gave a positive history of AU. Mean age in AU-positive subjects was 23±1.5 years. AU was seen in males (48.7%) and females (51.3%), with no association between AU and gender. Among risk factors, 69 (28.2%) had a positive family history, 93 (38%) had stress and 8 (3.3%) reported menstruation. The present study recorded spice (3.7%), fast food (0.4%), sweets (0.4%), dry fruit (0.4%) and walnut (0.4%) as food allergies related to AU. Only 13 (5.3%) reported a gastrointestinal disease. Medications such as NSAIDs and antihypertensives were not involved in any AU case. Anxiety and depression were present in 65% and 38.5% of those with AU positivity, respectively.
CONCLUSION: Stress was the most common risk factor, and a positive family history turned out to be the second most common risk factor, for AU among dental students.

INTRODUCTION: Aphthous is derived from the Greek word "aphtha", meaning ulceration. Aphthous ulcer (AU) is the most common oral ulcer. It has a prevalence of 25% out of 4% of all oral ulcers 1 . Epidemiological data suggest that AU affects 2-66% of the population worldwide 2 . A recurrent and self-limiting nature, with involvement of only non-keratinized oral mucosa, is the classic clinical presentation of AU 3 . The lesions appear as small, round ulcers with well-delineated margins and are associated with moderate to intense pain 4 . AU is categorised into 3 types, minor, major and herpetiform, on the basis of size. Minor AU accounts for greater than 80% of all 3 types 5 . Major AU can lead to scarring 6 . AU, which is often initiated during childhood, involves both genders evenly, while some research shows a female predominance 7 . The largest study on AU included 10,000 young individuals from 21 different countries and showed that 38.7% of males and 49.7% of females get AU during their lifetime 8 . A multifactorial aetiology is suggested for AU, but the exact aetiology remains largely unknown 7 . Factors such as diet, hypersensitivity, medications, hormones, smoking, trauma, and psychological stress are considered risk factors 8 . Genetics is also linked to AU, in particular many DNA polymorphisms (NOD-like receptor 3, toll-like receptor 4, interleukin 6, E-selectin, IL-1β and TNF-α genes) 9 . In addition to this, evidence of immune dysfunction as a causative factor also exists 8 . Scholars have suggested that dietary control can lead to AU remission, due to its close relation to diet. In previous studies assessing the relation of AU to diet, some have shown spicy and fried food as a risk factor, while some have verified allergic foods for AU 10 . AU occurs at a higher frequency at times of stress. In the Indian population, stress turned out to be the most common risk factor leading to recurrent AU 11 . A negative correlation of tobacco with AU was concluded by the Axell and Henricsson study, due to leukoedema that prevents penetration of antigens into the epithelium 8 . Less data are available about risk factors of AU among dental students. Identification of common risk factors for dental students will allow better control of AU by avoiding the trigger/risk factor altogether, thus improving the quality of life and performance of dental students, as AU can hamper the smooth running of the daily routine by affecting speech, food intake and mastication. Also, AU is sometimes a source of unnecessary worry and stress, which leads to the development of more aphthous ulcers. The aim of this study was to assess the risk factors of AU among dental students.

METHODOLOGY: This cross-sectional study was conducted at Khyber College of Dentistry, Peshawar in March 2021. A convenience sampling method was used. The sample included students of all four professional years of BDS (Bachelor of Dental Surgery). House officers, postgraduate residents and faculty were excluded from this survey. Formal ethical approval was sought from the concerned institute before conducting the survey. A questionnaire, validated from the Ajmal M et al study 6 , was distributed among the participants. The questionnaire comprised demographics of age, gender, and professional year. After providing a very brief description of AU, a yes/no question on AU occurrence was asked in section 1. Further questions regarding the presence of risk factors of AU were put forward in section 2, which was to be filled in only by those with aphthous positivity. Risk factors that might be related to AU were family history, stress (professional, exam, emotional, social etc.), food allergy, menstruation (hormonal changes), gastrointestinal illness (Crohn's disease, celiac disease, ulcerative colitis, Helicobacter pylori infection, peptic ulceration) and medications. History of smoking was also sought, in AU-positive subjects only. Closed-ended questions were used. The allergic food item, GIT disease and medication, if any, had to be mentioned separately. The last section of the questionnaire was the HAD scale to assess stress; HAD-A and HAD-D scores greater than 7 depicted significant anxiety and significant depression, respectively. Data were analysed in SPSS version 25.0. Mean and standard deviation were calculated for quantitative variables like age. Frequencies and percentages were calculated for qualitative variables like gender, positive history of AU and risk factors, including anxiety and depression. Associations between categorical variables (AU vs. gender, risk factors) were tested using the chi-square test. Statistical significance was set at p < 0.05 for all associations.

RESULTS:
Figure 1: Frequency of risk factors of AU in dental students.
Anxiety was found in 143 (58.4%) dental students, whereas depression was seen in 82 subjects (33.5%), as depicted by the HAD scale. The gender distribution of subjects with AU is shown in Table 1. Mean age in AU-positive subjects was 23±1.5 years. There were 7 (6%) AU-positive students who smoked as well. The frequency of anxiety and depression among dental students with AU positivity is depicted in Table 2 and Table 3. No association between AU positivity and gender was found (p=0.20). AU positivity and family history showed a significant relationship (p=0.00). A significant relation of AU with stress and anxiety was also depicted (p=0.00 and p=0.04, respectively).

DISCUSSION: The prevalence of AU can range between 5-60%, according to the group examined 8 .
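As a side note on the chi-square testing named in the methodology, the following is a minimal Python/scipy sketch; the 2×2 counts are invented for illustration and are not the study's tabulated data:

```python
# Sketch of the chi-square association test described in the methodology.
# The 2x2 counts are invented for illustration; they are not study data.
from scipy.stats import chi2_contingency

#               AU+  AU-
table = [[57,  68],   # male
         [60,  60]]   # female
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# p > 0.05 would be read, as in the text, as no AU-gender association.
```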
Among dental students, an increased prevalence of AU has been reported in several studies 7 . This study showed a frequency of 47.8%, which is quite similar to the 44% prevalence seen in the dental students of the Rathod U study 1 . Similarly, the Prithi R research study 3 reported a 50% incidence among dental students, which is again close to our finding. On the contrary, Al-Johani K 7 concluded a low positive history of 21.7%. Dental students are under the stress of a vast academic syllabus and tough clinical work, which can predispose them to a greater AU incidence, as seen in this study.

This study found little difference in AU occurrence between males (48.7%) and females (51.3%), with no association found between AU and gender. In contrast, participants of the George S study 12 reported a higher incidence in females. Sharma M et al's findings were the same, with females affected more than their male counterparts 13 . This may be due to a lower threshold of females for stress, an important risk factor of AU. Another factor can be the presence of hormonal changes due to menstruation and pregnancy. Despite the high prevalence seen in females, Leonardo, Ship and Chattopadhyay failed to demonstrate a relation between AU and gender, which corroborates this study 5 .

In an observational study by Compilato et al in adult subjects, AU was more common at 38 years of age 13 . In this study too, AU positivity increased with increasing age, with a minimum of 0.9% at 18 years. The mean age with SD for AU positivity was 23±1.5 years. Usha R et al's results, on the other hand, showed 19-20 years as the prevalent AU age group 1 . With increasing age, the stressors of life also increase. As age increases, the professional year also increases, which ultimately results in a greater workload, a greater fear of failure, and more worries about a future job.

Stress was the most common risk factor (38%) in the present study, with 65% anxiety and 38.5% depression. A significant association of AU was found with stress and anxiety in this study. Al-Johani K's results showed stress as the most frequent risk factor 7 (53%), which is in accordance with this result. Regarding the association, a positive correlation of AU with stress was also stated in the Sharma M et al study 13 . The Ajmal M et al study similarly depicted an association between anxiety and AU 8 . The stress hormone cortisol is present at a higher level in the saliva of patients with AU. Depression was not significantly related to AU in the current survey (p=0.11), which is very much in accordance with the Soto-Araya results 8 . Stress and anxiety among dental students can be multifactorial, from professional exam stress to Covid-19 related stress and anxiety. Covid-19 related stress could be greater in dental students as compared to others, as most dental procedures involve aerosols, thereby elevating the risk of contracting the disease.

A positive family history increases susceptibility to AU 9 . The present study revealed a positive family history as the second most common risk factor (28.2%), which depicted a statistically significant relationship. Similarly, a significant association of AU with a positive family history was found in the George S et al study 12 . But Jabar SK et al reported a positive family history in 60% 14 , which is greater than the result of this study. It has been postulated that more severe AU, with an onset at an early age, is seen in patients with a positive family history 15 . Another study showed 40% of patients with a family predisposition 14 , whereas in this study it is 28.2%. A positive family history marks the genetic predisposition to AU.
Menstruation was reported in 8 (3.3%) subjects, similar to the Maged A finding of 2% 16 . Ajmal et al found a relation between AU and menstruation, whereas most of the other studies have not established an association between the two 8 . The small percentage reporting menstruation as a risk factor can be attributed to its relevance to females only, whereas males do not exhibit any obvious event of monthly hormonal changes.

The present study recorded, at low frequencies, spice (3.7%), fast food (0.4%), sweets (0.4%), dry fruit (0.4%) and walnut (0.4%) as food allergies related to AU. In total, 6.1% of students with AU in this study reported a food allergy. In the Ajmal et al study, 11.8% of students had food allergy as a risk factor 8 . The Lakdawala study exhibited an association of AU with spicy food 5 . Sweet and acidic intake can cause AU by changing the pH inside the oral cavity. Nuts can reduce AU owing to the lubrication of the oral mucosa by their unsaturated fatty acids 10 .

Only 13 (5.3%) students reported a gastrointestinal disease (H. pylori infection, peptic ulceration, ulcerative colitis, IBS). Similarly, no association was shown between AU and systemic diseases like celiac and Crohn's disease in the Queiroz study 17 . H. pylori was positive in 3 students (1.2%), which according to the Al-Amad study 18 is not associated with AU. In this study, medications such as NSAIDs and antihypertensives, which are commonly notorious for AU, were not mentioned in any AU case. This may indicate that medication is not a common or important risk factor for AU. Smoking was present in 6% of participants with aphthous positivity, despite reports that smoking is protective against AU. This can be attributed to the co-existence of other risk factors, like stress and family history, alongside smoking. This is in accordance with the 6% of smoker patients in the Maged A et al study, although two of them reported a decreased frequency of AU after starting smoking 16 .

CONCLUSION: Stress, the most common risk factor for AU among dental students, is significantly related to AU. Educational and public health programmes on AU and stress can help increase this awareness. This could be the way forward for dental students to improve their quality of life against AU, by either tackling their stress levels themselves or seeking psychological/psychiatric help whenever required.
Incidence and preoperative predictors for major complications following radical nephroureterectomy

Background
Radical nephroureterectomy (RNU) is the reference standard for managing bulky, invasive, or high grade upper-tract urothelial carcinoma (UTUC). The UTUC patient population, however, generally harbors medical comorbidities, placing these patients at risk of surgical complications. This study reviews a large international cohort of RNU patients to define the risk of major complications and the preoperative factors associated with their occurrence.

Methods
Patients undergoing RNU at 14 academic medical centers between 2002 and 2015 were retrospectively reviewed. Preoperative clinical, demographic, operative, and comorbidity indices were recorded. The modified Clavien-Dindo index was used to grade complications occurring within 30 days of surgery. The association between preoperative variables and major complications occurring after RNU was determined by multivariable logistic regression.

Results
One thousand two hundred and sixty-six patients (707 men; 559 women) with a median age of 70 years and BMI of 27 kg/m2 were included. Over three-quarters of the cohort was white, 50.1% had baseline chronic kidney disease (CKD) ≥ stage III, 22.4% had a Charlson comorbidity index (CCI) score >5, and 17.1% had an Eastern Cooperative Oncology Group (ECOG) performance status ≥2. Overall, 413 (32.6%) experienced a complication, including 103 (8.1%) with a major event. The specific distribution of major complications included 49 Clavien III, 44 Clavien IV, and 10 Clavien V events. On univariate analysis, patient age (P=0.006), hypertension (P=0.002), diabetes mellitus (P=0.023), CKD stage (P<0.001), American Society of Anesthesiologists (ASA) score (P=0.022), ECOG (P<0.001), and CCI (P<0.001) were all associated with major complications. On multivariate analysis, ECOG ≥2 (OR 2.38, 95% CI 1.46-3.90, P=0.001), CCI >5 (OR 3.45, 95% CI 1.41-8.33, P=0.007), and CKD stage ≥3 (OR 3.64, P=0.008) were independently associated with major complications.

Conclusions
Major complications following RNU occurred in almost 10% of patients. Impaired preoperative performance status and baseline CKD are preoperative variables associated with these major post-surgical adverse events. These easily measurable indices warrant consideration and discussion prior to proceeding with RNU.

Introduction
Upper tract urothelial carcinoma (UTUC) is an uncommon genitourinary cancer that accounts for only 5% of all urothelial tumors (1). UTUC is three times more common in men than in women and primarily affects elderly patients, with peak incidence occurring after the 7th decade of life (2,3). Known risk factors for UTUC include smoking, cyclophosphamide use, aristolochic acid exposure, occupational hazards, chronic inflammation, and a history of bladder cancer (3). UTUC is particularly aggressive, with high rates of recurrence and progression; over 60% of patients present with muscle invasion at the time of diagnosis (4,5). Tumor stage and grade are the most established and reproducible prognostic factors in patients with UTUC (5,6). Radical nephroureterectomy (RNU) remains the gold standard for management of bulky, invasive, or high grade UTUC, with overall 5-year recurrence-free and cancer-specific survival probabilities of 69% and 73%, respectively (6). Endoscopic ablative techniques are an attractive option in select patients with pathologic low-grade and/or low burden UTUC, or in patients unable to tolerate extirpative surgery.
There are, however, no randomized clinical trials comparing endoscopic management with RNU. Admittedly, patients treated endoscopically have higher rates of ipsilateral upper-tract recurrence as well as of bladder cancer (7). Nonetheless, up to 38% of RNU patients develop a complication, and prior retrospective studies have identified a significant reduction in mean GFR after surgery, thereby limiting the role of adjuvant chemotherapy for locally advanced disease (8,9). The associated GFR decline further increases the potential for cardiovascular and all-cause mortality. Many patients undergoing RNU are older and harbor multiple comorbidities which place them at increased risk for postoperative complications. Preoperative nomograms have been constructed to better objectify the risk associated with RNU (9). This prior work has focused on all complication events that a patient may experience. Yet, the association between preoperatively measured patient-specific factors and major complications following RNU (which will likely impact convalescence and recovery) is less clear. This study reviews a multi-center cohort of RNU patients to identify the incidence of major complications as well as preoperative risk factors for their occurrence. Such information may improve patient counseling as well as present evidence to encourage less invasive treatment options when appropriate and applicable.

Methods
The charts of 1,266 patients with clinically localized, nonmetastatic, upper-tract urothelial carcinoma (UTUC) undergoing RNU at 14 academic medical centers between 2002 and 2015 were reviewed. RNU was performed via either an open or a minimally invasive approach, with regional lymphadenectomy and specific bladder cuff management at the discretion of the operating surgeon. All specimens were confirmed as urothelial carcinoma on pathologic review. Preoperative clinical, demographic, and select patient-specific comorbidities were collected for analysis. Perioperative complications occurring within 30 days of surgery were graded using the modified Clavien-Dindo classification scale (10). The number, type, and severity of all complications were included. In accordance with standard reporting scales, major complications were classified as Clavien grade ≥ III, while minor complications were Clavien grade ≤ II. Univariate and multivariate logistic regression analyses determined the association between preoperative variables and Clavien III or greater post-RNU complications (a schematic sketch of such a model is given below). A multivariable model included all possible preoperative predictors, including age, race, gender, ECOG performance status, Charlson Comorbidity Index (CCI), American Society of Anesthesiologists (ASA) score, body mass index, individual comorbidities, and receipt of neoadjuvant chemotherapy. Importantly, in this study we specifically focused on preoperative patient characteristics, as opposed to surgical approach or intraoperative events. The rationale was to provide practicing urologists with information regarding preoperative baseline comorbidities, ascertainable in the office, that would impact patient outcomes following RNU. A P value of <0.05 was set as the threshold for statistical significance.

Results
Table 1 highlights the clinicopathologic characteristics of the patients undergoing RNU. A total of 1,266 patients (707 men and 559 women) with a median age of 70 years and body mass index of 27 kg/m2 were included.
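The following is a hypothetical sketch of the kind of multivariable logistic model described in the Methods. The data are simulated, the variable names are invented, and the seeded coefficients merely echo the direction and magnitude of the reported odds ratios; this is not the study's actual analysis or code.

```python
# Simulated illustration of a multivariable logistic model for major
# (Clavien >= III) complications. Not the study's data or code.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1266
df = pd.DataFrame({
    "ecog_ge2": rng.binomial(1, 0.17, n),  # ECOG performance status >= 2
    "cci_gt5":  rng.binomial(1, 0.22, n),  # Charlson comorbidity index > 5
    "ckd_ge3":  rng.binomial(1, 0.50, n),  # baseline CKD stage >= III
    "age":      rng.normal(70, 10, n),     # years
})
# Log-odds chosen so the simulated ORs echo the reported ones
# (ln 2.38 ~ 0.87, ln 3.45 ~ 1.24, ln 3.64 ~ 1.29).
logit = -3.8 + 0.87*df.ecog_ge2 + 1.24*df.cci_gt5 + 1.29*df.ckd_ge3
df["major_compl"] = rng.binomial(1, 1.0/(1.0 + np.exp(-logit)))

X = sm.add_constant(df[["ecog_ge2", "cci_gt5", "ckd_ge3", "age"]])
fit = sm.Logit(df["major_compl"], X).fit(disp=False)
print(np.exp(fit.params))  # odds ratios
print(fit.pvalues)
```

Running the sketch recovers odds ratios near the seeded values, which is all it is meant to show.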
Over three-quarters of the cohort was Caucasian, approximately 50% had an ASA score >3, 22% had a CCI score >5, and 17% had an Eastern Cooperative Oncology Group (ECOG) performance status ≥2. Of the patients included, 54.3% had hypertension, 50.1% had baseline chronic kidney disease (CKD) stage III or worse, 37.3% had hyperlipidemia, 20.8% had coronary artery disease, 17.1% had diabetes mellitus, and 14% had baseline pulmonary disease. Only 7.1% of patients received neoadjuvant chemotherapy.

Discussion
While RNU with excision of the ipsilateral bladder cuff is the gold-standard therapy for patients with UTUC and normal contralateral kidney function, there remains a significant risk of perioperative complication due to pre-existing medical comorbidities in this patient population (8,9). In the present study of almost 1,300 patients undergoing RNU at 14 academic medical centers, 413 patients (32.6%) experienced a complication, including 103 (8.1%) patients with a Clavien grade III or higher complication. Of these 103 patients with major complications, 49 were Clavien III (47.6%), 44 were Clavien IV (42.7%), and 10 patients suffered Clavien V (9.7%) mortality events. The rate of major complications in this large series is similar to prior studies, but slightly higher than those reported in a systematic review of laparoscopic RNU versus the open technique for management of UTUC (4.6% vs. 3.8%, respectively) (11-13). We hypothesize this may be attributable to the inherent case-mix bias seen at tertiary care academic medical centers.

This study identified several patient-specific factors as independent predictors of perioperative complications. On univariate analysis, patient age (P=0.006), ASA score (P=0.022), ECOG performance status (P<0.001), CCI (P<0.001), hypertension (P=0.002), diabetes mellitus (P=0.023), and CKD stage (P<0.001) were all associated with major complications following RNU. On multivariate analysis, only ECOG ≥2 (OR 2.38, P=0.001), CCI >5 (OR 3.45, P=0.007), and CKD stage ≥3 (OR 3.64, P=0.008) were independently associated with major complications. These findings show concordance with observations noted in prior work. Specifically, a previous multi-institutional study with 427 patients determined that ECOG performance status was associated with perioperative mortality and worse overall survival (14). Another multi-institutional study involving 731 patients identified patient age, race, ECOG, CCI, and CKD as independent predictive factors for all complications following RNU, and these patient-specific variables were used to construct a preoperative nomogram to predict complications within 30 days for UTUC (9).

The results of the current study involving 1,266 patients from 14 academic medical centers highlight performance status and increased CKD stage as independent predictors of major complications following RNU. Other patient-specific factors, including BMI, ASA score, hypertension, and diabetes mellitus, were not associated with major complications. These observations improve our ability to counsel patients diagnosed with this uncommon malignancy. Indeed, while RNU remains the reference for management of UTUC, urologists must recognize the reality of treatment-related morbidity. In that regard, conservative therapies may present a viable (and possibly safer) option in those patients with lower risk disease.
Indeed, improved risk stratification and refinements in the ability to deliver adjuvant intracavitary therapies may further enhance the effectiveness of kidney-preserving UTUC procedures.

We recognize that our analysis focused on preoperative factors associated with major complications, and we elected not to incorporate operative variables such as surgical approach, for several reasons. Firstly, in retrospective studies, surgical approach is inherently impacted by surgeon expertise and case selection bias; indeed, large series from population-based administrative datasets provide conflicting data, likely due to unmeasured factors (15,16). Secondly, we sought to specifically explore factors that could be measured prior to the operative procedure. These factors would be most beneficial in counseling and setting expectations before a decision on surgical approach. Finally, if one considers surgical approach, then an array of other operative factors merit inclusion, such as lymphadenectomy (and its extent) and ipsilateral bladder cuff management. Individually, each surgical nuance presents its own associated benefits and risks. Therefore, the analysis did not include these variables.

Several limitations of this study warrant mention. The retrospective and multicenter design could contribute to variable accuracy and annotation in the grading of complications and in how these complications were managed at each institution. Secondly, these experiences also reflect academic urologic practices and therefore may be subject to referral bias with regard to surgeon expertise as well as patient comorbidity profile. Nonetheless, despite these limitations, this is the largest cohort to date of UTUC patients undergoing radical surgery, and the first to report impaired baseline performance status and increasing CKD stage as independent variables in the development of major postoperative complications following RNU.

Conclusions
Major complications occur in 8% of patients undergoing RNU. Impaired preoperative performance status (as determined by ECOG or CCI) and baseline CKD are associated with major postoperative adverse events. These easily measurable indices warrant consideration prior to proceeding with RNU.
Complex magnetism of lanthanide intermetallics unravelled

We explain a profound complexity of the magnetic interactions of some technologically relevant gadolinium intermetallics using an ab-initio electronic structure theory which includes disordered local moments and strong $f$-electron correlations. The theory correctly finds GdZn and GdCd to be simple ferromagnets and predicts a remarkably large increase of Curie temperature with pressure of +1.5 K kbar$^{-1}$ for GdCd, confirmed by our experimental measurements of +1.6 K kbar$^{-1}$. Moreover, we find the origin of a ferromagnetic-antiferromagnetic competition in GdMg, manifested by non-collinear, canted magnetic order at low temperatures. Replacing 35% of the Mg atoms with Zn removes this transition, in excellent agreement with longstanding experimental data.

Lanthanide compounds play an increasingly important role in the development of novel materials for high-tech applications which range from mobile phones and radiation detectors to air conditioning and renewable energies. Much of this stems from their magnetic properties, so that they are indispensable components in permanent magnets [1], magnetoresponsive devices for solid state cooling [2] and other applications. Common to all the lanthanide elements is their valence electronic structure, which makes them chemically similar and also causes magnetic order. Lanthanide atoms are predominantly divalent (5d^0 6s^2 valence electron configuration), becoming mostly trivalent in a solid, donating three valence electrons to the electron glue in which the atomically-localised f-electron magnetic moments sit. The interaction between these moments derives from how the electron glue is spin-polarised. The longstanding RKKY [3] (Ruderman-Kittel-Kasuya-Yosida) free electron model of this electronic structure is typically used to try to explain the many features of the indirect coupling of the 4f-electron moments, despite its rather poor representation of the narrow-band 5d-states. The possible importance of the latter has already been inferred from some earlier electronic structure studies [4-7].

Whilst theoretical aspects of lanthanide magnetism are well understood at the phenomenological level, predictive first-principles calculations are challenging owing to the complexities of the strongly correlated f-electrons and itinerant valence electrons, along with the magnetic fluctuations generated at finite temperatures. In this letter we explore lanthanide compounds with an ab-initio theory based on Spin Density Functional Theory (SDFT), in which the self-interaction corrected (SIC) local spin density (LSD) method [8,9] provides an adequate description of f-electron correlations [10-12] and the disordered local moment (DLM) theory [13] handles the magnetic fluctuations. We are able to give a quantitatively accurate description of the diverse magnetism of the CsCl (B2) ordered phases of Gd with Zn, Cd and Mg, which we test against experimental data, and show the complex role played by the spin-polarised valence electrons.
Local moments of fixed magnitudes are assumed to persist to high temperatures, and in lanthanide compounds they are formed naturally from partially occupied localised 4f-electron states. The orientations of these moments fluctuate slowly compared to the dynamics of the valence electron glue surrounding them. By labelling these transverse modes by local spin polarisation axes fixed to each lanthanide atom i, ê_i, and using a generalisation of SDFT [13] (+SIC [14,15]) for prescribed orientational arrangements {ê_i}, we can determine the ab-initio energy for each configuration, Ω{ê_i} [15-19], so that the configuration's probability at a temperature T can be found. The magnetic state of the system is set by an average over all such configurations, appropriately weighted, and specifies the magnetic order parameters {m_i = ⟨ê_i⟩}, where the magnitudes m_i = |m_i| range from 0 for the high temperature paramagnetic (PM), fully disordered, state to 1 when the magnetic order is complete at T = 0 K. A distribution where the order parameters are the same on every site, {m_i = m_f ẑ} say, describes a ferromagnetically ordered (FM) state, whereas one where the m_i alternate layer by layer between m_a x̂ and −m_a x̂ characterises an antiferromagnetic (AF1) order. The free energy function F({m_i}), written in terms of these magnetic order parameters m_i, monitors magnetic phase transitions. It contains the effects of the spin-polarised valence electronic structure, which adapts to the type and extent of magnetic order [18,20,21]. For lanthanide materials DLM theory describes how valence electrons mediate the interactions between the f-electron moments. These can turn out to be RKKY-like, but can also show strong deviations from this picture, as we find here for simple Gd-containing prototypes.

We start with GdZn, of particular interest in solid state cooling [22], but also because we expect its electronic structure to be straightforward [6,23]. The Gd atoms occupy a simple cubic lattice of the CsCl (B2) ordered phase. Our first-principles SIC-LSD calculations find the ground state Gd-ion configuration to be trivalent (Gd^3+), with seven localised f-states constituting a stable half-filled shell, in line with Hund's Rules [10,11,18]. So Gd, of all the heavy lanthanides, has the relative simplicity of an S-state for its f-electrons, largely uncomplicated by crystal field effects and spin-orbit coupling. This permits a clinical look at how the interactions between large 4f magnetic moments are mediated by the valence electrons. These come from both the lanthanide (5d^1 6s^2) and the post-transition metal Zn, which has low-lying, nominally filled d-shells (3d^10) added to its two s-electrons. Our ab-initio DLM theory can thus investigate the effect of the lanthanide 5d electrons hybridising weakly with 3d states. This touches on a very important aspect of many magnetic materials containing both rare earth and transition metal elements [24], where understanding the interplay between the localised lanthanide magnetic moments and the more itinerant magnetism originating from the transition metal d-electrons is paramount for the design of more efficient materials.
Our DLM theory calculations for the paramagnetic state of GdZn produce local moments of magnitude µ ≈ 7.3 µ_B on the Gd sites pointing in random directions, so that there is no long range magnetic order, {m_i = 0}. The calculated paramagnetic susceptibility [15,16,18], χ(q), with a maximum at wave-vector q_max = (0, 0, 0), shows that, in accord with experiment, GdZn develops ferromagnetic (FM) order below a Curie temperature T_c = 184 K (at the theoretically determined lattice constant, a = a_th = 6.62 a.u.), somewhat lower than the experimental value of T_c = 270 K [25] (at a = a_exp = 6.81 a.u.). We find that GdZn's T_c gradually decreases under pressure, P, with a calculated dT_c/dP = −0.45 K kbar^−1, which agrees reasonably well with the experimental value of −0.13 K kbar^−1 from the literature [26] (Fig. 1(a)). The negative dT_c/dP is typical of many metallic magnets, owing to pressure-induced band broadening and the diminished energy benefit from spin-polarising the valence electrons around the Fermi energy.

Naïvely one might expect similar effects if Zn is replaced with isoelectronic Cd, whose filled 4d-band states are simply more extended than the 3d's of Zn. Our calculations, however, show something rather different. Whilst both theory and experiment find GdCd to be a simple ferromagnet like GdZn, with T_c = 234 K (a_th = 6.98 a.u.) and 265 K [27] (a_exp = 7.09 a.u. [18]), in sharp contrast to its results for GdZn, theory predicts its T_c to increase quite dramatically with pressure (Fig. 1), i.e. a positive and rather large dT_c/dP. Owing to the paucity of reliable published experimental pressure data for GdCd [28], we have carried out measurements [18] to test this specific prediction, and a comparison between the calculated and experimentally observed T_c's for GdCd as a function of pressure is shown in Fig. 1(a). The theory-experiment agreement is excellent: dT_c/dP from theory is +1.5 and from experiment is +1.6 K kbar^−1. Whilst not unusual for first-order magnetostructural transitions (e.g. ≈ 1-3 K kbar^−1 is observed in Gd_5Si_xGe_(4−x) alloys [29]), this is a rather high rate for a second-order transition such as occurs in GdCd. Reasons for this stark difference between GdZn and GdCd are found from our T_c calculations as a function of lattice parameter a (Fig. 1(b)). Starting from large values, T_c initially increases with decreasing Gd-Gd distance for both GdZn and GdCd, reaching a maximum, whence it starts decreasing with further reduction of the Gd-Gd distance. The dT_c/dP's shown in Fig. 1(a) originate from where the two compounds have their equilibrium lattice spacings, a_th, marked by blue (GdZn) and red (GdCd) arrows in Fig. 1(b).

For materials with the same number of valence electrons per atom, the RKKY account of magnetic interactions would be the same. GdMg is isoelectronic with both GdZn and GdCd but has no filled 3d or 4d band of states. This difference leads to our DLM theory finding GdMg's PM state to be qualitatively different from GdZn's and GdCd's. We find a discordant blend of FM and AF1 dominant magnetic correlations in the PM state: the calculated paramagnetic χ(q) has two comparable peaks, at wavevectors (0, 0, 0) and (0, 0, 1/2) [18] (in units of 2π/a). Which one is stronger depends on the a values used. At the theory volume (a_th = 7.00 a.u. [c.f. a_exp = 7.20 a.u. [30]]), our calculations predict a FM state below T_c = 128 K. Reducing the Gd-Gd separation weakens the FM aspects and, for example, a 4% decrease leads to an AF1 state instead, below the Néel temperature T_N = 87 K.
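As a trivial aside on the pressure trends above, the quoted dT_c/dP values are simply slopes of T_c(P). A minimal sketch follows (Python); the points are synthetic placeholders generated from the quoted slopes, not the measured data of Fig. 1(a):

```python
# Estimating dTc/dP as a least-squares slope through (P, Tc) points.
# The points are synthetic placeholders built from the quoted slopes;
# they are not the measured data of Fig. 1(a).
import numpy as np

P = np.array([0.0, 2.0, 4.0, 6.0])        # pressure (kbar)
Tc = {"GdCd": 234.0 + 1.5 * P,            # theory slope +1.5 K/kbar
      "GdZn": 184.0 - 0.45 * P}           # theory slope -0.45 K/kbar

for name, tc in Tc.items():
    slope = np.polyfit(P, tc, 1)[0]
    print(f"{name}: dTc/dP = {slope:+.2f} K kbar^-1")
```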
We determine the magnetic order that evolves as T is lowered through the transition temperature to 0 K, as a consequence of these competing FM and AF1 effects, by using our DLM theory [18] for the first time to describe a magnetically ordered state with a canted structure, and repeating the analysis for a number of a values. We set the order parameters m_i, for the system at various stages of partial through complete magnetic order, to alternate between m_f ẑ + m_a x̂ and m_f ẑ − m_a x̂ on consecutive Gd layers along the (1, 0, 0) direction, giving a canting angle between layers of Θ_c = 2 arctan(m_a/m_f), so that the overall magnetization of the system is the local moment size µ (7.3 µ_B) times m_f. Here m_f ≠ 0, m_a = 0, Θ_c = 0 signifies a FM state, and m_a ≠ 0, m_f = 0, Θ_c = 180° an AF1 state. Fig. 2(a) summarises our results. These are the first ab-initio calculations to show canted magnetism (CM) in GdMg. The figure shows the emergence of a CM state from either a FM (Θ_c = 0) or AF1 (Θ_c = 180°) state. For low T, Θ_c ranges from 70° at the theoretical equilibrium lattice constant (0% reduction) through to 120° (4% reduction), before an AF1 magnetic structure (angle 180°) eventually forms with further reduction. This agrees very well with experiment [31], which finds that, upon lowering the temperature, GdMg orders into a FM state at T_c ≈ 110 K and then undergoes a further second-order transition into a canted magnetically ordered state at T_F ≈ 85 K [31,32]. At low T the magnetisation is ≈ 5 µ_B, a value we have also confirmed with our own experimental measurements. This is indicative of the FM and AF components, m_f and m_a, being roughly the same size, giving a canting angle between the 7 µ_B-sized Gd moments of roughly 90°. This state is robust against applied magnetic fields [32] of up to 150 kOe. The experimental results are matched almost exactly by our calculations, shown in Fig. 2(a) for a 2% lattice spacing reduction from a_th. Liu et al. [31] also found that under pressure GdMg orders into an AF1 phase from the PM phase at ≈ 100 K and, at a lower temperature, undergoes a further first-order metamagnetic transition into a canted FM phase. The authors estimated the pressure derivative of the magnetisation to be −0.04 µ_B kbar^−1 at 4.2 K, which we have also confirmed experimentally, in fair agreement with our calculated low-T value of −0.02 µ_B kbar^−1.

Experimentally it is known that when Gd is replaced by Tb in GdMg, there is a 1% lattice contraction [39], and the FM state undergoes a transition into a canted magnetic structure at low T with Θ_c of at least 90°. Replacing Gd with Dy leads to a larger lanthanide contraction, and measurements [40] show that DyMg orders into an AF1 state, developing a non-collinear structure with a FM component at low T and Θ_c of about 110°. This correlates with Fig. 2(a) [18] for the smaller lattice spacing regime. The little available data for Ho-Mg [39] also indicate a canted AF magnetic structure at low T. So we infer that the lanthanide contraction [15] in part causes the transition from FM-canted to AF-canted magnetic structures as the heavy lanthanide series is traversed. Our Fig. 2(a) also implies a tricritical point (PM-AF-FM) at some concentration y in the (Tb_(1−y)Dy_y)Mg alloy system, with a transition to a canted structure at a marginally lower temperature, or possibly a transition into a canted structure directly. This unusual canted magnetism of GdMg is evidently destroyed by nominally filled, low-lying 3d or 4d bands from the non-lanthanide constituent. Our calculations, Fig. 2(b), show what happens when a fraction x of the Mg sites in GdMg is replaced by Zn. T_c increases with x, and the low temperature canted structure vanishes altogether for x > 0.35. This observation is in excellent agreement with the experimental data for GdMg_(1−x)Zn_x of Buschow et al. [33], who gave an early report of a serious shortcoming of the RKKY picture.
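To make the canted-order parametrization above concrete, a minimal numerical sketch follows (Python; the input order parameters are illustrative choices, not computed DLM values):

```python
# Canted-order bookkeeping from the text: consecutive Gd layers carry
# order parameters m_f z +/- m_a x, so the canting angle is
# Theta_c = 2*arctan(m_a/m_f) and the net magnetization is mu*m_f.
import math

MU_GD = 7.3  # Gd local moment (Bohr magnetons), from the text

def canting_angle_deg(m_f: float, m_a: float) -> float:
    return math.degrees(2.0 * math.atan2(m_a, m_f))

def net_magnetization(m_f: float) -> float:
    return MU_GD * m_f

# Fully ordered moments split equally between FM and AF components:
m_f = m_a = 1.0 / math.sqrt(2.0)
print(canting_angle_deg(m_f, m_a))   # 90 degrees
print(net_magnetization(m_f))        # ~5.2 mu_B, close to the ~5 mu_B
                                     # low-T GdMg value quoted above
```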
The successful capturing of these unusual temperature and pressure trends of the Gd intermetallics' magnetism is a consequence of the theory's detailed description of the valence electrons. The theory includes both the response of these electrons to the magnetic ordering of the f-electron local moments as well as their effect upon it. Fig. 3 shows the non-free-electron-like PM valence density of states (DOS) of GdMg, GdZn and GdCd at a_th, for an electron spin-polarised parallel and anti-parallel to the local moment on the Gd site [13]. Averaged over equally weighted moment orientations, the DOS is unpolarised overall. Below T_c the electronic structure adjusts and spin-polarises [18] when magnetic order develops. The Gd f-moment interactions are properties of the electronic structure around the Fermi energy, ε_F. The Fermi surface (FS) of PM GdMg (for a = 0.96 a_th), Fig. 4(a), shows a distinctive box structure, so that a wave-vector (0, 0, 1/2) connects (nests) [36-38] large portions of parallel FS sheets and drives AF1 magnetic correlations. This topological feature is absent in GdZn's and GdCd's FS's. Weak hybridization between Gd 5d and lower-lying, nominally filled Zn 3d or Cd 4d states, shown in Fig. 3, causes complex differences between their electronic structures around ε_F and GdMg's. In GdZn the Zn 3d bands are narrower than GdCd's 4d ones and lie at slightly higher energies [18]. Moreover, we find that lattice compression increases the Gd d-state occupation relative to the sp ones in these compounds [34,35], which affects the FS topology. In particular, as shown in Fig. 4(b) for GdCd, we find that the peak in Fig. 1(b) correlates with a distinct electronic topological transition: a 'hot spot' formed by a hole pocket around k = (1/2, 1/2, 0), collapsing as a is reduced.

Atomically localised f-electrons and their intricate physics are inevitably the focus for lanthanide material studies. But the valence electron glue in which the f-moments sit also harbors surprises. Its s-, p- and d-electrons can shift it far from a nearly free electron model, as exemplified by the canted magnetism of GdMg and the stark contrast of the magnetism of isoelectronic GdZn and GdCd, with their disparate pressure variations. The predictive ab-initio computational modelling described here has successfully accounted for the subtle aspects of the valence electrons' spin polarisability around ε_F and how it is affected by occupation of the lower-lying lanthanide-other metal d-electron bonding states. This implies that further successful quantitative modelling of the rich variety of technologically useful lanthanide-transition metal materials must also treat the valence electronic structure accurately and in quantitative detail. We have shown that coordinated ab-initio theory-experimental studies have the capability of producing new guidelines for understanding the magnetism in lanthanide-transition metal magnets. Factors such as the average number of valence electrons or band-filling, the separation in energy of the lanthanide 5d and the other constituents' d-bands, and the valence band widths, reminiscent of the modern analogs of the famous Hume-Rothery rules [41] for alloy phase stability, will influence the nature of the valence electron structure around ε_F and the magnetism it supports.
FIG. 1: (a) Comparison between theory (open symbols) and experimental [18] (filled symbols) T_c differences, (T_c(P) − T_c(0)), as a function of pressure P, for GdZn (blue circles) and GdCd (red squares). The experimental data for GdZn are from Ref. (26). (b) T_c of GdZn (blue circles) and GdCd (red squares) as a function of lattice parameter a (atomic units), calculated from the theory. The vertical arrows indicate a_th, red for GdCd and blue for GdZn.

FIG. 2: (a) The magnetic phase diagram of GdMg, represented by the canting angle Θ_c(T) and its dependence on lattice spacing, alongside a schematic picture of the CM state. (b) The magnetic phase diagram of GdMg_(1−x)Zn_x, Θ_c(T), and its dependence on x for a fixed lattice spacing equal to the 2% reduction value in (a), equal to a_th of GdMg_0.6Zn_0.4.

FIG. 3: The local density of states (DOS) at a = a_th for the PM states of GdMg, GdZn and GdCd, resolved into Gd (black curve) and Mg, Zn or Cd anion (red) components. The Gd d-component (green curve, shaded underneath) and anion d-component (blue curve) are also shown. The upper (lower) panel shows the DOS for an electron spin-polarised parallel (anti-parallel) to the local moment on the Gd site. The total DOS, an average over all directions, is unpolarised.
Failure Investigation: Analysis Procedure and Some Notable Aircraft and Aeroengine Service Failures

Failure of an aircraft structural component can have catastrophic consequences, with a resultant loss of life and of the aircraft. The investigation of defects and failures in aircraft structures is thus of vital importance in preventing further incidents. This review discusses the common failure modes observed in aircraft structures, with examples drawn from case histories. The review presents three notable aircraft and aeroengine service failures investigated by the National Aerospace Laboratory NLR in the Netherlands. All three failures were initiated by corrosion.

Introduction
Failure of an aircraft structural component can have catastrophic consequences, with a resultant loss of life and of the aircraft. The investigation of defects and failures in aircraft structures is thus of vital importance in preventing further incidents. This review discusses the common failure modes observed in aircraft structures, with examples drawn from case histories. The review will also outline the investigative procedures employed in the examination of failed components.

Any failure occurring in the structural materials used on air platforms needs to be studied in order to prevent similar accidents on other aircraft of the same fleet. Economic and safety issues are, moreover, driving forces for developing more efficient failure analyses and investigations [1].

The objective of the failure analysis
The Failure Investigation is undertaken to determine the cause of a failure and, if possible, to identify corrective actions that should be initiated to prevent similar failures [2]. The failure analysis is a powerful tool for solving engineering problems and can provide significant savings in the development of components.

General Stages of an Analysis
The nature of the failure being investigated determines what steps and processes must be employed. The following is a brief description of the typical stages of the Failure Investigation. These are guidelines to be considered by the investigator; however, not every stage may be utilized [2].

Collection of preliminary data
Initially, the Failure Investigation should be directed towards gathering as much information relating to the failure as possible, so as to reconstruct its sequence of events. This may involve obtaining material specifications, drawings, and records of the manufacturing, processing, and service history of the failed component or structure. Also, data on the parameters of the flight can be of great benefit to the investigation [2].

Photographic Records
It is important for the investigator to maintain a photographic record of the failed component (e.g. macroscopic views, fractographs, micrographs) throughout the analysis, to detail the characteristics of the failure [2].

Selection of the samples
It is critical that the investigator selects the correct components that accurately depict the failure. Sometimes this may involve seeking other evidence of damage beyond the apparently failed component [2].

Preliminary examination of the damaged components
Before cleaning, the condition of the damaged components should be described in the record "as received". This is useful in determining the sequence of events leading to the failure. A thorough visual inspection and detailed record keeping are very important for the success of the investigation.
Visual inspection should be performed initially with the unaided eye and subsequently with photographic microscopes. For fractured components, it is essential to document the entire component and then relate the fracture to the component as a whole [2].

Sample handling

Conservation (preservation)

Fracture surfaces are prone to mechanical and/or chemical damage, so the proper selection, cleaning and conservation of the fracture surface is very important to prevent its destruction or contamination. Fracture surfaces should never be touched by fingers, nor should the fractured sections of a part be fitted back together. They should also be covered with cloth or cotton during transport and stored in containers or rooms where moisture is removed, in order to preserve the existing state of corrosion [2].

Cleaning

The fracture surface should be cleaned only when necessary, for example in preparation for electron microscopy observation. Applied techniques are dry cleaning with compressed air, ultrasonic cleaning with solvents or a mild detergent, and the stripping of plastic replicas. Organic solvents should be avoided when working with plastic materials [2].

Cutting (sectioning)

Due to the size limitations of testing and assessment equipment, such as hardness testers and optical microscopes, it is sometimes necessary to cut out the portion of the damaged component containing the fracture surface. It is very important to protect the fracture surface when cutting, using coolant or protective coatings [2].

Opening of cracks

If primary cracks have not developed into a final fracture, they must be opened. On the other hand, if a primary crack is damaged or corroded so that the fracture surface is degraded, it is necessary to open secondary cracks for testing and study. These cracks can provide more information, since they are less damaged or corroded. The recommendation is to open a crack in such a way that the two fracture surfaces move apart from each other, perpendicular to the plane of the fracture (crack). Sometimes, differences in fracture mechanisms can be used to differentiate primary from secondary cracks [2].

Non-destructive testing

Non-destructive testing can be useful in detecting surface cracks and discontinuities in the damaged components [2,3]. Magnetic particle testing, eddy current testing, ultrasonic testing, liquid-penetrant inspection and radiography are some of the most commonly used non-destructive testing techniques.

Mechanical testing

Mechanical testing helps to establish whether the component is in accordance with the specification and to assess the impact of surface conditions on the mechanical properties [2,3].

Hardness: Hardness testing can be used to assist in assessing the heat treatment.

Tensile strength and toughness: These tests should be performed only if there is sufficient material for the preparation of test samples.

Dynamo-mechanical analysis: These tests should be performed on plastic materials and polymer composites, and only if there is sufficient material for the preparation of test samples.

Chemical and elemental analysis

In a Failure Investigation, a routine analysis is recommended to confirm that the material is the one listed in the specification. Minor deviations in composition are unlikely to be responsible for failure. It is useful to identify and determine the concentrations of elements in the alloy, in deposits, in samples of fluid from the environment, in lubricants, etc.
Various analytical techniques can be used in a failure investigation: emission spectroscopy, atomic absorption spectroscopy, inductively coupled plasma atomic emission spectroscopy, classical wet analytical chemistry, and spot tests. Elements present in the surface can be identified using energy-dispersive and wavelength-dispersive X-ray spectrometry (analytical techniques used for elemental analysis or chemical characterization of a sample).

Polymeric materials must be tested for their chemical nature and for their thermodynamic and kinetic parameters, by means of infrared spectroscopy, pyrolysis gas chromatography (a method of chemical analysis in which the sample is heated until its molecules decompose; the decomposition products are then separated by gas chromatography and detected by mass spectrometry) and thermal analysis [2].

Fractographic examination

Fractography is the interpretation of the features observed on fracture surfaces and, although it is simple in many cases, it can prove fairly difficult in practice. This is particularly the case for high-strength quenched and tempered steels, or for alloys (such as cast irons and pearlitic steels) where the microstructure affects the crack path [4]. The fracture surface is a source of information regarding the cause of the fracture. It contains evidence about material deficiencies, the influence of the environment, and the applied forces.

Macroscopic examination

The detailed examination of fracture surfaces at low magnifications (ranging from 1 to 100 times) may be done with the unaided eye, a hand lens, or a low-power optical microscope. The information obtained can give an indication of the stress system that produced the failure, the direction of crack growth and therefore the origin of the failure [2]. Microscopic examination of fractured surfaces can be achieved with optical (light) and scanning electron microscopy. However, the interpretation of fractographs requires practice and an understanding of the fracture mechanisms.

Analysis of the microstructure

Metallographic examination of polished and polished-and-etched sections by optical microscopy and electro-optical techniques is a vital part of the failure analysis and should be implemented as a routine procedure.

Defining the types of fracture

Examination of the failure region, the fracture surfaces, and metallographic sections is necessary to identify the fracture type. Fractures are usually classified according to the mechanism of growth: ductile, brittle, intergranular, transgranular, and fatigue. These mechanisms can be correlated with different types of fracture, such as failures incurred as consequences of environmental effects (stress-corrosion cracking, melting, liquid-plastic deterioration, liquid-metal embrittlement, hydrogen embrittlement and creep) and fractures incurred as a consequence of stress (overload, wear, impact load) [2].

Application of simulation tools

Finite Element Analysis (FEA) may be worthwhile in assessing how stresses act on a failed part, by calculating the stress concentration factor (Kt) and the residual fatigue life, either in the presence or in the absence of an existing crack or defect.

Application of the fracture mechanics theories

Fracture mechanics studies the formation of cracks, their development and the propagation of the fracture. It belongs to the part of materials science that relates to the final stage of the process of deformation under load.
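As a concrete illustration of the fracture mechanics framework just mentioned, the sketch below numerically integrates the Paris crack-growth law da/dN = C(ΔK)^m, with ΔK = Y·Δσ·√(πa), from a pit-sized initial flaw to an assumed critical depth. It is a generic teaching example, not a calculation from the NLR investigations: the material constants C and m, the geometry factor Y, the stress range and the critical depth are all hypothetical placeholder values.

```python
import math

def paris_life(a0, ac, delta_sigma, C, m, Y=1.12, steps=100_000):
    """Numerically integrate the Paris law da/dN = C*(dK)^m
    from an initial flaw depth a0 to a critical depth ac.

    a0, ac      : initial / critical crack depth [m]
    delta_sigma : stress range [MPa]
    C, m        : Paris constants (units consistent with MPa*sqrt(m))
    Y           : geometry factor (1.12 is a common shallow surface-crack value)
    Returns the estimated number of load cycles N.
    """
    da = (ac - a0) / steps
    a, N = a0, 0.0
    for _ in range(steps):
        dK = Y * delta_sigma * math.sqrt(math.pi * a)   # stress intensity range at depth a
        N += da / (C * dK ** m)                         # cycles spent growing by da
        a += da
    return N

# Illustrative numbers only: a 0.1 mm deep corrosion pit treated as the
# initial flaw, with generic aluminium-alloy Paris constants.
N = paris_life(a0=0.1e-3, ac=5e-3, delta_sigma=80, C=1e-11, m=3)
print(f"estimated fatigue life: {N:.3e} cycles")
```

In a real investigation the constants would come from material test data and the geometry factor from a handbook solution or from FEA of the actual part.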
Simulated service testing

It may be necessary to conduct tests that attempt to simulate the conditions under which the failure is believed to have occurred. Most of the metallurgical phenomena involved in failures can be satisfactorily reproduced on a laboratory scale, and the information derived from such experiments can be helpful to the investigator, provided the limitations of the tests are fully understood. Furthermore, simulated testing of the effects of selected variables encountered in service may be helpful in planning corrective action that will avoid similar failures or, at least, extend service life.

The rotor blade of the Sikorsky S-61N helicopter (1974)

In May 1974 a Sikorsky S-61N helicopter crashed into the North Sea with the loss of six lives. Fig. 9 shows the aircraft type and Fig. 10 shows the crashed helicopter during the recovery. All the main rotor blades were broken, but blade 3 was exceptional in showing little deformation at the fracture location (indicated in Fig. 10) [6,7]. Fig. 11a shows the recovered fracture surface of blade 3, consisting of a hollow spar made from aluminium alloy AA6061-T6. The spar had been adhesively bonded to an aluminium facing in the form of a ribbed aluminium pocket, as shown in Fig. 11b. The first phase in the failure sequence determined from the fractographic analysis was high-cycle fatigue initiating from corrosion pits on the spar lower surface under the bonded area (indicated by the red arrow in Fig. 11b) [6,7].

The propeller blade of Air Tractor AT-301 (1987)

In June 1987 an Air Tractor AT-301 fitted with a two-bladed propeller experienced loss of a propeller tip in flight. The pilot managed to switch the engine off and landed safely. Figure 12 shows the Air Tractor AT-301 aircraft at the time of the accident with a two-bladed propeller, while Figure 13 shows the same plane a year later, but this time fitted with a three-bladed propeller [6,8].

Figure 12. Air Tractor AT-301 (with a two-bladed propeller) [6]
Figure 13. Air Tractor AT-301 (with a three-bladed propeller), about one year after the incident [6]

Fig. 14 shows the broken propeller blade and a detail of its fracture surface. The propeller was manufactured by Pacific Propellers Inc., under license from Hamilton Standard, and consisted of an aluminium alloy AA2025-T6 forging that had been chromic acid anodised. Despite the anodised surface, the propeller showed pitting and exfoliation corrosion. The corrosion was most probably due to attack and penetration of the anodised layer by deposits, which included sea salt. The broken blade failed by high-cycle fatigue initiating from a 0.1 mm deep × 0.3 mm long corrosion pit on the convex surface [6,8].

Causes of failure

Propeller cleaning: The operator neglected to follow the manufacturer's recommendation to wash and oil the propeller blades after the last flight of the day.

Operating environment: The aircraft was used for crop spraying near the Dutch coast. This resulted in aggressive deposits on the propeller.

The lever arm pin of General Dynamics F-16 / Pratt & Whitney F100-PW-220 RCVV (1992)

In February 1992 a General Dynamics F-16 crashed between housing blocks in the city of Hengelo, without loss of life. The crash was caused by an engine failure. Figure 15 shows the General Dynamics F-16, while Figure 16 shows the crash site, including the remains of the Pratt & Whitney F100-PW-220 engine.
Fig. 17 indicates the location of the engine failure, which was due to fracture of a pin attached to a Rear Compressor Variable Vane (RCVV) lever arm [6,9]. The lever arm material was Inconel 718, a nickel-base superalloy. The pin material was Nitronic 60, a stainless steel. Detailed investigation showed that the pin had undergone stress corrosion cracking (SCC).

Causes of failure

Pin material: The SCC susceptibility of Nitronic 60 was unknown until it was identified by the NLR.

Residual stresses: Cold-upsetting the pin head to attach it to the RCVV lever arm resulted in high residual stresses. The combination of these stresses and a concentrated salt solution in the crevice between the lever arm and the pin resulted in SCC. Figures 19 and 20 show the pin fracture surface and evidence of the corrosion and SCC. The actual (physical) cause of the engine failure involved a sequence of events, see Figure 21 [6,9].

Required specialist knowledge

Investigating the foregoing service failures required a broad range of knowledge and expertise (Fig. 22).

Conclusion

Failure of an aircraft structural component can have catastrophic consequences, with a resultant loss of life and of the aircraft. Any failure occurring in structural materials applied on air platforms needs to be studied in order to avoid further accidents on other aircraft of the same fleet. Moreover, economic and safety issues are driving forces to develop more efficient failure analysis and investigations.

A Failure Investigation is undertaken to determine the cause of the failure and, if possible, to identify corrective actions that should be initiated to prevent similar failures. Failure analysis is a powerful tool for solving engineering problems and can provide significant savings in the development of components. The nature of the failure being investigated determines what steps and processes must be employed.

Investigating the foregoing service failures required a broad range of knowledge and expertise. The failures cover many classes of aircraft metallic materials. Fractographic analyses were always needed. Fractography had to identify the fracture mechanisms, including fatigue, stress corrosion and overload, and take account of environmental effects. Metallography was usually needed to aid or confirm the fractographic analyses. Chemical analyses were usually done to check on the materials.

Figure 1. Failure Investigation Process Flow
Figure 2. Some of the commonly used techniques of non-destructive testing [3]
Figure 3. Some of the commonly used techniques of mechanical testing [3]
Figure 5. Details of the lug fracture, which was close to welds joining the lug into the airframe. (In October 1990 an Aérospatiale Alouette III helicopter crashed in the Veluwe.) [6]
Figure 6. Details of the tail lug fracture, showing intergranular cracking followed by fatigue. The failure occurred by fatigue initiating from intergranular cracks on the upper side. The intergranular cracks were heat-tinted. (In October 1990 an Aérospatiale Alouette III helicopter crashed in the Veluwe.) [6]
Figure 8. Basic shapes of crack development and shaping of the fracture surface [5]

Structure of a failure analysis report:
- Object (clearly indicating the vehicle and the part under investigation)
- Introduction (presenting the Statement of Work as tasked by the customer and the reason the customer requested the tests; it also identifies the customer and should include a "background" discussion clearly relating the sample history)
- Selection of the failed parts
- Preliminary examinations
- Fractographic examinations
- Chemical analysis
- Mechanical analysis
- NDT (optional)
- Application of the simulation tools (optional)
- Simulated-service testing (optional)
- Discussion of the experimental results (discussing the significance of the experimental results, how they apply to the statement of work and how they led to the conclusions)
- Application of the fracture mechanics theories (optional)
- Determination of the fracture types
- Identification of the failure cause and the chronological events
- Conclusions
- Recommendations (optional)
- References (optional) and
- Acknowledgements (optional)

Aircraft service failures investigated by the National Aerospace Laboratory NLR

A review is given of some notable aircraft and aeroengine service failures investigated by the National Aerospace Laboratory NLR in the Netherlands. The selected failures are: the rotor blade of the helicopter Sikorsky S-61N (1974), the propeller blade of the Air Tractor AT-301 (1987) and the lever arm pin of the General Dynamics F-16 / Pratt & Whitney F100-PW-220 RCVV (1992). All three failures were initiated by corrosion.

Figure 11. Recovered blade spar fracture surface and phases of the fatigue life [6]
Figure 14. Broken propeller blade and a detail of the fracture surface [6]
2018-12-30T08:42:42.982Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "55688f96520af658df088e2da3634dc6e5d92dea", "oa_license": "CCBY", "oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/1820-0206/2015/1820-02061502045L.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "55688f96520af658df088e2da3634dc6e5d92dea", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
4024197
pes2o/s2orc
v3-fos-license
TP53 Arg72Pro, mortality after cancer, and all-cause mortality in 105,200 individuals

Rs1042522 (Arg72Pro) is a functional polymorphism of TP53. Pro72 has been associated with lower all-cause mortality and lower mortality after cancer. We hypothesized that TP53 Pro72 is associated with lower mortality after cancer, lower all-cause mortality, and with increased cancer incidence in the general population in a contemporary cohort. We genotyped 105,200 individuals aged 20–100 years from the Copenhagen General Population Study, recruited in 2003–2013, and followed them in Danish health registries. During follow-up 5,531 individuals died and 5,849 developed cancer. Hazard ratios for mortality after cancer were 1.03 (95% confidence interval: 0.93–1.15) for Arg/Pro and 0.96 (95% CI: 0.79–1.18) for Pro/Pro versus Arg/Arg. Hazard ratios for all-cause mortality were 0.99 (95% CI: 0.93–1.04) for Arg/Pro and 1.09 (95% CI: 0.98–1.21) for Pro/Pro versus Arg/Arg. Risk of cancer-specific mortality, cardiovascular mortality, and respiratory mortality was not associated with Arg72Pro genotype overall; however, in exploratory subgroup analyses, genotype-associated risks of malignant melanoma and diabetes were altered. Considering multiple comparisons, the latter findings may represent play of chance. The TP53 Arg72Pro genotype was not associated with mortality after cancer, all-cause mortality, or cancer incidence in the general population in a contemporary cohort. Our main conclusion is therefore a lack of reproducing an effect of TP53 Arg72Pro genotype on mortality.

Methods

Study population. We included 105,200 individuals aged 20–100 years from the Copenhagen General Population Study, a prospective population-based cohort study initiated in 2003 with ongoing enrolment. All Danes are given a unique number for identification at birth or immigration and are registered in the national Danish Civil Registration System. This unique identification number can then be used to follow the participants through the national registers with complete follow-up 19. Danish inhabitants of suburban Copenhagen areas were invited using the national Danish Civil Registration System. In order to minimize the risk of population stratification, only individuals of Danish descent were included. All participants filled out an extensive questionnaire on lifestyle and health, which was reviewed by the participant together with an investigator on the day of study attendance; participants also underwent a physical examination and had blood samples taken for biochemical analysis and DNA extraction. The study was approved by Herlev and Gentofte Hospital and by a Danish ethics committee, and was conducted according to the Declaration of Helsinki. Written informed consent was obtained from all participants.

Endpoints. We followed all individuals until death (n = 5,531), emigration (n = 392) or November 14, 2014, whichever came first. No individuals were lost to follow-up. Date of death or emigration was obtained from the national Danish Civil Registration System. Causes of death were obtained from the national Danish Causes of Death Registry. Records rank main and contributing causes of death as reported by a general practitioner, a hospital doctor, or a physician in a forensic or pathology department, using WHO's tenth International Classification of Diseases (ICD-10) 20.
Cause of death was defined as cancer-specific, cardiovascular, or respiratory if the highest-ranked cause of death was a diagnosis of cancer (ICD-10 C00–C97), cardiovascular disease (ICD-10 I00–I99), or respiratory disease (ICD-10 J00–J99), respectively. If the highest-ranked cause of death was none of the above, but a diagnosis was available, then the cause of death was categorized as "other". Information on the diagnosis of cancer until December 31, 2012 was obtained from the national Danish Cancer Registry, which was established in 1943. Since 1987, it has been compulsory by law for all physicians to report cancer diagnoses to the national Danish Cancer Registry 21. Cancer diagnoses were assembled into 8 groups to maximize statistical power. Gastrointestinal cancer included pharynx, esophagus, stomach, colon, rectum, liver, and pancreas cancer. Respiratory cancer included larynx and lung cancer. Urologic cancer included bladder and kidney cancer. Hematologic cancer included non-Hodgkin lymphoma, Hodgkin disease, multiple myeloma, and leukaemia. Male cancer included prostate and testis cancer. Female cancer included female breast, cervix uteri, corpus uteri, and ovary cancer. Other cancers included brain and central nervous system cancer, thyroid cancer, sarcomas, and cancer of unknown primary origin.

Genotypes. We extracted DNA from leukocytes in peripheral blood using a Qiagen blood kit for DNA extraction. Genotyping of 105,200 individuals for the TP53 rs1042522 (Arg72Pro) variant was done with a TaqMan-based assay (Applied Biosystems). Sequences of primers and probes are available upon request. Samples that failed genotyping were genotyped again, and a second time if they failed again. Thereby, 99.9% of available samples were genotyped. Control samples for correct genotyping were obtained from the Copenhagen City Heart Study, previously genotyped using a different technique 18, 22.

Covariates. Information on covariates was derived from the questionnaire, physical examination, and blood measurements recorded on the day of attendance. We defined physical inactivity as being completely physically inactive or physically active for a maximum of two hours per week. Completed higher education was defined as having a higher education of 3 years or longer. High annual household income was defined as more than 600,000 DKK per year, equivalent to 80,625 euros. Weekly alcohol intake was the self-reported number of units, converted to g/week (1 unit ≈ 12 g). Cumulative smoking in pack-years was calculated as the cumulated amount of tobacco smoked by the individual, divided by the equivalent of smoking 20 cigarettes a day for an entire year. Diabetes was defined as self-reported diabetes, use of insulin, use of oral antidiabetics, non-fasting blood glucose >11 mmol/L (198 mg/dL) on the day of examination, and/or a diagnosis of diabetes in the national Danish Patient Registry before the date of the baseline examination. Body mass index was calculated as measured weight in kilograms divided by measured height in meters squared. Plasma cholesterol was measured using a standard hospital assay.

Statistical analysis. We used Stata/SE 13.1. We used Cuzick's nonparametric test for trend to test for associations of genotype with baseline characteristics. The Hardy–Weinberg equilibrium hypothesis was tested using Pearson's χ² test.
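To illustrate the Hardy–Weinberg test used here, the sketch below computes the Pearson χ² statistic for a biallelic variant such as rs1042522, with 1 degree of freedom because the allele frequency is estimated from the data. The genotype counts are hypothetical placeholders, not the study's actual counts.

```python
from scipy.stats import chi2

def hwe_chi2(n_arg_arg, n_arg_pro, n_pro_pro):
    """Pearson chi-square test of Hardy-Weinberg equilibrium (1 df)
    for a biallelic variant, from observed genotype counts."""
    n = n_arg_arg + n_arg_pro + n_pro_pro
    p = (2 * n_arg_arg + n_arg_pro) / (2 * n)          # estimated Arg72 allele frequency
    expected = [n * p**2, 2 * n * p * (1 - p), n * (1 - p)**2]
    observed = [n_arg_arg, n_arg_pro, n_pro_pro]
    stat = sum((o - e)**2 / e for o, e in zip(observed, expected))
    # df = 3 genotype classes - 1 - 1 estimated allele frequency = 1
    return stat, chi2.sf(stat, df=1)

# Hypothetical genotype counts, for illustration only
stat, pval = hwe_chi2(55_000, 41_000, 9_200)
print(f"chi2 = {stat:.2f}, p = {pval:.3f}")
```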
Hazard ratios for mortality after cancer were calculated using Cox proportional hazards regression analysis adjusted for sex and age (with age as the underlying time scale), since Arg72Pro was not associated with any of the measured potential confounders included in the baseline characteristics in Table 1. We included all individuals who received a cancer diagnosis after the examination date and before December 31, 2012, the date at which cancer follow-up ended. In these analyses, entry was defined as the day of cancer diagnosis. For all-cause mortality, all individuals were included. Hazard ratios were calculated using Cox proportional hazards regression analysis adjusted for sex and age (as time scale), with entry on the day of examination. For risk of cancer, we used Cox proportional hazards regression analysis adjusted for sex and age (as time scale), with entry at birth or at the start of the Danish Cancer Registry for individuals born before January 1, 1943. Proportionality of hazards over time was assessed based on Schoenfeld residuals. No major violations of the proportional hazards assumption were noted. Interactions between Arg72Pro genotype and age, sex, smoking status, and body mass index, covariates that could influence the association between genotype and all-cause mortality, were tested using the likelihood-ratio test by introducing a two-factor interaction term in a model including both factors. Estimates with confidence intervals from other studies were compared with our results using the Bland–Altman test 23. A meta-analysis was conducted using the method of DerSimonian and Laird, with the estimate of heterogeneity taken from the Mantel–Haenszel model. This method takes the number of individuals per study into consideration. P-values were calculated from two-tailed analyses and nominal values are shown throughout the paper. For each display item, several exploratory analyses were performed, lowering the p-value required for significance below the conventional 0.05. Therefore, we note the Bonferroni-adjusted significance cut-off in all figure and table legends.

Mortality after cancer. 5,849 individuals developed cancer after the baseline examination, and 1,529 of these died during follow-up. The hazard ratio for mortality after cancer was 1.03 (95% confidence interval (CI): 0.93–1.15) for individuals with Arg/Pro and 0.96 (95% CI: 0.79–1.18) for Pro/Pro versus individuals with Arg/Arg (Fig. 1). When restricting follow-up to 5 years after the diagnosis of cancer, results were similar (Fig. 1).

All-cause mortality. During follow-up, 5,531 of the 105,200 individuals died. The hazard ratio for all-cause mortality was 0.99 (95% CI: 0.93–1.04) for individuals with Arg/Pro and 1.09 (95% CI: 0.98–1.21) for Pro/Pro versus individuals with Arg/Arg (Fig. 1). Results were similar when restricting the analyses to individuals 85 years and older (data not shown), as done in a previous study 17. We found no interaction of the Arg72Pro genotype with age, sex, smoking status, or body mass index on mortality risk (data not shown). In a meta-analysis of the results from this and the published studies on Arg72Pro and all-cause mortality, the estimates were heterogeneous for both the Pro/Pro versus Arg/Arg (I² = 84%, p = 0.002) and the Arg/Pro versus Arg/Arg (I² = 79%, p = 0.03) analyses. Overall, there was no association between Arg72Pro genotype and all-cause mortality (Fig. 2).
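The random-effects pooling referred to above (the method of DerSimonian and Laird) can be sketched as follows. This is a generic illustration of the method, not the paper's actual computation, and the study-level log hazard ratios and standard errors below are hypothetical placeholders.

```python
import math

def dersimonian_laird(log_hrs, ses):
    """Random-effects pooling of log hazard ratios (DerSimonian-Laird).
    Returns the pooled HR with 95% CI and the I^2 heterogeneity statistic."""
    v = [s**2 for s in ses]
    w = [1 / vi for vi in v]                            # inverse-variance (fixed-effect) weights
    y_fe = sum(wi * yi for wi, yi in zip(w, log_hrs)) / sum(w)
    Q = sum(wi * (yi - y_fe)**2 for wi, yi in zip(w, log_hrs))
    k = len(log_hrs)
    tau2 = max(0.0, (Q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    w_re = [1 / (vi + tau2) for vi in v]                # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, log_hrs)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0
    hr = math.exp(y_re)
    ci = (math.exp(y_re - 1.96 * se_re), math.exp(y_re + 1.96 * se_re))
    return hr, ci, i2

# Hypothetical study-level estimates (log HR, SE), for illustration only
hr, ci, i2 = dersimonian_laird([-0.36, -0.22, 0.09], [0.15, 0.11, 0.05])
print(f"pooled HR = {hr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), I^2 = {i2:.0f}%")
```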
When assessing cause-specific mortality, we found no association of the Arg72Pro genotype with cancer-specific mortality, cardiovascular-specific mortality, or respiratory-specific mortality (Fig. 3).

Arg72Pro and cancer incidence. For malignant melanoma, the hazard ratio was 0.87 (95% CI: 0.77–0.99) for individuals with Arg/Pro and 0.78 (95% CI: 0.60–1.01) for Pro/Pro versus individuals with Arg/Arg, with a nominally significant trend for a per-allele effect (p = 0.01); however, this was not significant after Bonferroni correction for eight tests. No other associations between Arg72Pro and cancer incidence were found (Fig. 4), nor between Arg72Pro and the incidence of sex-specific cancers (Table 2). Likewise, when stratifying on cancer subtypes, there was no association between Arg72Pro and cancer incidence after correcting for multiple comparisons (Supplementary Figure S1).

Discussion

In this study of 105,200 individuals from the general population recruited in 2003–2013, we found no difference in risk of mortality after cancer, all-cause mortality, or cancer incidence according to Arg72Pro genotype. In exploratory subgroup analyses, genotype-associated risks of malignant melanoma and diabetes were altered. Considering multiple comparisons, these findings may represent play of chance. The association between malignant melanoma and Arg72Pro genotype has been studied by several groups due to a geographical latitude gradient in allele distribution. Among South Africans, 70% of the alleles are Pro72, compared to only 23% among Western Europeans 24. This has led to the hypothesis that the Pro72 allele is protective against sunlight-induced diseases 25. However, meta-analyses of both non-melanoma skin cancer and malignant melanoma have found no association between Arg72Pro and risk of these cancers 26. The Arg72Pro polymorphism has been extensively studied as a risk factor for the development of cancer. Several meta-analyses have been conducted in various cancers including cervical, breast, skin, and head and neck cancer 8-13. However, the results have been inconsistent, with reports of potential publication bias 9, 11 and of genotype misclassification in association studies using tumor tissue as a source of genotyping material 15. No genome-wide association study has found an association between Arg72Pro and risk of a cancer 27. In a recent meta-analysis, the Arg72 allele of Arg72Pro was associated with increased odds of type 2 diabetes 28,29. Although we did find an association between diabetes and genotype at the baseline examination in women only, this association did not remain after correction for multiple comparisons, suggesting that the association may represent play of chance. Two human cohort studies have evaluated the association between all-cause mortality and the Arg72Pro genotype. A Dutch study of individuals aged 85 years or older, the Leiden 85-plus Study including 1,226 individuals, reported a higher survival ratio for Pro/Pro versus Arg/Arg homozygotes, but also a higher proportion of deaths from cancer among Pro/Pro homozygotes 17. Similarly, a Danish study of 9,219 individuals aged 20–100 years from the Copenhagen City Heart Study reported lower mortality for both Pro/Pro and Pro/Arg versus Arg/Arg 18. This study also found increased survival after a diagnosis of any cancer for Pro/Pro versus Arg/Arg. These studies suggest a survival benefit for the Pro72 allele, possibly through a lower apoptosis-inducing potential 17,18.
However, our results do not support this. Our result on all-cause mortality in individuals recruited in 2003–2014 was indeed different from those in the two previous studies, recruited in 1987–1999 and 1991–1994 (Figs 2 and 5). The lack of association between Arg72Pro and all-cause mortality observed in the present study could indicate a gene-environment interaction, with a change in the effect of Pro72 on mortality over calendar time. The disappearing association with reduced all-cause mortality could be caused by more effective prevention or treatment of factors limiting the longevity of Arg72 homozygotes. Additionally, secular trends in mortality as well as in lifestyle and environmental changes could interact with the Arg72Pro genotype and eliminate a survival effect of the Pro72 allele; possible mechanisms may be examined in future studies. Changes in cancer treatment and cancer survival in Denmark during the past twenty years could have led to a diminished effect of Arg72Pro on mortality after cancer 30,31. In support of this, a German study with 463 cases and 563 controls found a possible gene-environment interaction between Arg72Pro and the preventive effect of nonsteroidal anti-inflammatory drugs (NSAID) on the risk of colon cancer; Pro72 carriers had a low risk regardless of NSAID use, while Arg72 homozygotes had a low risk only when taking NSAID 32. Recent trends in the use of NSAID and aspirin show increasing usage of ibuprofen and low-dose aspirin 33. This could be one example of an interaction between Arg72Pro genotype and environment, an interaction that may have changed over time and that could influence mortality.

[Figure legend: Hazard ratios were adjusted for sex and age. Follow-up started on the day of birth or, for individuals born before 1943, at the start of the national Danish Cancer Registry, and ended at cancer diagnosis, death, emigration, or December 31, 2012, whichever came first. p-values for trend were calculated with Cox regression using Arg72Pro genotype as a continuous variable, and the nominal values are shown without prior adjustment for multiple comparisons. NS: insignificant after Bonferroni correction for 8 tests; required p-value = 0.006. CI: confidence interval. HR: hazard ratio.]

Although the concept of a secular trend between the Arg72Pro polymorphism and all-cause mortality is intriguing, a simpler theoretical explanation may be that many of the studies on TP53 Arg72Pro and cancer incidence did actually examine the associations with all-cause mortality, but did not publish a lack of association. This in turn could create selective reporting and hence publication bias of the association. Such a possibility, together with the effect of genotype misclassification and genotyping errors in candidate gene studies 14,16, might explain the discrepancies between previous studies and our findings. As only two other studies have published on the association between the Arg72Pro polymorphism and all-cause mortality 17, 18, we were not able to statistically evaluate the hypothesis of a publication bias. That said, a meta-analysis combining the results from the two earlier studies with ours showed no association between Arg72Pro genotype and all-cause mortality (Fig. 2). Finally, we cannot exclude the possibility that the earlier findings, or the present ones, arose from play of chance.
This study is by far the largest cohort study examining the effect of Arg72Pro on all-cause mortality and mortality after cancer; however, some limitations should be considered. First, since our study population consists solely of individuals of Danish descent, our findings might not apply to other ethnicities; however, this also minimizes the risk of population stratification. Second, we were not able to evaluate gene-gene interactions, such as with the mouse double minute 2 (MDM2) promoter SNP 309, and hence cannot exclude the existence of an interaction. Also, although we include a large number of individuals, 11,316 cancer events, and 5,531 deaths, we cannot rule out an effect of Arg72Pro too small for our study to detect. Additionally, the use of Bonferroni correction diminishes the likelihood of chance findings (type I errors) due to multiple testing, but also increases the risk of ignoring important associations (type II errors). Hence some associations displayed in Table 1, Table 2, Fig. 4 and Supplementary Figure S1 that did not meet the required level of significance after Bonferroni correction but did meet the conventional level of significance (p-value < 0.05) could represent true associations. Importantly, however, these findings need validation in future studies.

In conclusion, the TP53 Arg72Pro genotype was not associated with lower mortality after cancer, lower all-cause mortality, or cancer incidence in the general population in a contemporary cohort. Our main conclusion is therefore a lack of reproducing an effect of TP53 Arg72Pro genotype on mortality.

Data availability. Individual participant data from the Copenhagen General Population Study are subject to protection by the national Danish Data Protection Agency and we are not allowed to share the data. However, interested researchers can contact members of the Copenhagen General Population Study steering committee (http://binanic.com/CGPS/Contacts.htm) to request limited data access. Additional data are available upon request, and requests may be made to the corresponding author.

Figure 5. Hazard ratios for all-cause mortality according to TP53 Arg72Pro genotype (Pro/Pro vs. Arg/Arg) in 3 longitudinal studies. Hazard ratios for death are adjusted for sex and age. Follow-up started at the time of recruitment and ended at death, emigration or study termination, whichever came first. The boxes cover the period from the first recruitments until the end of follow-up, and hazard ratios with 95% confidence intervals are shown at the end of follow-up. CCHS: Copenhagen City Heart Study. CGPS: Copenhagen General Population Study. CI: confidence interval. HR: hazard ratio. Leiden-85: The Leiden 85-plus Study.
2018-03-23T13:15:02.113Z
2017-03-23T00:00:00.000
{ "year": 2017, "sha1": "53d400db0209eb7832e848d3a324145c3a18120a", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-017-00427-x.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bb5c149ece2472cab2d0cb7ba83966a6c3f61466", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
258052034
pes2o/s2orc
v3-fos-license
Sample size calculations for indirect standardization

Indirect standardization, and its associated parameter the standardized incidence ratio, is a commonly used tool in hospital profiling for comparing the incidence of negative outcomes between an index hospital and a larger population of reference hospitals, while adjusting for confounding covariates. In statistical inference on the standardized incidence ratio, traditional methods often assume the covariate distribution of the index hospital to be known. This assumption severely compromises one's ability to compute the sample sizes required for high-powered indirect standardization, since in contexts where sample size calculation is desired there are usually no means of knowing this distribution. This paper presents novel statistical methodology to perform sample size calculation for the standardized incidence ratio without knowing the covariate distribution of the index hospital and without collecting information from the index hospital to estimate this covariate distribution. We apply our methods in simulation studies and to real hospitals, to assess both their capabilities in a vacuum and in comparison to the traditional assumptions of indirect standardization.

Supplementary Information The online version contains supplementary material available at 10.1186/s12874-023-01912-w.

Introduction

Indirect standardization is an important tool for assessing the performance of a hospital (i.e., hospital profiling) compared to other hospitals in a wider population. This assessment is done by studying the incidence or prevalence of a (usually negative) binary outcome while adjusting for variables outside the hospital's control which may confound comparison with other hospitals. For example, in the field of computed tomography (CT), there is currently a significant movement to standardize or optimize best practices between hospitals, especially with regard to radiation dosage, for safety quality assurance [1-4]. One of the most basic outcomes of interest in this movement is the incidence or prevalence of CT exams determined to be "high dose" in a hospital. Comparison of this number between hospitals, however, must take into account the part of the body being scanned and the size of the patient being scanned, both of which have (a) a high impact on whether a high dose is acceptable and (b) highly variable distributions from hospital to hospital. Indirect standardization makes this comparison by studying the standardized incidence ratio (SIR), computed by dividing the observed incidence (or prevalence) of high dose exams in an index hospital by the expected incidence (or prevalence) of high dose exams if a wider population of reference hospitals shared the distribution of body part scanned and patient size seen in the index hospital [5]. The index hospital can then be considered "performing badly" if its SIR is substantially greater than 1, or equivalently if its observed incidence of "high dose" exams substantially exceeds the expected incidence of "high dose" exams. The utility of this methodology in hospital profiling is well-established [6-8]. Traditional methods of inference on this ratio view the denominator as fixed, modeling all uncertainty in its estimation as a consequence of uncertainty in the numerator [9-12]. The justifications for this assumption are numerous and multi-layered, but ultimately inadequate in a variety of circumstances [13-15].
They are especially inadequate when attempting to compute the sample size necessary to perform indirect standardization, as in such a context it's usually the case that no data (or very little data) has thus far been observed from the hospital of interest. The requirement of all or most of an index hospital's data to perform indirect standardization can have severe consequences on how long it takes to profile a hospital or whether the profiling is done at all, as such a requirement not only presents logistical issues, but may also breach hospital policies on data-sharing and patient confidentiality [16]. The demand for an overall assessment of a hospital's radiation dosage still persists, however, and the problem must be approached using novel methods.

In this paper, we explore the assumptions made when performing traditional inference on the SIR, explain how such assumptions can be inappropriate to our hospital profiling problem, especially in the context of sample size calculation, and present an alternative novel approach to SIR hypothesis testing that addresses the issues of traditional methods. We present a means of sample size calculation under this new approach, estimating how many exams are needed from the index hospital to consistently detect abnormally high rates of high dose exams. Our sample size calculation methods are tested with an application using 157 example hospitals from which we have complete data, showing that sample sizes computed using our method are sufficient but not excessive to achieve desired type I and type II error rates. We also apply our methods to simulated hospitals, comparing the performance of our novel method to methods using assumptions associated with traditional indirect standardization.

Methodology

We begin by describing our problem in the mathematical terms associated with traditional indirect standardization, then apply the language to our hospital profiling problem.

Description of traditional indirect standardization and its short-comings

We assess the quality of a population of interest, the index, by studying a dichotomous outcome Y, controlled for a categorical predictive covariate X, which takes the values 1, ..., J. This categorical predictive covariate can denote a single variable, or a list of all combinations of the levels of multiple categorical variables. In the second case, the "distribution of X" is equivalent to the joint distribution of all constituent variables that form X. Define the SIR as

θ(Λ, p, q) = q / Σ_{j=1}^{J} λ_j p_j,

where q is the prevalence of the outcome in the population of interest, p = {p_1, ..., p_J} is the distribution of X in the population of interest, and Λ = {λ_1, ..., λ_J} collects the prevalences of the outcome within each category of X in the reference population.

Which constituencies of this ratio can be viewed as known exactly, and which must be viewed as uncertain estimates, depends on what we mean exactly by "population of interest". We begin by describing the interpretations in traditional indirect standardization [9-12]. Vector Λ is viewed as known exactly, as indirect standardization assumes that the reference population has significantly greater sample size (or validity) than the index population, to a degree that any uncertainties in estimations made using information from the reference data (like Λ) are eclipsed by uncertainties in estimates using the index population. Vector p is also viewed as known exactly. Sometimes this viewpoint is motivated by the same high sample size assumptions made with Λ. Other times this viewpoint is a consequence of the "population of interest" being defined specifically by a collection of already-observed data points, as opposed to a population which has not been observed entirely, but from which we have sampled data.
That is, using the language of hospital profiling, the "population of interest" would not be the index hospital itself, but a specific set of observed patients from said hospital. The estimated SIR, in such a case, would describe the quality of the index hospital's care for the observed patients, rather than its overall quality of care. Under such traditional assumptions, the distribution of X naturally does not need to be estimated.

Value q is viewed as unknown. Even in cases where p is known, the purpose of the SIR is to describe the underlying mechanisms that the population of interest uses to achieve its outcome prevalence. Such mechanisms may not be deterministic, even when p is known. In the context of our hospital profiling problem, this refers to the fact that radiation dosage is highly variable even when physically identical patients are scanned at the same hospital in the same anatomic area, due to a combination of inconsistencies in the execution of radiological protocols and the intrinsic randomness of radiation dosage.

Our hospital profiling problem can mostly follow the same standards on which components of the SIR to view as known and unknown, except in the case of p, which we cannot view as being known exactly. In the context of sample size calculation, the reason for this is clear: we've never observed any data from the index hospital. Given sufficiently generous resources, it may be possible to pursue some preliminary study of p to construct some anticipation of its true value. In fact, if the goal were only to construct a confidence interval for the SIR after collection of data, the denominator of the SIR may be estimated (with uncertainty) using the same collected data meant to estimate q, and literature exists to quantify said uncertainty in various respects [13-15]. However, even when such preliminary studies for sample size calculation are logistically possible (which itself is unlikely), they are unlikely to acquire the covariate distribution of the entire index hospital, leaving us with an estimate of p that uses a sample of the index hospital's patients, even though the population of interest is the entire index hospital. This problem usually persists even after data collection for the "main" analysis, as due to a variety of logistical and legal issues [16], it is possible that the collected data would not be a census of all exams performed at the index hospital.

Proposed solution

Our hospital profiling problem seeks to compute the sample size necessary to detect hospitals with substantially more cases of high-dose exams than expected; that is, we seek to detect hospitals with SIR substantially higher than 1. This will be done using two mathematical statements, the proofs of which may be found in the appendix. All notation in this section is identical to that described in the "Description of traditional indirect standardization and its short-comings" section.

Lemma 1. In an arbitrary index hospital, let θ(Λ, p, q) denote its SIR with respect to a reference population. Let q̂ and p̂ = {p̂_1, ..., p̂_J} respectively denote estimated values for q and p = {p_1, ..., p_J}, computed using the observed prevalences of the outcome and of each category of the predictive covariate, respectively, in a sample of n individuals from the index hospital. The estimator θ̂(Λ, p̂, q̂) for θ has the following asymptotic distribution:

√n (θ̂(Λ, p̂, q̂) − θ(Λ, p, q)) →_d N(0, ∇θᵀ Σ ∇θ),   (1)

where Σ is the asymptotic covariance matrix of {q̂, p̂_1, ..., p̂_J}, whose p-block equals D_p − p pᵀ, D_p is a diagonal matrix with elements p, and ∇θ is the gradient of θ(Λ, p, q) with respect to {q, p_1, ..., p_J}.
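To make the estimator of Lemma 1 concrete, the sketch below computes the plug-in SIR estimate θ̂ = q̂ / Σ_j λ_j p̂_j from a sample of index-hospital exams. It is a generic illustration, not the paper's code: the number of covariate categories, the reference rates and the simulated data are all hypothetical placeholders.

```python
import numpy as np

def sir_estimate(x, y, lam):
    """Plug-in SIR estimate: q-hat divided by sum_j lam_j * p-hat_j.
    x   : array of covariate categories (0..J-1) for n sampled index exams
    y   : array of 0/1 outcomes (1 = high dose) for the same exams
    lam : length-J array of reference-population outcome rates per category
    """
    n, J = len(x), len(lam)
    q_hat = y.mean()                             # observed prevalence in the index sample
    p_hat = np.bincount(x, minlength=J) / n      # estimated covariate distribution
    return q_hat / np.dot(lam, p_hat)            # observed / expected

# Hypothetical data: 3 covariate categories, with index rates slightly
# above the assumed reference rates of 10%, 25% and 40%.
rng = np.random.default_rng(0)
x = rng.integers(0, 3, size=600)
y = rng.binomial(1, np.array([0.12, 0.30, 0.48])[x])
print(f"SIR estimate: {sir_estimate(x, y, np.array([0.10, 0.25, 0.40])):.2f}")
```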
Using the notation of this lemma, denote σ² = ∇θᵀ Σ ∇θ. Note that, by Eq. 1, the value of σ² can be determined by the values of θ(Λ, p, q) and p when Λ is known. Thus, we will alternatively denote this value as σ²(θ, p).

Theorem 1. In an arbitrary hospital, let θ(Λ, p, q) denote its SIR with respect to a reference population consisting of I reference hospitals. Let θ̂(Λ, p̂, q̂) be the estimator for θ(Λ, p, q) described in Lemma 1. Consider a hypothesis test with null hypothesis θ = 1 and alternate hypothesis θ > 1. If the null is rejected when

θ̂ > 1 + z_{1−α} σ(1, p̂) / √n,

the power (1 − β) to detect a θ of at least 1 + δ, while allowing for a type I error rate of α, can be described by the following equation:

1 − β = (1/I) Σ_{i=1}^{I} Φ( [√n δ − z_{1−α} σ(1, p⁽ⁱ⁾)] / σ(1 + δ, p⁽ⁱ⁾) ),   (5)

where p⁽ⁱ⁾ is the covariate distribution of the i-th reference hospital, Φ is the cumulative density function of the standard normal distribution, and z_{1−α} is the value of Φ⁻¹ at 1 − α.

Equation 5 describes a monotonic relationship between β and n, allowing us, for fixed values of α, β, and δ, to easily compute n through a variety of existing univariate root-finding algorithms (for example, [17]). Of note is the fact that Eq. 5 does not contain any information from the index hospital. This is especially important in the context of sample size calculation, where data from the target population are typically unavailable. One way to address this issue would be to assume that σ²(θ, p) simply takes whatever value would result in the highest required sample size to achieve the desired power. However, the value of p which would maximize σ²(θ, p) may be unlikely to occur in real life, leading this approach to demand an unnecessarily high sample size. Another approach would be to simply use the overall covariate distribution of the reference population as an estimator for p. This approach, however, assumes a stable covariate distribution across hospitals. Alas, this is not the case. Covariate distributions vary substantially across hospitals, and we must account for the uncertainty accordingly. Thus, we believe, and intend to show, that Eq. 5 presents the best means of sample size calculation in the (highly likely) event that one has no information about the index hospital for which one is performing sample size calculations.

Simulation study

We evaluated our proposed methodology by testing whether the computed required sample size can identify high SIR values. This is done by simulating fictional hospitals on the basis of real-life, observed hospitals from the University of California, San Francisco International Dose Registry (hereafter the UCSF Registry).

Description of data

The UCSF Registry is a multi-site collaborative dataset containing nearly all (2,319,449) consecutive adult computed tomography exams from 157 hospitals performed between November 1, 2015 and January 30, 2018, including 850,701 abdomen exams, 607,593 chest exams, 86,654 combined abdomen-chest exams, and 774,501 head exams. These hospitals include public, private, academic, and non-academic institutions from a variety of localities in Europe, Japan, and throughout the United States, representing very diverse demographics and radiological practices. At the time the UCSF Registry was made available for use in this paper, three of its constituent hospitals were identified as incomplete or possibly erroneous. These three hospitals (totaling only 25 examinations) were removed from consideration for this paper.
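Tying Eq. 5 to the root-finding step mentioned above, a minimal sketch of the sample size computation might look as follows. This is an illustration, not the paper's implementation: σ(θ, p) is given a closed form under the simplifying assumption that q̂ and p̂ are independent, the power expression mirrors the averaged form of Eq. 5, and the reference rates and covariate distributions are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def sigma(theta, p, lam):
    """Asymptotic SD of the SIR estimator via the delta method, under a
    simplifying assumption of independence between q-hat and p-hat."""
    E = np.dot(lam, p)                   # expected prevalence given covariate mix p
    q = theta * E                        # outcome prevalence implied by SIR = theta
    var = (q * (1 - q) + theta**2 * (np.dot(lam**2, p) - E**2)) / E**2
    return np.sqrt(var)

def power(n, delta, alpha, lam, ref_ps):
    """Power averaged over the reference hospitals' covariate distributions."""
    z = norm.ppf(1 - alpha)
    return np.mean([norm.cdf((np.sqrt(n) * delta - z * sigma(1.0, p, lam))
                             / sigma(1.0 + delta, p, lam)) for p in ref_ps])

def required_n(delta, alpha, beta, lam, ref_ps):
    """Smallest n with power >= 1 - beta, found with Brent's method."""
    f = lambda n: power(n, delta, alpha, lam, ref_ps) - (1 - beta)
    return int(np.ceil(brentq(f, 2, 1e6)))

# Hypothetical reference data: 4 covariate categories, 10 reference hospitals
rng = np.random.default_rng(1)
lam = np.array([0.07, 0.20, 0.35, 0.51])                  # reference high-dose rates
ref_ps = [rng.dirichlet(np.ones(4)) for _ in range(10)]   # covariate distributions
print("required n:", required_n(delta=0.2, alpha=0.05, beta=0.2, lam=lam, ref_ps=ref_ps))
```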
To evaluate one aspect of the quality of these radiological practices, we perform indirect standardization on the hospitals, with the outcome of interest being whether an exam has a high radiation dosage. This is measured by observing whether each exam has a dose value (specifically dose length product or DLP) above a value predetermined to be high for the anatomic area. These values are 1160 mGy-cm (milligray-centimeters) for abdomen exams, 660 mGy-cm for chest exams, 1580 mGy-cm for combined abdomen-chest exams, and 1060 mGy-cm for head exams. Evaluation of this outcome is controlled for by two categorical variables: the aforementioned anatomic area scanned, and the "size category" of the anatomic area scanned, denoting whether the body part is very small, small, large, or very large, determined by the diameter of the body part scanned. These two categorical variables are collapsed into one for the purposes of indirect standardization, in the manner described at the beginning of the Methodology section. The expected prevalence of high dose within each combination of anatomic area and size category is computed by taking the observed prevalence among all exams in the UCSF Registry. This produced highly variable prevalences, with a 7% probability of high dose for the smallest patients undergoing combined abdomen-chest exams and a 51% probability of high dose for the largest patients undergoing abdomen exams. The high impact of anatomic area and patient size category on dose suggests a need to control for their distributions in hospital profiling.

The between-hospital variance of high dose prevalence is high, ranging from 0% in the best-performing hospital to 75% in the worst-performing hospital. The between-hospital variance does not disappear after controlling for anatomic area scanned and size category, with SIR values ranging from 0 in the best-performing hospital to 3.0 in the worst-performing hospital. While this wide range of observed SIRs helps illustrate the benefits hospital profiling and standardization of radiological practice can provide, it does not help assess our proposed sample size calculation methodology. In the context of our hospital profiling problem, there is little clinical interest in identifying hospitals with low SIR (for example, SIR below 1.1), as their doses are low enough that they do not need help optimizing their radiological practices. There is also little reason to power a hypothesis test to detect hospitals with very high SIR (above 1.5), because while we do wish to detect hospitals of this kind, we also expect such hospitals to be very easy to detect, regardless of the statistical methodology used. Thus, we evaluated our proposed methodology under the hypothetical scenario of comparing a null hypothesis of SIR = 1 to a minimal detectable alternate hypothesis of SIR = 1.2. These are the extreme values for which our selected type I and type II error rates are meant to apply, and our methods cannot be viewed as successful unless error rates fall below the target values even at these values of the true SIR. Neither of these two exact values, however, was observed among the true SIR values of the example hospitals. We thus simulate a new set of index hospitals so that the behavior of our methods can be evaluated under these circumstances of disproportionate clinical interest.

Description of simulation procedure

Our simulation procedure is a five-step process:

1. Hospitals in the UCSF Registry are randomly separated into two groups.
The first group, consisting of 103 hospitals, serves as the "reference population", while the remaining 51 hospitals serve as the basis upon which fictional index hospitals are simulated; we refer to these 51 hospitals as "base index hospitals".

2. For each base index hospital, we construct 11 "simulated index hospitals". These 11 simulated index hospitals have a covariate distribution identical to that of their corresponding base index hospital, but with the number of high dose exams adjusted to achieve one of 11 pre-selected SIR values. These 11 SIR values form a sequence starting at 0.5, ending at 1.5, and increasing in increments of 0.1.

3. We compute the minimal sample size required to detect an SIR of 1.2 using our proposed methodology.

4. For each simulated index hospital, we sample a number of data points equal to the minimal required sample size. We then conduct the testing necessary to compare a null hypothesis of SIR = 1 to an alternate hypothesis of SIR > 1.

5. For each simulated index hospital, we repeat the previous step 1000 times, letting us compute the simulated hospital's type I error rate if the null hypothesis is true, and its type II error rate if the null hypothesis is not true.

The precise means by which we construct the simulated index hospitals described in step 2 of the simulation procedure can be found in the appendix. Specifically for the SIR values 1.0 and 1.2, we also perform steps 3-5 of this simulation two more times, using more traditional models of the SIR rather than our methodology. The two more traditional approaches are:

1. Fixed denominator from index sample: we assume the SIR denominator to be known after preliminarily sampling 100 data points from the index hospital, allowing for an estimate of the covariate distribution. This method was described at the end of the "Description of traditional indirect standardization and its short-comings" section.

2. Fixed denominator from reference mean: we assume the SIR denominator to be known and equal to the covariate distribution of the overall reference population. This method was described at the end of the "Proposed solution" section.

We conclude by comparing the type I error rates and type II error rates produced by our methodology with those of the two more traditional approaches.

Simulation results

According to our proposed methodology, 613 exams need to be sampled from an index hospital to detect an SIR of 1.2 with 80% power. We see in Table 1 that, using our proposed methodology, simulated hospitals with SIR value 1.2 have an average type II error rate of 20%, matching the target of 20%. The average type I error rate for simulated hospitals with SIR value 1 is 2.5%, lower than the target of 5%.

At all other SIR values, simulated hospitals performed as expected. From Fig. 1, we see that hospitals with SIR less than 1 typically had a type I error rate lower than 5%, while those with SIR greater than 1.2 typically had a type II error rate lower than 20%. Hospitals with an SIR of 1.1 were typically undetected, though this is expected given our choice of minimal detectable alternate hypothesis.

Lastly, we compare our methods to traditional indirect standardization, which assumes the denominator of the SIR to be known exactly. According to the methodology assuming a fixed denominator from an index sample, the required sample size ranges from 438 to 989 exams, depending on the index hospital in question, with a median of 610 and an interquartile range of 75 (577-652).
Among the simulated index hospitals, 51% required a sample size below 613 according to this fixed denominator method. As a result, this method is often less capable of detecting SIR values modestly higher than one: it had higher type II error rates than our method for 90% of simulated hospitals with true SIR 1.2. Viewing the SIR denominator as known also means that any inaccuracies in its estimation are more likely to carry over to errors in inference. As a consequence, the type I error rates of the traditional method also fall short of expectations, despite type I error usually decreasing when type II error increases: the traditional method has a higher type I error rate than our method for 73% of simulated hospitals with true SIR 1 (Fig. 2).

According to the methodology assuming a fixed denominator from the reference mean, the required sample size is 610. This traditional method also underperforms compared to our proposed method most of the time. Among simulated hospitals with true SIR of 1.2, it had a higher type II error rate than our proposed method 88% of the time. Among simulated hospitals with true SIR of 1, it had a higher type I error rate 86% of the time (Fig. 2).

Application to real data

Lastly, to see the performance of our methodology on real data, we re-apply our methods to index hospitals drawn directly from the UCSF Registry, rather than to hospitals simulated using the UCSF Registry as a base. For this exercise, we wish the "base index hospitals" to represent a wide range of SIR values. To achieve this, we consider six categories of SIR values: <0.9, 0.9-0.95, 0.95-1, 1-1.25, 1.25-1.5, >1.5. From each category, we sample either 2 hospitals or 1/3 of all available hospitals in the category to serve as index hospitals, whichever number is higher. The resultant counts of index and reference hospitals in each SIR category are detailed in Table 2.

Using the resulting set of reference hospitals, our methodology computes that a sample size of 615 is required to detect an SIR of 1.2 with 80% power. Just as in the simulations, for each index hospital we sample 615 patients 1000 times to assess the type I error (for index hospitals with true SIR ≤ 1) or type II error (for index hospitals with true SIR > 1) of our methodology. According to Fig. 3, these expectations were met. Type I error rates fell below 5% for all hospitals with true SIR less than 1. Type II error rates fell below 20% for all hospitals with true SIR greater than 1.2.

Discussion and conclusion

The ability to compare one's own performance with the performance of other hospitals is an extremely important component of hospital profiling. To do this in a nuanced manner that controls for confounding variables, however, involves one hospital sharing information with another to a degree which may not be logistically feasible, or may require navigating legal and policy issues that, at best, significantly slow down the process and, at worst, render it impossible. Thus, as much as possible, it is of great merit to reduce the amount of information that needs to be shared, and finding the minimal sample size necessary for a proper confounder-adjusted comparison is key to addressing this problem. We provide a method of calculating this minimal sample size without requiring the same information as traditional methods. Indeed, we do not require any information from the index hospital.
When conducting simulated sample size calculations using traditional assumptions of the SIR, we were very generous in the resources theoretically considered available. Specifically, what is described as the "fixed denominator from index sample" approach is highly infeasible to apply in practice, as few hospitals would be willing to engage in the circular exercise of providing a sample of data to a statistical collaborator for the purpose of finding out how much data needs to be sampled. Despite this, our proposed method has been shown to work better than traditional methods. The sample sizes required in our example application also seem modest enough to support a small, easy-to-use web application which could provide hospital profiling services in seconds without excessive communication between the parties being compared. Development of this web application is the ultimate goal to which this paper hopes to contribute. We hope for this web application to contribute to an expansion of hospital profiling and ultimately to an increase in the quality of patient care. While the motivating medical problem of this paper lies in the realm of optimizing radiological practices, the applications of indirect standardization are broad, extending to domains such as cardiology [7], pulmonology [8], demography [6], and many others. We expect the methods presented in this paper to be applicable in many domains beyond their original intent.
LMNA functions as an oncogene in hepatocellular carcinoma by regulating the proliferation and migration ability

Abstract
The role of the LMNA gene in the development and progression of hepatocellular carcinoma (HCC) and the associated molecular mechanism is not yet clear. Therefore, the purpose of this study was to evaluate the relationship between LMNA and HCC. LMNA gene expression in normal tissues and corresponding tumours was evaluated, and Kaplan-Meier survival analysis was performed. Next, the LMNA gene was knocked out in the 293T and HepG2 cell lines using the CRISPR/Cas9 technique. Subsequently, the proliferation, migration and colony formation rate of the two LMNA knockout cell lines were analysed. Finally, the molecular mechanism affecting tumorigenesis due to the loss of the LMNA gene was evaluated. The results showed that the LMNA gene was abnormally expressed in many tumours, and the survival rate of HCC patients with a high expression of the LMNA gene was significantly reduced compared with the rate in patients with a low LMNA expression. The knockout of the LMNA gene in the HCC cell line HepG2 resulted in a decreased tumorigenicity, up-regulation of P16 expression and down-regulation of CDK1 expression. These findings suggested that LMNA might function as an oncogene in HCC and provided a potential new target for the diagnosis and treatment of HCC.

primary liver cancer. In the early stage of liver cancer, the clinical symptoms of the patients are not evident, although the disease develops quite rapidly. Generally, a correct diagnosis is reached when the liver cancer is in its late stage, by which point it poses a greater threat to the life and health of patients and carries a higher probability of death. 4,5 Due to its complexity, recurrence, metastasis and heterogeneity, HCC is one of the deadliest cancers even after surgical resection. 6 Like other cancers, HCC is also characterized by abnormal gene expression. The LMNA gene encodes the two main isoforms lamin A and lamin C. Lamins are structural proteins forming the nuclear lamina, a layer lining the inner nuclear membrane that determines the nuclear shape and size. Three types of lamins, A, B and C, have been previously described in mammalian cells. 7 Lamin B1 and vimentin were the main overexpressed proteins in liver cancer tissues. 8 Thus, lamin B1 and vimentin in the blood could be used as novel biomarkers for early HCC and can be detected by non-invasive methods. 9 Lamin A expression varies in a variety of tumour cells. Its expression decreases in breast, prostate, colon, ovarian, gastric and endometrial cancer, leading to a reduction in overall survival and an increase in the number of metastatic sites and tumour recurrence. [10][11][12][13] In contrast, some studies revealed a link between increased lamin A expression and the development and progression of colorectal cancer, prostate cancer and ovarian cancer. [14][15][16] However, the role of the LMNA gene in the development and progression of HCC and the associated molecular mechanism remains unknown. Therefore, the purpose of this study was to evaluate the relationship between LMNA and HCC. The LMNA gene was knocked out in 293T and HepG2 cell lines by the CRISPR/Cas9 technology. Subsequently, the proliferation, migration and colony formation rate of the two LMNA knockout cell lines were analysed, and the tumorigenicity in vivo was tested in a subcutaneous tumour mouse model.
Finally, the molecular mechanism affecting tumorigenesis due to the loss of the LMNA gene in the 293T and HepG2 cells was explored.

| Ethical guidelines
All the experiments in this study were performed according to the guidelines of the Experimentation Ethics Committee of the Guilin Medical University, China (approval No. 2019-0008).

| Bioinformatic analysis
Data related to the LMNA gene expression in various normal tissues and corresponding tumours were collected from the ProteomicsDB, MaxQB and MOPED databases. The Kaplan-Meier survival analysis was performed with the Kaplan-Meier plotter (http://kmplot.com/analysis).

| Cell lines and culture conditions
The human liver cancer cell line (HepG2) and the HEK 293 kidney cell line expressing a mutant version of the SV40 large T antigen (293T) were stored in our laboratory. Cells were routinely cultured in Dulbecco's Modified Eagle's Medium (DMEM; Gibco, CA, USA) supplemented with 10% foetal calf serum (Gibco, CA, USA) and incubated at 37°C in a 5% CO₂ atmosphere. They were cultured until reaching 50% to 80% confluence before the next passage or further experiments.

| gRNA design
The gRNA was designed using the Massachusetts Institute of Technology (MIT) CRISPR design tool.

| CRISPR/Cas9 technique
The resultant gRNA sense and antisense strands were annealed to form a double-stranded oligonucleotide. The reaction system was the following: 1 μl of each sense and antisense strand, T4 ligase buffer (10×) 1 μl, T4 polynucleotide kinase (PNK) 0.5 μl, double-distilled water 6.5 μl; the mixture was held at 95°C for 5 minutes and left to cool down to 16°C over 10 minutes. Then, the BbsI endonuclease was used to digest PX459 at 4°C overnight, and the digested plasmid was recovered by electrophoresis. The double-stranded oligonucleotide and the digested plasmid were ligated using T4 ligase. The reaction system was as follows: PX459 plasmid 50 ng, double-stranded oligonucleotide 2 μl, T4 ligase 1.5 μl, T4 ligase buffer 1.5 μl, and double-distilled water to a final volume of 15 μl. The reaction procedure was as follows: after combining the double-stranded oligonucleotide and the T4 ligase at 16°C for 10 minutes, the reaction system was left overnight at 4°C. A 5-10 μl aliquot of the ligation product was added to 50 μl of DH5α competent cells, which were gently mixed and kept on ice for 30 minutes, heat-shocked at 42°C for 90 seconds, left on ice for 2 minutes, and then directly plated on an LB plate containing ampicillin. The plate was incubated overnight at 37°C, 3-5 white bacterial colonies were selected for culture, and plasmid DNA was extracted for sequencing verification and amplification. The amplified plasmid DNA was collected and stored at -20°C. The sequencing primers were the following:

Two million cells were added to each well in a 6-well plate and washed with PBS twice after 12 hours. The Lipofectamine 3000 transfection reagent (10 μl, Invitrogen, USA) was mixed with 250 μl serum-free medium and left at room temperature for 5 minutes. Meanwhile, 5 μl recombinant plasmid (or exogenous LMNA-expressing plasmid) was mixed with 250 μl serum-free medium and left at room temperature for 5 minutes. Then, the two liquids were thoroughly mixed and left at room temperature for 20 minutes. Finally, the mixture was transferred into the culture plate.
The complete culture medium was changed 6-8 hours after transfection, fresh complete culture medium was added at 24 hours after transfection, and puromycin was added for selection (working concentration: 2.0 μg/mL for 293T and 1.5 μg/mL for HepG2). The selected cells were cultured for subsequent experiments.

| Immunofluorescence
The cells were immersed in PBS and seeded on glass slides at a concentration of 1 × 10⁴/ml. The slides were fixed in 4% paraformaldehyde.

| Western blot
RIPA lysis buffer (plus PMSF) was used to lyse the cells to extract the total proteins, which were separated by polyacrylamide gel electrophoresis and transferred onto a PVDF membrane. After membrane blocking, the corresponding primary antibodies (1:1000, rabbit anti-human, anti-lamin A/lamin C/β-actin, Abcam, USA, and anti-P16/CDK1/MMP2/MMP9, BOSTER, China) were added to the membrane, which was incubated overnight at 4°C. Next, the secondary antibody (1:8000, mouse anti-rabbit antibody, Abcam, USA) was added to the membrane, which was incubated at room temperature for 1 hour. Finally, the bands were imaged using a gel imaging system.

| CCK-8 cell growth assay
The 293T and HepG2 wild-type and knockout cell lines were trypsinized, seeded into 96-well plates at a concentration of 1500 cells per well in triplicate, and cultured at 37°C. Cell Counting Kit-8 (KeyGen, China) was added after 18 hours and incubated for 1 hour. The absorbance was measured at 450 nm every other day until one of the cell lines reached more than 70% confluence, at which point the proliferation experiment was stopped.

| Cell cycle detection
The cells in the logarithmic growth phase were seeded in a 6-well plate at a density of 1 × 10⁶ cells/mL in 2 mL medium and in a 24-well plate in 1 mL medium, and the cells were collected after 24 hours. After centrifugation at 800 rpm for 5 minutes, the supernatant was discarded, and the cell pellet was collected, washed twice with pre-cooled PBS, and resuspended in pre-cooled 75% ethanol. Next, the cells were fixed at 4°C for more than 4 hours. After centrifugation at 1500 rpm for 5 minutes, the supernatant was discarded, the cells were washed once with 3 mL PBS, and finally 400 μL propidium iodide (PI, 50 µg/mL) and 100 µl RNase A (100 µg/mL) (KeyGen, China) were added, and the cells were incubated at 4°C for 30 minutes in the dark. Flow cytometry was used to detect 20 000-30 000 cells by a standard procedure, and the results were analysed by the cell cycle FACS software.

| Apoptosis detection
The cells were collected, washed once with PBS, centrifuged at 1000 r/min for 5 minutes, and the supernatant was discarded. According to the kit instructions, each sample (293T-WT, 293T-KO, HepG2-WT, HepG2-KO) was resuspended in 100 μL binding buffer, and then Annexin V-FITC (5 μL/tube) and PI staining solution (5 μL/tube) (KeyGen, China) were added and incubated for 10 minutes in the dark. Next, 400 μL binding buffer was added to each tube, the tubes were gently mixed, and flow cytometry analysis was performed within 1 hour.

| Wound closure assay
The cells were grown in 6-well plates until reaching 80-90% confluence. A wound was made using a plastic pipette tip drawn across the cell surface. The remaining cells were washed three times to remove any cell debris and incubated at 37°C with serum-free DMEM. Six different areas of the wound per well were photographed at 0, 12, 24 and 36 hours, and the migrating cells were compared.
The cell migration distance was determined by measuring the width of the wound and subtracting half of this value from the initial half-width of the wound. Each experiment was performed in triplicate, and three separate experiments were performed.

| Cell transmigration assay
The transmigration assay was conducted in 24-well transwell chambers for each sample (as described above in Apoptosis detection).

| Plate clone formation
The cells were digested with 0.25% trypsin into single cells, and the cell suspension was diluted to a concentration of 1 × 10⁴ cells per mL. Fifteen hundred cells in their medium were added to each well of a six-well plate and incubated at 37°C under 5% CO₂. When the clones could be distinguished by the naked eye in the six-well plate, the cell culture was stopped, and cells were fixed in 4% paraformaldehyde for 15 minutes. Crystal violet staining was performed, and the number of clones of more than 50 cells was counted under the optical microscope. The colony formation rate was calculated as follows: (average number of clones / number of inoculated cells) × 100%.

| Soft agar cloning
Cells were digested, centrifuged, counted and diluted to 1 × 10⁴ cells/mL. Five hundred μL of the agar and 500 μL of the single-cell suspension were added to each well of a 6-well plate and thoroughly mixed. The bottom of the 6-well plate was covered with a 1.2% agarose layer solidified at room temperature to form a double agar layer. The incubation was performed for 4 weeks at 37°C under 5% CO₂, with 100 μl complete medium added at intervals to prevent drying. The cell clone formation rate was calculated as follows: cell clone formation rate = (number of cell clones formed / number of inoculated cells) × 100%.

Tumour dimensions were measured with a calliper, and the volumes were calculated using the equation (length × width²)/2. Thirty-one days after inoculation, mice were sacrificed and the tumours were excised and subjected to pathological examination. An analytical balance was used to measure the tumour weight, and a Vernier calliper to measure the long diameter as the length and the short diameter as the width; the formula (length × width²)/2 was then used to calculate the tumour volume.

| LMNA gene expression in different tumours from HCC patients and different cancer cells
To explore the specific changes in the expression of the LMNA gene in various tumours, a comparative analysis was performed between normal tissues and corresponding tumours as well as on different types of cancer cells using data obtained from the ProteomicsDB, MaxQB and MOPED databases. The data revealed that LMNA expression was significantly up-regulated in brain cancer cells (U251), bone cancer cells (U2OS), kidney cells (293T) and liver cancer cells (HepG2) (Figure 1A). The Kaplan-Meier curve of patients with HCC showed a lower survival rate in patients with high LMNA expression (Figure 1B).

| LMNA knockout cell lines from 293T and HepG2 by CRISPR/Cas9 technology
To further study the function of the LMNA gene, LMNA knockout cell lines were obtained by the CRISPR/Cas9 technology (Figure 2A). Immunofluorescence and western blot were performed to verify whether the LMNA gene had been successfully knocked out. The immunofluorescence results revealed that neither cell line expressed the lamin A protein, while lamin B expression was unaffected (Figure 2B). The western blot results confirmed this finding (Figure 2C). In addition, the lamin C protein, which is translated from the same LMNA gene, was also down-regulated (Figure 2D).
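The two quantitative read-outs defined in the Methods, the colony formation rate and the calliper-based tumour volume, are simple formulas. The sketch below restates them; the example values are hypothetical and are not data from the study.

```python
def colony_formation_rate(n_clones, n_inoculated=1500):
    """Colony formation rate as defined in the Methods:
    (average number of clones / number of inoculated cells) x 100%."""
    return n_clones / n_inoculated * 100

def tumour_volume(length_mm, width_mm):
    """Xenograft tumour volume from calliper measurements:
    (length x width^2) / 2, with the short diameter as the width."""
    return length_mm * width_mm ** 2 / 2

# hypothetical illustration values
print(f"{colony_formation_rate(120):.1f}%")   # 8.0%
print(f"{tumour_volume(10, 6):.0f} mm^3")     # 180 mm^3
```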
| Knockout of the LMNA gene in 293T and HepG2 cells led to decreased cell migration and colony formation and improved transmigration ability
After the function tests, the tumorigenicity of the LMNA knockout cell lines was evaluated in vitro. The wound closure assay showed that the healing ability of both LMNA knockout cell lines was significantly decreased (Figure 4A). In contrast, the transwell migration tests showed that the transmigration ability of both LMNA knockout cell lines was enhanced (Figure 4B). Furthermore, the colony formation ability of each group of cells showed that the colony formation rate (Figure 4C) and the average size of the clones (Figure 4D) of the two LMNA knockout cell lines decreased significantly.

| WB analysis indicated that the ECM and cancer signalling pathways were changed after LMNA knockout
After concluding that the LMNA gene knockout resulted in a decrease in the tumorigenic capacity of tumour cells, the relevant molecular mechanism was investigated. RNA-seq analysis of the LMNA knockout cell lines and wild-type cells, which were obtained by the CRISPR/Cas9 technique described above, was performed, and the results showed that the two different batches of the four cell lines could be clustered (Figure 6A). In addition, the GO enrichment map (Figure 6B) and the KEGG pathway enrichment map (Figure 6C) showed the presence of many differentially expressed genes and regulatory pathways in LMNA knockout cells and wild-type cells. The expression of MMP2 and MMP9 in the ECM signalling pathway was then evaluated by WB, and the result showed that the expression of MMP2 and MMP9 was decreased (P < 0.01) in the two LMNA knockout cell lines (Figure 6D), which was consistent with the RNA-seq results.

| DISCUSSION
The role of the LMNA gene in tumours, particularly in the development and progression of HCC, and its molecular mechanism remain a challenge. In the current study, the relationship between LMNA and HCC was evaluated. Our hypothesis was that LMNA might play an oncogenic role in HCC, since HCC patients with higher LMNA expression showed a lower survival rate according to the Kaplan-Meier curve. It is well known that HCC is the most important pathological type of primary liver cancer, accounting for approximately 90% of cases. 17,18 LMNB1 (lamin B) expression is significantly up-regulated in HCC patients; thus, its expression may be used as a prognostic indicator in patients with early- and late-stage HCC. 19 Lamin A, a nuclear lamina structural protein like lamin B, is critical for the stabilization of the retinoblastoma tumour suppressor proteins pRb and p107. [20][21][22] These discoveries suggest that lamin A/B might be closely related to tumorigenesis. In this work, LMNA protein expression in HepG2 and 293T cells was significantly up-regulated, suggesting that the LMNA gene might be related to the degree of malignancy of tumour cells. In addition, the proliferation ability of HepG2 cells decreased after LMNA knockout, and the cell cycle was arrested. Previous studies showed that the knockdown of lamin A/C in human lung cancer cell lines leads to an increased tumour growth rate in vivo. 21,23 However, the knockdown of lamin A/C in human primary diploid fibroblasts leads to G1 arrest and inhibits cell proliferation. 24 Thus, our conclusion was that the knockout of the LMNA gene in different cells has different effects on cell proliferation and the cell cycle, potentially explaining the different roles of LMNA in different tumours.
In this study, we also found that P16 expression increased after knockout of LMNA in HepG2 cells. P16 expression significantly decreased after the overexpression of LMNA, indicating that the LMNA gene could regulate the expression of P16.

Figure 4. Influence of LMNA knockout on 293T and HepG2 cell migration, transmigration and clone formation ability. Knockout of the LMNA gene in 293T and HepG2 cells leads to (A) a decreased wound closure ability, (B) an increased cell transmigration ability, (C) a decreased formation of soft agar clones and (D) a decreased plate colony formation. Soft agar colony formation of the two LMNA knockout cell lines showed a decrease in the cloning rate and the size of the clones. Results were expressed as mean ± SD of triplicates (**P < .01).

Figure 5. Effect of LMNA knockout on the tumorigenicity in a nude mouse xenograft model. A, Tumorigenic analysis of 293T WT and knockout cells (2 × 10⁶ cells for each group) in nude mice (n = 6) for 30 days. B, Analysis of the tumour formed by HepG2 WT and knockout cells (2 × 10⁶ cells for each group) in nude mice (n = 6) for 30 days. C, Volume of the subcutaneous tumours formed by wild-type and LMNA knockout cell lines in nude mice (293T group *P < .05). D, Weight of the subcutaneous tumours formed by wild-type and LMNA knockout cell lines in nude mice. Results were expressed as mean ± SD of triplicates (*P < .05, **P < .01).

Figure 6. Differentially expressed genes in the LMNA knockout cell lines and their corresponding wild-types by RNA-seq. A, Heat map of 8 samples; the up-regulated genes are shown in red, and the down-regulated genes are shown in blue; the deeper the colour, the bigger the expression difference. The scale is log10. B, GO analysis of the differential gene sets in the wild-type and LMNA knockout cell lines (WT vs KO). C, KEGG pathway analysis of differential gene sets in the wild-type and LMNA knockout cell lines (WT vs KO). D, Western blot results of MMP2/9 protein expression. Results were expressed as mean ± SD of triplicates (**P < .01).

tissues. [33][34][35] The two LMNA knockout cell lines showed enhanced transmigration ability. The decrease in lamins potentiates cancer cell migration through narrow spaces, suggesting a potential role in metastasis. 23,[36][37][38] In our study, the migration ability of the two LMNA knockout cell lines was significantly lower than that of the wild-type cells, which is consistent with previous findings. 23,36,38 In contrast, the transwell migration ability increased in the knockout cell lines. Our hypothesis was that the loss of lamin A/C might result in a thinner nuclear membrane and an easier deformation to pass through narrow gaps; furthermore, the loss of

CONFLICT OF INTEREST
The authors confirm that there are no conflicts of interest.

AUTHOR CONTRIBUTIONS
Heng Liu: Data curation (lead); Formal analysis (lead); Investigation

DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.

Liping Wang https://orcid.org/0000-0001-6372-5939
Ming Chen https://orcid.org/0000-0001-5935-7694

Figure 7. Model of the LMNA gene regulating the migration and proliferation of HCC (HepG2) cells. LMNA binding to the chromosome resulted in the down-regulation of P16 and MMP2/9; collagen degradation was then inhibited, and the expression of CDK1 was up-regulated. Finally, cell proliferation was promoted and cell invasion was suppressed.
Health benefits of bluefin tuna consumption: (Thunnus thynnus) as a case study

Abstract
Consumers are increasingly interested in food products with high nutritional value and health benefits. For instance, fish consumption is linked with diverse positive health benefits and the prevention of certain widespread disorders, such as obesity, metabolic syndrome, or cardiovascular diseases. These benefits have been attributed to its excellent nutritional value (large amounts of high-quality fatty acids, proteins, vitamins, and minerals) and bioactive compounds, while being relatively low-caloric. Atlantic bluefin tuna (Thunnus thynnus) is one of the most consumed species worldwide, motivated by its good nutritional and organoleptic characteristics. Recently, some organizations have proposed limitations on its consumption due to the presence of contaminants, mainly heavy metals such as mercury. However, several studies have reported that most specimens hold lower levels of contaminants than the established limits and that their richness in selenium effectively limits the contaminants' bioaccessibility in the human body. Considering this situation, this study aims to provide baseline data about the nutritional composition and the latest evidence regarding the beneficial effects of Atlantic bluefin tuna consumption. A review of the risk-benefit ratio was also conducted to evaluate the safety of its consumption, considering the current suggested limitations to this species' consumption.

Introduction

Nowadays, consumers are increasingly aware of the beneficial effects on health of certain foods and the adoption of well-balanced diets. In this sense, most marine products, especially fish, are widely appealing for their high nutritional value (1). Fish consumption has been traditionally linked to many health benefits due to the high content of omega-3 polyunsaturated fatty acids (PUFAs) (2), with eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) being of particular interest (3)(4)(5)(6). Both compounds are well known for their positive effects on the cardiovascular system and the nervous system, as well as the control of inflammatory processes in vertebrates, being beneficial in various human pathologies and disorders like obesity or metabolic syndrome (7,8). More recently, fish proteins, peptides, and amino acids have attracted attention as they have shown properties similar to PUFAs (9).

Schematic representation of bluefin tuna (Thunnus thynnus) distribution and the main health impacts derived from its consumption.

In addition, fish
is also a significant source of vitamin B12 and vitamin D (10). Vitamin B12 is required to form red blood cells and DNA. Deficiency of vitamin D leads to rickets and low bone mineral density, and thereby to osteoporosis, among other pathologies. Fish is also an important source of essential minerals, like copper (Cu), manganese (Mn), zinc (Zn), and selenium (Se), which participate in many biological processes as part of numerous enzymes (10). Cu plays an important role as a catalytic cofactor in numerous critical enzyme reactions in metabolism (11). Mn deficiency results in poor reproductive performance, congenital malformations, growth retardation in offspring, and abnormal function of bone and cartilage (12). Zn is required for the stabilization of the structure of many proteins at all levels of cellular signal transduction (13). Finally, Se plays a fundamental role in reproduction, thyroid function, DNA replication and protection against microbes and oxidant compounds (14). Therefore, fish is considered one of the healthiest foods on a global scale and is a fundamental part of a healthy and well-balanced diet.

However, in recent years, some national and international food safety agencies, like the Spanish Agency for Consumer Affairs, Food Safety and Nutrition (AECOSAN) and the US Food and Drug Administration (FDA), among others, have recommended limiting the consumption of certain species of fish in children and pregnant women (15). The reason for this limitation is the level of certain heavy metals, like mercury (Hg), found in some blue fish such as Prionace glauca (blue shark), Isurus oxyrinchus (blue pointer or bonito shark), Xiphias gladius (swordfish) and Thunnus thynnus (Atlantic bluefin tuna) (16)(17)(18). When Hg reaches the sea from soil or the chemical industry, it accumulates in marine species throughout the food chain; the larger and longer-lived predatory fish are, the higher the levels found (19,20). Thus, large fish such as swordfish, bluefin tuna, and sharks accumulate these compounds in their tissues since they feed on smaller fish. Despite this, some recent studies point out that the risk of Hg intake due to the consumption of fishery products is not as substantial as commonly believed (21,22). The European Food Safety Authority (EFSA) has recently stated that limiting fish consumption due to the presence of Hg can lead to more significant health risks than moderate consumption (23). In fact, European legislation (Commission Regulation (EU) No. 1881/2006) established maximum levels of Hg in fish (0.5-1 mg/kg) based on the level of consumer exposure (24), but the majority of fishery products currently show levels much lower than the limits set in the legislation (21,22). This points to the current limitation on seafood consumption being somewhat exaggerated. In addition, several studies show that Se, an essential mineral commonly present in seafood, may also protect against the toxic effects of Hg, mainly its most dangerous organic form, methylmercury (25,26). Thus, the Hg:Se ratio should also be considered when assessing the risk linked to fish intake (11).
The present study is focused on the Atlantic bluefin tuna (ABFT) Thunnus thynnus (L., 1758), a top-level pelagic predator distributed throughout the Atlantic Ocean, from the Canary Islands to Ireland, with incursions into Norway and the North Sea, the Baltic, the Barents Sea, and the Mediterranean and Black Seas, and also off Canada and South America, along the Brazilian coast (27) (Figure 1). The species is very voracious and feeds on many other fishes, crustaceans, and cephalopods (28). The generic name of bluefin tuna incorporates three species: the ABFT Thunnus thynnus, the Pacific bluefin tuna Thunnus orientalis, and the southern bluefin tuna Thunnus maccoyii. Bluefin tuna Thunnus thynnus has been exploited in the Mediterranean for thousands of years, up to the end of the 20th century (29). Research on bluefin tuna farming began in the 1970s in Japan, and numerous business initiatives for farming have been launched since then (30). Several studies have been carried out in various fields of research such as reproduction, nutrition, genetics, pathology, diseases, and engineering, among others (31,32). In addition, numerous projects have been launched to improve the captive reproduction of this species, both from the business and research sectors. In a recent study, the European Market Observatory for Fisheries and Aquaculture Products (EUMOFA) showed that tuna is Europe's most consumed marine species, followed by cod, salmon, and Alaska pollock (33). The consumption of tuna in Europe is around 3.07 kg per capita, of which 99.2% is wild-caught and only 0.83% is farmed (33). There is a growing demand for fresh tuna Thunnus thynnus in Europe. Its production is currently limited to the Mediterranean Sea, mainly in Spain, France, and Italy, and to a lesser extent, Portugal, Malta, Croatia, Cyprus, and Greece (34). The bluefin tuna fishing industry has made a substantial economic contribution, with sales of more than 875 million euros in the Mediterranean Sea since 2018 (35). However, it is necessary to improve fisheries management to make fishing more sustainable from an environmental point of view. In this sense, the treatment and recovery of the waste generated by this industry could reduce these environmental issues. By-products from bluefin tuna contain several bioactive compounds of considerable economic value that can be extracted from this discarded biomass following the principles of the circular economy (36).

In this context, the present study focuses on the nutritional composition and contaminants of ABFT Thunnus thynnus, including the latest evidence on its human health impact and an assessment of the risk-benefit ratio of its consumption. Knowledge about the nutritional composition and risk-benefit ratio is valuable for consumers, enabling them to adjust their diet conscientiously according to their life cycle stage.
Nutritional composition

It is well known that fish consumption has numerous benefits for human health (37,38). ABFT Thunnus thynnus is valued as an excellent food worldwide due to its good nutritional and sensory quality, making it a favorite choice in the seafood market. Consequently, many organizations have been interested in developing aquaculture and processing technology to increase fishing and processing efficiency (39). In this section, we address the nutritional composition of ABFT. Different databases were consulted to provide information about the approximate composition of fish and shellfish. Among them are the global database of FAO/INFOODS, the USDA, the United States National Marine Fisheries Service, and the United Kingdom Department of Health. Table 1 shows the composition of macro- and micronutrients present in 100 g of ABFT meat, which is low in calories while providing high-quality proteins and lipids, fat-soluble vitamins, and various essential elements. In addition, the consumption of this species has been linked to a series of beneficial health effects due to the presence of bioactive compounds, including bioactive peptides present in proteins and PUFAs, mainly EPA and DHA (28,44,45). Nevertheless, it is important to underline that the nutritional composition of any fish may vary depending on environmental factors, age, sex, maturation stage, and the migratory behavior of each species.

Protein and amino acid profile

According to the data compiled, the protein content in bluefin tuna is 23 g/100 g of fresh product (Table 1). Considering that the usual protein range provided by fish is between 17 and 23 g, bluefin tuna has a protein content at the high end compared to other species. Similar results have been reported in farmed and wild bluefin tuna samples (21-23 g protein) (46,47). In 2012, the Spanish Ministry of Agriculture, Food, and Environment published a guide on nutritional declarations and health properties of food products, where ABFT was considered a high-protein food. Additionally, in its health declarations, the European Parliament stated that these proteins contribute to increasing and conserving muscle mass and maintaining bones under normal conditions (40). Experimental studies in animals have demonstrated various benefits derived from fish protein intake. These benefits include hypocholesterolemic effects attributed to the amino acid composition of fish, although the mechanism is not clear (48); antihypertensive effects due to the presence of angiotensin-converting enzyme (ACE) inhibitory peptides (49,50); and antiatherosclerotic effects, which are attributed to the antioxidant properties of peptides and fish protein hydrolysates (44). In addition, it has also been shown that proteins can improve insulin sensitivity, prevent metabolic syndrome, and reduce the risk of type 2 diabetes (44).

Figure 2. Fatty acid composition of ABFT (g/100 g).
Fish proteins are of better quality than those of red meat due to their lower collagen content and better digestibility, reported to be over 90% (6,51). The nutritional value of a protein depends on the amino acid composition (score), the content of essential amino acids, and its susceptibility to digestion (52)(53)(54). Currently, the suggested method for assessing protein quality is a chemical score, or a protein digestibility-corrected amino acid score (PDCAAS) (52)(53)(54). The amino acid profile of ABFT shows a high amount of histidine, isoleucine, leucine, lysine, threonine, tryptophan, valine, phenylalanine, and methionine (6). They are considered essential amino acids since humans do not have the ability to synthesize them, so they must be incorporated into the diet (Table 1). Table 1 presents the contribution of ABFT with regard to the reference daily intake. Just 100 g of ABFT covers between 44% and 69% of the requirements for all essential amino acids (55-58). Due to their amino acid profile, fish proteins can also benefit health, mainly through antioxidant and anti-inflammatory effects. For example, an adequate supply of histidine through the diet provides benefits against age-related neurodegenerative and cognitive disorders, metabolic syndrome, rheumatoid arthritis, and inflammatory bowel disease (59). The three branched-chain amino acids, leucine, isoleucine, and valine, also play a fundamental role in regulating energy homeostasis and metabolism, innate and adaptive immunity, glucose metabolism, and lipid and protein synthesis. Therefore, current evidence indicates that an adequate supply of these amino acids through the diet could positively affect the parameters associated with metabolic diseases (60). Another aspect to highlight in the amino acid content of bluefin tuna is the contribution of phenylalanine and tryptophan, as both amino acids are considered natural antidepressants (61). Tryptophan is additionally vital for the correct functionality of the gut-brain axis and the immune system (62).

On the other hand, the protein content is essential from an organoleptic point of view, since fish species containing small amounts of protein tend to lose a considerable amount of water during cooking, which ruins the texture of the meat (47). Thus, the high protein content of this species also contributes to its good organoleptic properties.

Lipid content: fatty acid profile and ω-3/ω-6 ratio

Lipids are macronutrients needed in the human diet and can affect health depending on the type and proportion of the dietary fatty acids consumed. It has been stated that monounsaturated fatty acids (MUFAs) and PUFAs exert beneficial properties in human health (63). The lipid content of ABFT corresponds to 12 g/100 g in both wild and farmed specimens (Table 1) (64). Due to its high lipid content, this species is considered a blue fish (64). The guidelines published in 2012 by the Spanish Ministry of Agriculture, Food and Environment declared that ABFT is low in saturated fats and high in PUFAs and that the latter contribute to the functioning of a normal heart (40). The fatty acid profile of ABFT is shown in Figure 2.
PUFAs represent the main contribution to the total content of fatty acids (3.58 g/100 g) in Atlantic bluefin tuna. Within this group, DHA (2.18 g/100 g), EPA (0.693 g/100 g), and DPA (0.306 g/100 g) are the most abundant. Regarding MUFAs, oleic acid (2.263 g/100 g) and palmitic acid (0.397 g/100 g) stand out (Figure 2). Several studies have reported similar results in farmed ABFT, with 3.6 g/100 g of PUFAs (47,64). One minor difference was that the leading group of fatty acids corresponded to MUFAs, accounting for 42% of the total lipid profile (1.2 g/100 g of oleic acid and 1.1 g/100 g of erucic acid). These differences could be due to the diet received by the species in cultivation, the sex, or the size of the animals under study. Other factors that may influence the lipid composition of fish include environmental factors, age, state of maturation, and migratory behavior (41). The lipids present in ABFT have exceptional quality indices: an excellent omega-3/omega-6 ratio (9/1), an adequate polyunsaturated/saturated fatty acid ratio (1.16), and an adequate polyunsaturated/monounsaturated/saturated fatty acid ratio (2.03) (41,47,64). Furthermore, a low atherogenicity index (AI), a low thrombogenicity index (TI), and a high ratio of hypocholesterolemic to hypercholesterolemic fatty acids (HH) have been reported, indicating that the intake of this fish may exert hypocholesterolemic effects (4,64,65). Therefore, the consumption of ABFT could be beneficial in preventing cardiovascular diseases (66).

Various organizations such as the FAO (Food and Agriculture Organization), the Academy of Nutrition and Dietetics, and the European Association for Cardiovascular recommend a minimum intake of EPA and DHA of 250 mg for adults and, in the case of pregnant and lactating women, the amount of DHA should increase by 100-200 mg (67)(68)(69)(70). In this sense, ABFT guarantees a good quantity of fatty acids. One hundred grams of tuna meat provides 0.693 and 2.18 g of EPA and DHA, respectively, contributing more than 100% of the reference daily intake. Consumption of these fatty acids has essential roles in human health, including promoting cardiovascular health and protection against neurological and inflammatory conditions (68,71). Observational studies have demonstrated a protective effect of fish intake on cardiovascular disease risk. In agreement, various scientific organizations affirm that the consumption of at least two servings of fish per week, of which at least one is an oily fish, is associated with a decreased risk of death from coronary heart disease of at least 25% compared to those who do not eat fish (67-70,72).

Carbohydrates

Bluefin tuna tissue comprises mainly lipids and proteins, so the proportion of carbohydrates is minor, almost insignificant. Table 1 shows that the carbohydrate content is 0 g/100 g (73).
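As a quick consistency check on the reported omega-3/omega-6 ratio, the per-100 g figures quoted above can be combined directly. Treating all non-omega-3 PUFAs as omega-6 is a simplifying assumption of this sketch, since minor omega-3 species are omitted.

```python
# Figures quoted above, in g per 100 g of ABFT
pufa_total = 3.58          # total PUFAs
n3 = 2.18 + 0.693 + 0.306  # DHA + EPA + DPA
n6 = pufa_total - n3       # remainder, assumed to be omega-6

print(f"omega-3/omega-6 ~ {n3 / n6:.1f}/1")  # ~7.9/1, close to the reported 9/1
```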
Vitamins

ABFT stands out for containing significant amounts of B-complex vitamins, including thiamine (B1) 0.241 mg/100 g, niacin (B3) 17.8 mg/100 g, pyridoxine (B6) 0.46 mg/100 g and cobalamin (B12) 5 μg/100 g. Thus, 100 g of bluefin tuna provides between 25% and 50% of the reference daily intake of these vitamins (Table 1). The report published by the Spanish Ministry of Agriculture, Food and Environment in 2012 established that bluefin tuna is a good source of vitamins. Within the nutritional declaration, it is also indicated that thiamine contributes to the normal functioning of energy metabolism, the nervous system, the heart, and psychological functions; niacin contributes to the maintenance of the skin and mucosa and reduces fatigue; and pyridoxine and cobalamin contribute to the normal functioning of the immune system, the formation of red blood cells, and the process of cell division (40).

Additionally, bluefin tuna is rich in fat-soluble vitamins such as vitamins A, D, and E, and its consumption can contribute between 25% and 80% of the reference daily intake. Consumption of these vitamins is important because they contribute to normal iron metabolism, immune system functioning, and the cell differentiation process. In the particular case of vitamin D, it contributes to the maintenance of normal bones and teeth, the maintenance of normal calcium levels in the blood, and the normal absorption and utilization of calcium and phosphorus (74-76). On the other hand, although to a lesser extent, ABFT is also a source of vitamin E, which stands out for its powerful antioxidant and free radical scavenging role (77).

Minerals

Minerals have a crucial role in human health and metabolism, and their intake through the diet is essential (78). In this context, ABFT constitutes an excellent food source of minerals. Table 1 reports the contribution of minerals in 100 g of tuna, highlighting 28 mg of Mg and 82 μg of Se. According to the nutritional declarations published in 2012 by the Spanish Ministry of Agriculture, Food and Environment (40), ABFT is an excellent source of these minerals. Regarding human health, Mg contributes to normal energy metabolism, electrolyte balance, normal muscle and nervous system function, normal protein synthesis, and cell division (40,79,80). Different health benefits are attributed to Se; among them are its contribution to the normal functioning of the immune system, normal thyroid function, and the protection of cells against oxidative damage, since it is part of many selenoproteins, which are responsible for reduction-oxidation (redox) reactions, antioxidant defense, thyroid hormone metabolism and immune responses (81,82). Furthermore, various studies report that Se can protect against environmental contaminants, such as mercury (Hg), commonly found in some fish species (83)(84)(85)(86)(87), but this will be discussed later (see Table 2).

Regarding the reference daily intake, it has been observed that 100 g of bluefin tuna can contribute 149% of the Se recommendation. Additionally, ABFT also contains iodine (36.7 μg/100 g), zinc (1.5 mg/100 g), and phosphorus (200 mg/100 g), and is considered a source of these minerals (42). On the other hand, ABFT has a low sodium content (43 mg/100 g), and the nutritional declaration classifies it as low in sodium. Thus, its consumption is attractive for low-sodium or low-salt diets, recommended, for example, to patients with hypertension.
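The percentage-of-reference-intake figures quoted above follow from simple arithmetic on the Table 1 values. In the sketch below, the adult reference intakes (for example, 55 μg/day for Se) are assumed values commonly used for labelling, not numbers taken from the paper's tables.

```python
# Per-100 g amounts quoted above, against assumed adult reference intakes
serving = {"protein (g)": 23, "selenium (ug)": 82, "magnesium (mg)": 28}
reference = {"protein (g)": 50, "selenium (ug)": 55, "magnesium (mg)": 375}

for nutrient, amount in serving.items():
    pct = amount / reference[nutrient] * 100
    print(f"{nutrient}: {pct:.0f}% of the reference intake")
# selenium works out to ~149%, matching the contribution quoted in the text
```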
Health benefits associated with blue fish consumption

As previously mentioned, blue fish and ABFT are highly nutritious seafood products of great interest in the market and among health-conscious consumers (88). Numerous studies have linked the chemical composition of these foods with many biological properties and beneficial effects on health. These beneficial effects are mainly attributed to PUFAs, especially EPA and DHA. Additionally, fish provide other high-quality nutrients, such as proteins, vitamins, and minerals, that may have a synergic effect, reducing the incidence of certain diseases (89). The health benefits associated with fish consumption will be discussed in this section and are summarized in Table 3 and Figure 3.

Cardiovascular diseases

Globally, cardiovascular diseases (CVDs) are still the leading cause of mortality. According to the World Health Organization (WHO), about 17.9 million people died in 2019 from CVDs, which represents 32% of all global deaths (97). The major risk factors that may trigger CVDs include smoking, hypertension, obesity, dyslipidemia, psycho-social stress, and an unhealthy and sedentary lifestyle (98). Current first-line treatments effectively reduce CVD risk; however, adherence to healthier dietary patterns is increasingly encouraged, since certain nutrients can contribute to keeping this risk to a minimum and can be used as a preventive tool (98,99). In this context, fish represents an important cardioprotective dietary component, attributed to its high omega-3 long-chain PUFA content, especially EPA and DHA (99). Many studies have correlated a higher fish consumption with a lower risk of CVDs, including stroke (98), coronary heart disease (99), hypertension, arrhythmias (100), and cerebrovascular disease (101). Recently, a dose-response meta-analysis showed that a fish intake of 20 g/day significantly reduced total CVD mortality (by 4%) (102). In a further study, these authors also found a significant association between a fish intake of 15 g/day and a reduction of myocardial infarction risk by 4% (99). Increasing fish consumption to 100-700 g/week was significantly associated with a stroke risk reduction of 2%-12% (103). Some differences were observed in this association between geographical regions. While a pronounced inverse relationship between fish consumption and CVD risk was found in Asian countries, studies conducted in Western countries reported a modest U-shaped association (102). This means that both low and high fish consumption could lead to higher CVD risk. Possibly, this variation may be attributed to the different fish cooking methods employed in Asian (mainly steaming and stir-frying) and Western countries (deep-frying), the latter being less healthy (102).

Many biological mechanisms are responsible for the cardioprotective effects attributed to omega-3 long-chain PUFAs. Among them are anti-inflammatory (104) and antioxidant action (105), antiarrhythmic and antithrombotic action, regulation of blood lipid levels, protection of vascular endothelial cells, and immunomodulatory activity (99,100,106).
Neurological diseases

Blue fish consumption has also shown beneficial neuroprotective properties attributed to its omega-3 long-chain PUFA composition. These compounds have a crucial role in proper brain development, neurotransmission, neuronal differentiation and growth, gene expression, and the modulation of ion channels (107,108). It has been stated that DHA can enhance blood flow, reduce inflammation and diminish amyloid-β pathology, thus preventing a primary cognitive decline (107). In addition, DHA has vital functions in different stages of the neuronal degeneration process, since this compound can maintain membrane fluidity, stimulate neurotrophic factors, diminish oxidative stress and cell death, and exert anti-inflammatory activities (109). By contrast, DHA levels in the brain decrease with aging, resulting in cognitive decline (108). In a meta-analysis, the impact of DHA supplementation alone or in combination with EPA on specific memory domains (working, episodic and semantic) was studied in adults. These authors found that supplementation with 1 g/day DHA/EPA significantly improved episodic memory in adults with mild memory problems, while DHA supplementation alone induced changes in semantic and working memory to a lesser extent (110).

Regarding the incorporation of fish into the diet as a good source of DHA and EPA, some authors found that moderate fish consumption and supplementation with omega-3 long-chain PUFAs (0.5-1 g/day) led to a significant reduction in depression prevalence with a U-shaped association, regardless of sex, cardiometabolic disturbances or lifestyle (111). Another study reported that a decreased omega-6/omega-3 PUFA ratio, a reduction of omega-6 PUFAs, and increased EPA and DHA levels in a Mediterranean-style diet supplemented with fish oil significantly enhanced mental health in patients with depression over 3 and 6 months. The addition of fish oil to the diet improved omega-3 PUFA levels while reducing the omega-6 ones (112).

Metabolic diseases

Metabolic syndrome is a multifactorial disorder resulting from the interaction between genetic, metabolic and environmental factors that can increase the risk of suffering CVDs, type-2 diabetes and all-cause mortality (113). It has been stated that fish consumption could improve metabolic syndrome features such as insulin resistance, abdominal obesity, hypertension, and dyslipidemia, since fish containing omega-3 PUFAs can reduce plasma triglycerides, blood pressure and fasting blood glucose while increasing high-density lipoprotein (HDL) cholesterol (113,114). In addition to omega-3 PUFAs, fish also contain high-quality nutrients such as vitamins, minerals, and proteins, which could contribute to reducing metabolic syndrome (113). In a cross-sectional analysis, higher fish consumption in Norwegian adults was related to a better lipid profile, with high HDL cholesterol levels and reduced triglyceride content. These authors also observed that participants consuming fish once a week (aged between 60 and 70 years) showed a 36% lower risk of suffering metabolic syndrome compared to those consuming fish at a low frequency (115).

Similarly, in another cross-sectional study, higher fish consumption in Iranian female adults led to a lower prevalence of metabolic syndrome features, with lower blood pressure and higher HDL cholesterol (113).
Many biological mechanisms have been proposed to understand the beneficial effects of omega-3 PUFAs on reducing metabolic syndrome. Among them, omega-3 PUFAs may alter the activity of transcription factors involved in inflammatory pathways and liver lipid metabolism (116). In this way, omega-3 PUFAs may promote triglyceride oxidation in the liver, adipose tissue and skeletal muscle, thus avoiding fat accumulation in these tissues (117). In addition, omega-3 PUFAs can enhance insulin sensitivity by reducing adipose tissue inflammation and inducing the synthesis of peroxisome proliferator-activated receptor alpha (117,118).

Immunological system-related diseases

The immune system protects the host from infectious agents, bacteria, and viruses. This system involves various blood-borne factors and cells (119). The phospholipids of human immune cells hold a high concentration of omega-6 PUFAs (6%-10% linoleic acid, 1%-2% dihomo-γ-linolenic acid and 15%-25% arachidonic acid) and low concentrations of omega-3 PUFAs (<1% α-linolenic acid, 0.1%-0.8% EPA, and 2%-4% DHA). The immune processes are controlled by proteins, pro-inflammatory cytokines, eicosanoids, and miscellaneous compounds (120). It has been stated that arachidonic acid is the primary precursor of eicosanoids and leads to the production of inflammatory mediators, controlling inflammatory cell activities, cytokine production, and balance within the immune system (121).

Eicosanoids are a family of bioactive mediators that modulate the intensity and duration of inflammatory and immune responses. Therefore, by altering the arachidonic acid concentration, cells will have less ability to produce eicosanoids (121)(122)(123). Some studies concluded that omega-3 long-chain PUFAs, especially EPA and DHA, could reduce immune cells' capacity to synthesize eicosanoids from arachidonic acid. The levels of eicosanoids are markedly reduced when the amount of arachidonic acid is limited (122,124). Thus, human diets rich in fish or fish oil may increase the concentration of EPA and DHA in immune cells. The anti-inflammatory activity attributed to omega-3 PUFAs may underlie their immune function. Some studies conducted in animals, mainly in rats, demonstrated that omega-3 PUFAs affected the production of inflammatory cytokines (120,121,125). In fact, incorporating fish oil into the diet reduced the arachidonic acid proportion while increasing EPA and DHA levels in immune cell phospholipids (126,127). Studies carried out in humans also demonstrated the immunomodulatory effects of omega-3 PUFAs, resulting in a significant decrease in the generation of pro-inflammatory leukotriene B4 and modulating cytokine production (128-130). Studies suggested that when sufficient concentrations of fish oil are consumed, significant anti-inflammatory effects are obtained. According to some authors, 1.35-2.7 g EPA per day is the threshold intake required to achieve a significant immunological effect (131). From these results, it may be concluded that n-3 fatty acids can be used as therapy for any type of inflammation that involves an undesirable immune response (121). Therefore, the regular intake of ABFT may lead to a reduction in the level of inflammation and exert a crucial immunomodulatory effect.
Body weight control

Obesity is considered an energy balance disorder leading to adipose tissue dysfunction. It is associated with high levels of inflammation and metabolic abnormalities (high levels of cytokines) (132). In fact, this disorder usually appears when the omega-6:omega-3 ratio is increased and serum phospholipid n-3 concentrations are decreased (93). Being overweight can lead to the development of other conditions, such as insulin resistance, type 2 diabetes, and some types of CVDs (133,134). Women have a higher prevalence of obesity and overweight than men, and it increases with age (135). In 2017, approximately 39% of the world's adult population was overweight, and 13% was obese (136). Although there are various strategies to treat obesity and overweight, such as pharmaceuticals, surgery, or dietary supplements, the prevalence of obesity continues to rise during this decade (117). For this reason, healthy strategies to help in weight loss and reduce body fat are needed. Omega-3 PUFAs might be good candidates to treat obesity and its related side effects due to their important role as anti-inflammatory agents (117), reducing cytokines such as IL-1, IL-6, and TNF-α (137-139).

Numerous mechanisms have been proposed to explain the effects of omega-3 PUFAs, particularly EPA and DHA, on reducing body weight and enhancing the metabolic profile, including alterations in adipose tissue gene expression, changes in adipokine release, appetite suppression, alterations in carbohydrate metabolism and increased fat oxidation, among others (117). Despite the knowledge of these mechanisms, more studies are needed to reach a conclusion. Some works have assessed the effects of omega-3 PUFAs on body weight control both in animals and humans, concluding that EPA and DHA play a key role in promoting protection against body fat gain (140-142). For instance, incorporating omega-3 PUFAs into a rat diet for 3 weeks reduced the fat weight of subcutaneous and visceral adipose tissues by up to 30% (142). Similarly, other authors demonstrated that obese mice fed a diet rich in omega-3 PUFAs showed a significant loss of weight (143). Other studies dealt with the effects of supplementing the diet of overweight or obese young adult men with lean fish, fatty fish or fish oil capsules for 8 weeks (144,145). They found a significantly higher weight loss with fish-related supplementation compared with a diet without fish. On the other hand, Schulz et al. (146) found that regular fish intake led to low weight loss in men and higher weight gain in women. Another study concluded that adopting a Mediterranean diet, including a higher consumption of fish rich in omega-3 PUFAs, did not lead to significant weight changes in men and women compared with lower fish consumption (47). Nonetheless, based on clinical studies, the impact of omega-3 PUFAs on body composition is still uncertain, since there are few data available to reach a conclusion.

For this reason, there is still much controversy about whether omega-3 PUFAs exert significant anti-obesity effects (90,93,96). In this context, although the anti-obesity effects of omega-3 PUFAs are not yet clear, incorporating these fatty acids into the diet may mitigate weight gain or help maintain weight loss (117). Moreover, they clearly play a beneficial role in obese or overweight people, contributing to reducing inflammatory cytokine levels (137-139) and inflammatory processes (117).
Importance of fish consumption during the life cycle stages

4.1 Recommended intake per age group

Blue fish consumption is highly relevant throughout the life cycle. Starting with pregnant women, if the intake of this type of fish does not meet the recommended contributions, deficiencies can contribute to malformations in the fetus and defects in the neural tube. In fact, the EFSA recommends the consumption of blue fish because it can help prevent cardiovascular diseases. In the first 6 months of life and even in young children, insufficient consumption of blue fish can affect cognitive development, causing adverse effects on brain and immune function (147, 148). However, there is still no specific information or data on the optimal amounts of ABFT for pregnant women and children under 3 years (149). In children between three and 12 years, the recommended consumption is 50 g per week, with a total of 120 g per month (150). In adults, according to EFSA, the recommended intake is 125 g per week (148).

As mentioned, ABFT provides vitamins and minerals that stand out in its nutritional composition. ABFT is a source of B vitamins (B6, B3 and B12), vitamin D, and minerals such as phosphorus or selenium, which are present in high amounts. For instance, one serving of tuna provides 250% of the recommended intake of vitamin D (151). Table 4 shows the nutritional contribution of each 100 g portion of ABFT as well as the recommended daily intakes for different age groups, also differentiated by sex. For instance, every 100 g portion of this species provides 23 g of protein, which accounts for nearly half of the recommended daily intake. Similarly, a portion of ABFT contributes to fulfilling the recommended intake of minerals, as 100 g of ABFT provides 82 μg of Se (Table 4).

It is important to note that, according to different investigations of samples obtained from tuna, the ingestion of toxic elements does not pose any risk to consumers' health. However, regular or excessive consumption of tuna species could exceed the recommended weekly intake or the lower confidence limit of the reference dose, although this does not necessarily pose a significant risk to consumers (149).
Risk-benefit ratio: toxicological assessment

EFSA has provided risk-benefit assessments of fish consumption based on scientific resources that expose the beneficial effects of fish intake and the possible risks associated with some contaminants such as Hg or methylmercury (MeHg) (23, 150, 154, 155). In this sense, in 2012 EFSA updated the tolerable weekly intake (TWI) of MeHg, establishing the limit at 1.3 μg/kg of body weight, and at 4 μg/kg of body weight for inorganic Hg (150). These limits were adopted based on the assessment of different outcomes. Among them, several biomarkers were used to provide precise data on MeHg exposure, such as red blood cells, hair, toenails, or fingernails, whereas plasma and urine samples were preferred for Hg. Data obtained from in vivo assays based on different experimental animals, as well as epidemiological studies from the Faroe Islands and Seychelles, such as those on Hg and MeHg toxicity in prenatal neurodevelopment, were also used as references. To assess dietary exposure, it was assumed that 100% of the total Hg content in fish was present as MeHg, with a bioavailability in the body of 100%. Subsequently, EFSA issued a scientific statement in which panel members addressed the benefits of fish consumption, such as those due to the PUFA content and its capacity to counteract the risks of MeHg. Considering all these data and factors, EFSA concluded that an intake of 1 to 4 servings of fish per week was associated with beneficial effects in adults with coronary artery disease. In this range of fish consumption, health benefits outweigh risks, especially compared to people who do not consume fish (23). In addition, the EFSA stated that this frequency of consumption (1-4 servings/week) has been associated with a lower risk of mortality from coronary heart disease in adults and is compatible with current intakes and recommendations in most European countries. This statement refers to fish per se and considers the beneficial and adverse effects of nutrients and non-nutrients, including contaminants such as MeHg, which may be present in fish (23). However, in its risk assessment, EFSA considers children under 10 years of age and women who are pregnant, lactating, or expecting to become pregnant as populations sensitive to exposure to high levels of Hg or MeHg. Therefore, for these groups, the consumption of fish species with lower amounts of these contaminants is recommended (155). Indeed, various national food safety agencies have issued recommendations to limit the consumption of certain types of fishery products in these susceptible populations. For instance, AESAN recommends that these susceptible populations avoid the consumption of swordfish, shark, bluefin tuna, and pike (156).
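To make the TWI concrete, the short sketch below converts EFSA's limit of 1.3 μg MeHg per kg body weight per week into an allowable weekly amount of tuna. The body weight and the mercury concentration used here are illustrative assumptions, not values reported in this review; following EFSA's worst case, all fish Hg is treated as MeHg with 100% bioavailability.

```python
# Back-of-the-envelope check of EFSA's TWI for MeHg (1.3 ug/kg bw/week).

TWI_MEHG_UG_PER_KG_BW = 1.3   # ug MeHg per kg body weight per week (EFSA, 2012)
BODY_WEIGHT_KG = 70.0         # assumed adult body weight (illustrative)
HG_IN_TUNA_MG_PER_KG = 0.8    # assumed Hg concentration in tuna, mg/kg w.w.

weekly_budget_ug = TWI_MEHG_UG_PER_KG_BW * BODY_WEIGHT_KG  # 91 ug MeHg per week
hg_ug_per_g = HG_IN_TUNA_MG_PER_KG                         # 1 mg/kg equals 1 ug/g
max_tuna_g_per_week = weekly_budget_ug / hg_ug_per_g

print(f"Weekly MeHg budget: {weekly_budget_ug:.0f} ug")
print(f"Tuna staying under the TWI: {max_tuna_g_per_week:.0f} g/week")  # ~114 g
```

Under these assumptions, a 70 kg adult stays below the TWI with roughly one 100-125 g serving per week of tuna at that contamination level; the figure scales linearly with body weight and inversely with the Hg concentration of the fish actually consumed.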
Various authors have pointed out that the risk-benefit assessment should consider the apparent protective effect of some nutrients, such as PUFAs and Se, against Hg and MeHg (83, 84, 87, 88, 157-159). Regarding the protective effect of PUFAs, DHA seems to protect against oxidative stress induced by MeHg in neuronal cells (160-162). In this sense, a study evaluated the dose-response relationship between maternal fish consumption and the child's verbal intelligence quotient (IQ). It was found that a maternal intake of 100 mg of DHA per day may prompt a gain of 2.8 points of verbal IQ in 18-month-old children (163). Similarly, other works reported that the continuous consumption of fish by pregnant women led to a weaker relationship between intrauterine exposure to MeHg and children's IQ (164, 165). In accordance with the Scientific Opinion of EFSA regarding the risks for public health related to the presence of Hg and MeHg, omega-3 LC PUFAs can counteract the negative effects of exposure to MeHg (150). In this line, the most studied nutrient for protection against MeHg appears to be Se. The binding affinity between Hg and Se is a million times greater than that between Hg and sulfur in analogous forms. Indeed, several attempts have been made to design products with Hg-detox capacity using Se (e.g., Hg selenide). Possible protective modes of action of Se against MeHg toxicity include antioxidant effects, increased glutathione peroxidase activity, glutathione synthesis, elevated selenoprotein levels, and increased MeHg demethylation (157, 166). In this sense, it is suggested that a molar excess of Se compared to Hg can protect against its toxic effects. This could explain why studies of maternal populations exposed to foods that contain Hg in molar excess of Se, such as pilot whale meat, have found adverse outcomes in children, while populations exposed to Hg but showing a constant pattern of consumption of sea fish rich in Se showed lesser or no adverse effects (167). Subsequently, a new criterion was proposed to assess the risks of Hg exposure, the Se Health Benefit Value (HBVSe), which simultaneously evaluates Hg exposures and dietary Se intakes, particularly regarding Se consumption during pregnancy (157). Another risk assessment proposal is the benefit-risk value (BRV); this equation attempts to reflect either excess Hg or excess Se, in which case it can be assessed with respect to adequate Se intake. Various studies have shown that benefits outweigh risks when it comes to bluefin tuna consumption, as the molar ratio of Se:Hg oscillates between 1.3 and 20 and always implies a molar excess of Se compared to Hg (Table 5). In addition, HBVSe values are reported to oscillate between 7.9 and 296 (Table 5); therefore, it is likely that the high Se content relative to Hg prevents Hg-induced toxicity (88, 176, 178, 179). On the other hand, some authors suggest considering the bioaccessible fraction of Se and Hg to provide a more accurate risk assessment (180-182). In this line, in vitro gastrointestinal digestion techniques provide valuable data about the bioaccessibility of Hg and MeHg, which can decrease after cooking to around half of the original concentration (181). This change in bioaccessibility has been attributed to the effect of temperature on the structural conformation of fish muscle proteins, which may cause loss of native protein structure. These alterations could prevent the access of the enzymes used in in vitro gastrointestinal digestion models to the structures to which Hg is bound, such as thiol groups (181).
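Stepping back to the Se:Hg molar criteria discussed above, the sketch below converts mass concentrations (mg/kg wet weight, as in Table 5) into molar terms. The sample values are hypothetical, and the HBVSe formula used here is the one commonly attributed to Ralston and colleagues; it is an assumption of this illustration, not a formula stated in this review.

```python
# Sketch: Se:Hg molar ratio and Selenium Health Benefit Value (HBVSe)
# from mass concentrations in mg/kg wet weight. Assumed formula
# (Ralston et al.'s proposal): HBVSe = ((Se - Hg) / Se) * (Se + Hg),
# with both concentrations expressed in umol/kg.

MOLAR_MASS_SE = 78.97   # g/mol
MOLAR_MASS_HG = 200.59  # g/mol

def umol_per_kg(conc_mg_per_kg: float, molar_mass: float) -> float:
    """Convert a mass concentration (mg/kg) into a molar one (umol/kg)."""
    return conc_mg_per_kg / molar_mass * 1000.0

def se_hg_molar_ratio(se_mg_kg: float, hg_mg_kg: float) -> float:
    return umol_per_kg(se_mg_kg, MOLAR_MASS_SE) / umol_per_kg(hg_mg_kg, MOLAR_MASS_HG)

def hbv_se(se_mg_kg: float, hg_mg_kg: float) -> float:
    se = umol_per_kg(se_mg_kg, MOLAR_MASS_SE)
    hg = umol_per_kg(hg_mg_kg, MOLAR_MASS_HG)
    return (se - hg) / se * (se + hg)

# Hypothetical tuna sample: 1.0 mg/kg Se and 0.6 mg/kg Hg (illustrative values)
print(round(se_hg_molar_ratio(1.0, 0.6), 2))  # ~4.2: clear molar excess of Se
print(round(hbv_se(1.0, 0.6), 1))             # ~12.0: positive, benefit-leaning
```

Because Hg is roughly 2.5 times heavier than Se per mole, a sample whose Se and Hg mass concentrations look similar can still carry a substantial molar excess of Se, which is why molar rather than mass ratios are used in these criteria.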
In agreement with these cooking-related outcomes, another work also found reductions of up to 40% in the bioaccessible fraction of Hg in fish after cooking (183). Therefore, for a more accurate risk assessment, all the criteria mentioned above must be considered. Nevertheless, further research in the area is necessary to study the synergistic effects between the different variables and to improve the understanding of the health repercussions of fish and shellfish intake.

Conclusion

Atlantic bluefin tuna, Thunnus thynnus, is a highly nutritious species rich in high-quality proteins, lipids, fat-soluble vitamins, and various elements essential for the proper functioning of the body. Within this nutritional composition, bioactive peptides and the omega-3 polyunsaturated fatty acids EPA and DHA have been linked to beneficial effects. In this sense, several population studies have reported the positive effects of fish consumption on human health, including protection against cardiovascular, neurological, metabolic, and immune diseases, and body weight regulation. Besides, consuming this species helps achieve the intake recommendations of several vitamins and minerals. However, some limitations for vulnerable population groups, such as young children and pregnant women, should be considered due to the presence of contaminants, especially mercury and methylmercury. Nevertheless, several authors have pointed to the capacity of high selenium levels to counteract the negative effects of these contaminants. Selenium has been suggested to form complexes that reduce the bioaccessibility of mercury and methylmercury, which would decrease their harmful effects. In this sense, some studies have evaluated this species' risk-benefit ratio, showing a minimal risk in most cases. Nevertheless, further research and assessments of the risk of tuna consumption are still necessary to provide reliable data and help safeguard human health, especially regarding the bioaccessibility of heavy metals, the toxicity of selenium complexes, and deeper evaluations of risk-benefit and exposure. These outcomes would reinforce and increase the current knowledge about the safety of Atlantic bluefin tuna consumption and help define more accurate consumption recommendations.

FIGURE 3. Biological activities associated with bluefin tuna consumption.
TABLE 2. Amino acid profile, recommended daily intake values and percentage of contribution to the daily diet of the amino acids present in bluefin tuna (55-58).
TABLE 3. Different studies about omega-3 benefits in human health (e.g., n−3 PUFA supplementation reduced patients' triglyceride and insulin levels and inflammatory cytokines (IL-1β, IL-6, TNF-α) without a significant reduction in body weight or body fat (96)).
TABLE 4. Nutrition offered by 100 g of bluefin tuna Thunnus thynnus and the recommended daily values of certain nutrients for several targeted populations (40-43, 152, 153). *Do not consume any other fish in this category in the same week.
TABLE 5. Comparison of Hg and Se concentrations (mg kg−1 w.w.) and Se/Hg molar ratios and HBVSe in farmed or wild Thunnus sp. samples.
Sharing a River with Downstream Externalities

We consider the problem of efficient emission abatement in a multi-polluter setting, where agents are located along a river in which net emissions accumulate and induce negative externalities on downstream riparians. Assuming a cooperative transferable utility game, we seek welfare distributions that satisfy all agents' participation constraints and, in addition, a fairness constraint implying that no coalition of agents should be better off than it would be if all non-members of the coalition did not pollute the river at all. We show that the downstream incremental distribution, as introduced by Ambec and Sprumont (2002), is the only welfare distribution satisfying both constraints. In addition, we show that this result holds true for numerous extensions of our model.

Introduction

Industries and cities all around the world have historically been concentrated along rivers, since rivers provide means of transportation, food production, energy generation and drinking water. Because of this intensive utilization, many rivers and streams have been and still are being heavily polluted. Excessive pollution worsens water quality, which reduces economic profits and negatively impacts wildlife and human health. One specific characteristic of rivers is that pollutants discharged into the river are carried downriver. As a consequence, it is the downstream riparians rather than the polluter himself who bear the negative consequences of the emissions discharged into the river. Moreover, if upstream polluters and downstream riparians belong to different jurisdictions, polluters may have little incentive to abate their emissions, because they cannot be held liable for the pollution damage caused in other jurisdictions.

In this paper, we consider the problem of efficient emission abatement among agents located along a river, where upstream emissions cause negative externalities to all downstream agents. This setting can be characterized as a cooperative transferable utility game with two sources of externalities. First, upstream emissions impose negative externalities on downstream agents. Second, cooperative behavior among a subset of agents (a so-called coalition) imposes positive externalities upon agents located in between different connected subsets of this coalition. Due to this second kind of externality, the core is, in general, empty. As a consequence, we restrict our attention to the non-cooperative core, i.e. the set of partitions which consist of one coalition and only singletons otherwise. The non-cooperative core imposes cost upper bounds for any coalition, which can be interpreted as a participation constraint that has to be satisfied by any cost distribution to be acceptable to all agents. In addition, we impose cost lower bounds, which are inspired by the aspiration welfare principle, i.e. no coalition of agents should have lower costs than it can secure for itself if all non-members of the coalition did not pollute the river at all. We show that the downstream incremental distribution, as introduced by Ambec and Sprumont (2002), is the only distribution simultaneously satisfying the non-cooperative core upper cost bounds and the aspiration lower cost bounds.

The existing literature on transboundary pollution in river basins mainly focuses on the case of two jurisdictions. Notable exceptions include Ni and Wang (2007) and Gengenbach et al. (2010).
Ni and Wang (2007) derive cooperative sharing rules for the costs of cleaning a river from two principles of international water law: Absolute Territorial Sovereignty (ATS) claims that every jurisdiction has exclusive rights to use the water on its territory, while Unlimited Territorial Integrity (UTI) expands these exclusive use rights to all water originating within and upstream of a respective jurisdiction. They adapt these principles to the case of pollution responsibility and derive axioms characterizing the two resulting cost sharing principles. They also show that these cost-sharing principles correspond to the Shapley value solutions of the corresponding cost-sharing games. However, Ni and Wang (2007) assume exogenously given costs for cleaning the river. Thus, they are only concerned with the distribution of these costs. In contrast, pollution levels in our model are endogenously determined by the actions of the agents. Thus, we are concerned with finding cost sharing distributions that are acceptable to all agents and, at the same time, give incentives to choose efficient emission abatement levels in the first place.

In line with the literature on international environmental agreements, Gengenbach et al. (2010) model river pollution as a two-stage cartel-formation game. In the first stage, agents decide whether to join a coalition, while pollution abatement levels are chosen in the second stage. In the absence of a supranational authority, abatement levels are in general inefficiently low, as all agents have an incentive to free ride on the abatement efforts of their upstream neighbors. Analyzing the formation of stable coalitions, they find that the location of agents has no impact on coalition stability but rather impacts environmental outcomes. In contrast to Gengenbach et al. (2010), we employ a cooperative game setting.

In fact, our paper is most closely related to Ambec and Sprumont (2002) and Ambec and Ehlers (2008), who apply an axiomatic cooperative game theoretic approach to the efficient sharing of water along a river basin. In Ambec and Sprumont (2002), agents derive strictly increasing benefits from water consumption; Ambec and Ehlers (2008) generalize the results to agents who may exhibit satiation in water consumption. Ambec and Ehlers (2008) show that the downstream incremental distribution is the only welfare distribution satisfying the non-cooperative core lower bounds and the aspiration welfare upper bounds.

Several other papers propose alternative sharing rules to the downstream incremental distribution in settings similar to the one proposed by Ambec and Sprumont (2002). Interpreting the river sharing problem as a line-graph game, Van den Brink et al. (2007) derive four different efficient solutions, including the downstream incremental distribution, by imposing various properties with respect to deleting edges of the line-graph. However, they do not address fairness issues and consider non-satiable agents. Allowing for multiple springs and agents satiable with respect to water consumption, Van den Brink et al. (2012) propose a class of weighted hierarchical welfare distributions based on the Territorial Integration of all Basin States (TIBS) principle, which includes the downstream incremental distribution as a special case. Ansink and Weikard (2012) concentrate on reallocations of the resource itself instead of the reallocation of welfare by an appropriate transfer scheme.
In case of water scarcity, the agents' overlapping claims to river water render it a contested resource similar to a bankruptcy problem. They propose a class of sequential sharing rules based on bankruptcy theory and compare them to other sharing rules, including the downstream incremental distribution. Demange (2004) considers hierarchies without externalities and shows that the hierarchical outcome satisfies the core bounds for all connected coalitions for all super-additive cooperative games. However, the hierarchical outcome may violate core bounds for non-connected coalitions. If the hierarchy is a river, then the hierarchical outcome corresponds to the counterpart of the downstream incremental distribution.

Our paper can be interpreted as a generalization of the results of Ambec and Ehlers (2008) to commodities with public good properties. While water consumption is a purely private good, emission abatement exhibits public good characteristics, as net emissions impose negative externalities on all downstream agents. These additional externalities impose non-trivial complications for proving that the downstream incremental distribution satisfies the non-cooperative cost upper bounds and the aspiration cost lower bounds in the formulation of our river pollution model.

A River Sharing Model with Downstream Pollution Externalities

Consider a set of agents N = {1, ..., n} located along a river. Without loss of generality, agents are numbered from upstream to downstream, i.e. i < j indicates that agent j is located downriver of agent i. We follow the river structure of Ambec and Sprumont (2002) and Ambec and Ehlers (2008). Each agent i along the river produces gross emissions in the exogenously given amount e_i. An agent i may choose to abate the amount x_i, with 0 ≤ x_i ≤ e_i, the costs of which are given by the strictly increasing, twice differentiable and strictly convex abatement cost function c_i(x_i). Without loss of generality, we assume that abating nothing induces no abatement costs, i.e. c_i(0) = 0. Net emissions e_i − x_i are passed into the river, where they accumulate and are carried along its course. Assuming that net emissions of agent i are discharged into the river after agent i's but before agent i+1's location, and that there is no pollution at the river's source, the ambient pollution level q_i at the location of agent i is given by the sum of net emissions of all strict predecessors of agent i:

q_i = Σ_{j ∈ P_i\i} γ_ji (e_j − x_j),   (1)

with 0 < γ_ji ≤ 1, where P_i\i denotes the set of strict predecessors of agent i. γ_ji represents the assimilative capacity of the river, i.e. what fraction of the net emissions released by agent j actually reaches agent i. As the vector of abatement efforts x = (x_1, ..., x_n), together with the vector of exogenously given emissions e = (e_1, ..., e_n), fully determines the vector of ambient pollution levels, we shall often write the ambient pollution levels as a function of the vector x: q_i = q_i(x).

The ambient pollution level q_i causes damage costs to agent i, the amount of which is given by the increasing, twice differentiable and convex damage cost function d_i(q_i). Thus, the net emissions e_i − x_i released by agent i induce negative externalities for all downriver agents j > i, but not for agent i himself or any upstream agent j < i. The total costs k_i agent i faces are the sum of abatement and damage costs:

k_i = c_i(x_i) + d_i(q_i).

A river sharing problem is characterized by (N, e, c, d), where c = (c_1, ..., c_n) and d = (d_1, ..., d_n) denote the vectors of abatement and damage cost functions.
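To fix ideas, the following minimal sketch instantiates a river sharing problem (N, e, c, d) numerically. The number of agents, the geometric decay γ_ji = γ^(i−j), and the quadratic cost and damage functions are illustrative choices for this sketch, not assumptions of the model itself.

```python
import numpy as np

# Minimal numerical sketch of a river sharing problem (N, e, c, d).
# Illustrative specification: n = 4 agents, geometric decay
# gamma_ji = gamma**(i - j), abatement costs c_i(x) = 0.5 * a_c[i] * x**2
# and convex damages d_i(q) = b_d[i] * q**2.

n = 4
e = np.array([1.0, 1.0, 1.0, 1.0])    # gross emissions e_i
a_c = np.array([1.0, 2.0, 1.5, 1.0])  # abatement cost coefficients
b_d = np.array([0.5, 0.5, 1.0, 2.0])  # damage cost coefficients
gamma = 0.8                           # per-step assimilative capacity

def pollution(x):
    """q_i: sum over strict predecessors j < i of gamma**(i-j) * (e_j - x_j)."""
    q = np.zeros(n)
    for i in range(n):
        for j in range(i):
            q[i] += gamma ** (i - j) * (e[j] - x[j])
    return q

def total_costs(x):
    """k_i = c_i(x_i) + d_i(q_i): abatement plus damage costs of each agent."""
    q = pollution(x)
    return 0.5 * a_c * x ** 2 + b_d * q ** 2

x_hat = np.zeros(n)         # no abatement: the dominant strategy of Section 2
print(total_costs(x_hat))   # each agent's total costs in that outcome
```

Note that agent 1 (index 0) faces no pollution in this specification, mirroring the assumption of no pollution at the river's source.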
Given a river sharing problem, the distribution of total costs k_i among all agents i is determined by the emission abatement allocation x. Our assumptions about the accumulation of emissions along the river, as described in the previous paragraph, imply the following proposition.

Proposition 1 (No abatement is dominant strategy) Given a river sharing problem (N, e, c, d) and given the emission abatement levels of all agents j ∈ N\i, it is a dominant strategy for agent i not to abate at all, i.e. x_i = 0.

Proof: The damage costs of agent i only depend on q_i, which is not influenced by x_i. As costs c_i are strictly increasing in the amount of emission abatement x_i, given q_i, total costs are minimized by setting x_i = 0.

Proposition 1 states that agents who only consider their own total costs will never abate. In particular, this implies that if the river sharing problem (N, e, c, d) is considered to be a non-cooperative game among the agents i ∈ N, the unique Nash equilibrium is given by x̂_i = 0 for all i ∈ N (no matter whether agents are considered to decide sequentially or simultaneously). However, this outcome is, in general, inefficient. In particular, if we assume that money transfers between agents are possible and agents have unbounded resources for such transfers, the efficient emission abatement allocation x* minimizes the sum of total costs k_i among all agents. The following proposition establishes that such an allocation exists and is also unique.

Proposition 2 (Existence and uniqueness of efficient allocation) Given a river sharing problem (N, e, c, d), there exists a unique vector x* which is the solution to the following constrained minimization problem:

min_x Σ_{i∈N} [c_i(x_i) + d_i(q_i(x))]  subject to  0 ≤ x_i ≤ e_i for all i ∈ N.

Proof: Existence and uniqueness follow directly from the strict convexity of the total costs.

Let t_i denote the money payments. We impose Σ_{i=1}^{n} t_i = 0 and define agent i's after-transfer costs z_i as:

z_i = c_i(x*_i) + d_i(q_i(x*)) + t_i.

Obviously, any vector z = (z_1, ..., z_n) with Σ_{i∈N} z_i = Σ_{i∈N} k_i(x*) can be achieved by an appropriate choice of transfers. In the following, we call any efficient cost distribution a river sharing agreement. The main problem will be which one to choose among this infinite set.

Coalitions and Cost Upper Bounds

A non-empty subset of agents S ⊂ N is called a coalition if the agents of S choose their emission abatements such as to minimize the sum of total costs among all coalition members. Denoting by min S and max S the most upstream, respectively the most downstream, member of coalition S, the coalition S is connected or consecutive if all agents j with min S < j < max S are also members of the coalition S. We define the secure costs v(S) of a coalition S as the minimum value of the sum of the total costs k_i over all members of the coalition:

v(S) = Σ_{i∈S} [c_i(x^v_i(S)) + d_i(q_i(x^v(S)))],

where x^v(S) minimizes Σ_{i∈S} k_i subject to 0 ≤ x_i ≤ e_i for all i ∈ S, given the abatement choices x_j of all non-members j ∉ S (condition (7d)).

It is obvious from the above definition that both the allocation of abatement efforts x^v(S) and the secure costs v(S) of the coalition S depend, in general, on the behavior of the agents not belonging to the coalition S. As an example, consider the coalition S = {k, ..., n}. In particular, the pollution level q_k (but also the pollution levels q_i with i > k) depends on the amount of emission abatement undertaken by the agents i with i < k. According to Proposition 1, if these agents i < k only minimize their own sum of abatement and damage costs, they would not abate at all, implying a pollution level of q_k = Σ_{j∈P_k\k} γ_jk e_j. If, however, the agents 1 to k−1 form a coalition T and minimize their joint total costs, they will, in general, choose x_j > 0 for at least some j ∈ {1, ..., k−1}.
This implies a pollution level of q_k < Σ_{j∈P_k\k} γ_jk e_j, which reduces the minimal costs v(S) the coalition S can secure for itself. Thus, analogously to Ambec and Ehlers (2008), cooperation exerts a positive externality on the coalition S.

In the following, we restrict our attention to the non-cooperative core, i.e. we assume that all non-members of a coalition S behave non-cooperatively, which according to Proposition 1 implies that they do not abate at all. Then, condition (7d) is replaced by x_j = 0 for all j ∉ S, and the secure costs v(S) of a coalition S are well defined and unique (as the resulting optimization problem is a subproblem of the one analyzed in Proposition 2). The reason is like in Ambec and Ehlers (2008): the structure of the river sharing problem (N, e, c, d), as described in detail in Section 2, is such that only the non-cooperative core is guaranteed to be non-empty.

Like Ambec and Ehlers (2008), we impose the secure costs as the participation constraint of any coalition S. A coalition S will only agree to a river sharing agreement if it is not worse off than without the agreement. Thus, a river sharing agreement should at most assign the secure costs v(S) to any coalition S, as otherwise the coalition would block the agreement knowing that it can achieve at least v(S) on its own. Hence, v(S) defines cost upper bounds for any coalition S that a river sharing agreement must satisfy in order not to be blocked.

Cost Lower Bounds

Ambec and Ehlers (2008) also impose welfare upper bounds that are inspired by the unlimited territorial integrity (UTI) doctrine. In case of water consumption, UTI claims that all agents are entitled to consume the full stream of water originating upstream from their location and, thus, have a legitimate claim to the corresponding welfare level such a consumption generates. As such claims are, in general, incompatible if water is scarce, Ambec and Sprumont (2002) and Ambec and Ehlers (2008) interpret them as welfare upper bounds agents may legitimately aspire to.

The straightforward translation of these aspiration welfare upper bounds to the case of our river pollution model is to define the minimal costs a coalition S can ensure if all non-members of the coalition would abate all their emissions, and thus, not pollute the river at all. Formally, these cost lower bounds a(S) are given by:

a(S) = Σ_{i∈S} [c_i(x^a_i(S)) + d_i(q_i(x^a(S)))],

where x^a(S) = (x^a_1(S), ..., x^a_n(S)) denotes the solution to

min_x Σ_{i∈S} [c_i(x_i) + d_i(q_i(x))]  subject to  0 ≤ x_i ≤ e_i for all i ∈ S and x_j = e_j for all j ∉ S.

The cost lower bounds a(S) can be interpreted as a fairness condition: no coalition S should enjoy lower costs than the costs it could secure for itself if all non-members of the coalition would not pollute the river at all.

The Downstream Incremental Distribution

Like in Ambec and Sprumont (2002) and Ambec and Ehlers (2008), there is a connection between the non-cooperative core upper bounds v(S) and the cost lower bounds a(S): for the coalition of all predecessors of agent i they coincide, i.e. v(P_i) = a(P_i). Thus, for any coalition of predecessors P_i it is clear that the only river sharing agreement satisfying both the cost upper and cost lower bounds is the so-called downstream incremental distribution (DID), defined by

z*_i = v(P_i) − v(P_i\i)  for all i ∈ N.

The DID assigns every agent his marginal contribution to the coalition composed of his predecessors along the river. As a consequence, the DID is the only candidate for a river sharing agreement that at the same time satisfies the non-cooperative core upper bounds v(S) and the cost lower bounds a(S) for any coalition S.
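Before turning to the theorem, the sketch below computes v(S), a(S), and the DID for the illustrative specification from the earlier code (it reuses total_costs, e, n and np defined there). The optimization approach and parameter values are assumptions of this sketch; note that for the predecessor coalitions P_i used in the DID, downstream non-members do not affect the coalition's pollution, so v(P_i) is straightforward to evaluate.

```python
from scipy.optimize import minimize

def coalition_costs(S, outside_x):
    """Minimal joint costs of coalition S, holding non-members' abatement fixed.

    With outside_x = 0 this yields the secure costs v(S) of the
    non-cooperative core (non-members do not abate, Proposition 1); with
    outside_x = e it yields the aspiration bounds a(S) (non-members abate
    all their emissions).
    """
    members = sorted(S)
    if not members:
        return 0.0
    def objective(xs):
        x = outside_x.copy()
        x[members] = xs
        return total_costs(x)[members].sum()   # members' costs only
    res = minimize(objective, x0=np.zeros(len(members)),
                   bounds=[(0.0, e[i]) for i in members])
    return res.fun

def v(S):   # non-cooperative core cost upper bounds
    return coalition_costs(S, np.zeros(n))

def a(S):   # aspiration cost lower bounds
    return coalition_costs(S, e.copy())

# Downstream incremental distribution: z*_i = v(P_i) - v(P_i \ i),
# with P_i = {0, ..., i} (agent i and its predecessors, 0-indexed).
z_star = np.array([v(range(i + 1)) - v(range(i)) for i in range(n)])
print(z_star, z_star.sum() - v(range(n)))   # telescoping: the sum equals v(N)
```

The final print illustrates that the DID is efficient by construction: summing the marginal contributions telescopes to the grand coalition's minimal costs v(N).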
The following theorem establishes that the DID, in fact, satisfies the non-cooperative core upper bounds v(S) and the cost lower bounds a(S) for any coalition S.

Theorem 1 The downstream incremental distribution (DID) z* is the only river sharing agreement satisfying the non-cooperative core upper bounds v(S) and the cost lower bounds a(S) for any coalition S.

Proof: The proof is split into three parts. In the first part, we show that the DID satisfies the non-cooperative core upper bounds for any coalition S. In part two, we prove that the DID also satisfies the cost lower bounds for any coalition S and, finally, in the third part, we show that any river sharing agreement that satisfies the cost upper and lower bounds for an arbitrary coalition S is identical to the DID.

We prove that the DID satisfies the non-cooperative core upper bounds for any coalition S by induction. The idea is that any coalition S can be created from the grand coalition N by successively deleting the non-members of S, generating a sequence of intermediate coalitions S_z = N, S_{z−1}, ..., S_0 = S. For the first part of the proof we need the following proposition, the proof of which is given in the Appendix.

Proposition 3 For any T ⊂ N with min T > j and any j ∈ N the following inequality holds:

v(P_j ∪ T) − v(P_j\j ∪ T) ≤ v(P_j) − v(P_j\j).   (11)

For the grand coalition N = S_z, the non-cooperative core upper bounds are satisfied. Now, suppose the DID satisfies the non-cooperative core upper bounds for some intermediate coalition S_j, i.e.

Σ_{i∈S_j} z*_i ≤ v(S_j).   (12)

We generate the intermediate coalition S_{j−1} by deleting the non-member m_j from the intermediate coalition S_j. By construction, the intermediate coalition S_{j−1} consists of all strict predecessors of agent m_j and all agents i > m_j who belong to the coalition S. Rearranging inequality (12) and applying the definition of the DID implies

Σ_{i∈S_{j−1}} z*_i ≤ v(S_j) − z*_{m_j} = v(S_j) − v(P_{m_j}) + v(P_{m_j}\m_j).

We have to show that the DID satisfies the non-cooperative core upper bounds for the intermediate coalition S_{j−1}, i.e. that

v(S_j) − v(P_{m_j}) + v(P_{m_j}\m_j) ≤ v(S_{j−1}).

Rearranging this inequality yields

v(S_j) − v(S_{j−1}) ≤ v(P_{m_j}) − v(P_{m_j}\m_j).   (15)

If the coalition S does not have any members i > m_j, then the inequality is trivially satisfied, as then S_j = P_{m_j} and S_{j−1} = P_{m_j}\m_j. Otherwise, define the set T consisting of all members i of the coalition S with i > m_j. Then, S_j = P_{m_j} ∪ T and S_{j−1} = P_{m_j}\m_j ∪ T, and by virtue of Proposition 3, inequality (15) holds.

For the second part of the proof, the following proposition is needed.

Proposition 4 For any S ⊂ T ⊂ N and i ∉ S, T the following inequality holds:

a(S ∪ i) − a(S) ≤ a(T ∪ i) − a(T).

The proof of Proposition 4 is given in the Appendix. To show that the DID satisfies the cost lower bounds for any coalition S, we employ v(P_i) = a(P_i) to rewrite the definition of the DID:

z*_i = v(P_i) − v(P_i\i) = a(P_i) − a(P_i\i).

Summing up over all agents i ∈ S and employing Proposition 4 yields

Σ_{i∈S} z*_i = Σ_{i∈S} [a(P_i) − a(P_i\i)] ≥ Σ_{i∈S} [a(P_i ∩ S) − a((P_i ∩ S)\i)].

The right hand side of the inequality simplifies to

Σ_{i∈S} [a(P_i ∩ S) − a((P_i ∩ S)\i)] = a(S).

Thus, we obtain

Σ_{i∈S} z*_i ≥ a(S),

which proves that the DID satisfies the cost lower bounds for any coalition S.

Finally, we prove that the DID is the only river sharing agreement that simultaneously satisfies the cost upper and lower bounds for any coalition S. Therefore, we have to show that whenever a river sharing agreement z satisfies both the cost upper and lower bounds, then for each agent i it holds that z_i = z*_i. Again, the proof is by induction. Similar to Ambec and Ehlers (2008), for agent 1, any river sharing agreement z fulfilling both constraints satisfies v({1}) ≥ z_1 ≥ a({1}). As v({1}) = a({1}), this implies z_1 = z*_1. Now, suppose that z_i = z*_i holds for all agents i upstream of some agent j, i.e. i ≤ j < n. Summing up over all i ∈ P_j, we obtain

Σ_{i∈P_j} z_i = Σ_{i∈P_j} z*_i = v(P_j).

As v(P_{j+1}) = a(P_{j+1}) and because any river sharing agreement z satisfies both the cost upper and lower bounds, Σ_{i∈P_{j+1}} z_i = v(P_{j+1}) = a(P_{j+1}) has to hold.
Hence,

z_{j+1} = Σ_{i∈P_{j+1}} z_i − Σ_{i∈P_j} z_i = v(P_{j+1}) − v(P_j) = z*_{j+1}.

Therefore, the cost distribution z is identical to the DID.

Theorem 1 is the exact counterpart to Theorem 1 of Ambec and Ehlers (2008). However, it is neither obvious nor straightforward to prove that the DID is the only distribution satisfying the cost upper and lower bounds in case of our river pollution model. The main challenge in Ambec and Ehlers (2008) arose from the fact that cooperation among agents imposes positive externalities on any coalition S. As a consequence, the welfare level a coalition could secure for itself crucially depends on the partition of all non-members. The same is true for our river pollution model. Cooperative behavior among non-members of a coalition S induces, in general, positive abatement levels, which benefits the members of the coalition. In contrast to Ambec and Sprumont (2002) and Ambec and Ehlers (2008), however, the decision variable in our model is emission abatement, not water consumption. While water consumption only benefits the consumer and, thus, is a purely private commodity, emission abatement is not. In fact, in our model emission abatement does not benefit the abating agent but only all downstream agents, as it reduces the river's downstream pollution level. Thus, emission abatement imposes positive downstream externalities, i.e. pollution abatement is a commodity with public good properties. This is also reflected in the agents' welfare: agents' welfare in the water consumption models of Ambec and Sprumont (2002) and Ambec and Ehlers (2008) is simply given by some benefit function b_i(x_i), which depends on the water consumption x_i of agent i. In our model, the costs agent i faces consist of two parts: first, the abatement costs c_i(x_i), which only depend on the emission abatement of agent i, and second, the damage cost function d_i(q_i), depending on the pollution level q_i, which itself is a function of the emission abatement levels of all upstream agents.

Discussion and Extensions

The model detailed in Section 2 relies on a number of assumptions which can be relaxed without impairing the statement of Theorem 1. First, we assumed that there is no initial pollution at the source of the river and that the net emissions of agent i do not harm agent i himself but only all downstream agents. As a consequence, agent 1 does not face any pollution, and the specification of agent 1's damage function d_1 is optional. The first assumption simplified the specification of the pollution level q_i, while the latter assumption implied that in the non-cooperative Nash equilibrium no agent would abate at all. However, the proof of Theorem 1 does not draw on these assumptions and would still be valid if the pollution level agent i faces were defined as

q_i = γ_0i q_0 + Σ_{j∈P_i\i} γ_ji (e_j − x_j),

where q_0 denotes an initial pollution level at the source of the river (and γ_0i the fraction of it reaching agent i).

Second, we framed the model as a pollution abatement model. Obviously, emissions and the corresponding pollution levels are prime examples of downstream externalities, yet there are many other contexts to which our model is applicable. As an example, think of the case of flooding. Then, e_i corresponds to the water discharges from the territory of agent i into the river, x_i denotes the amount of water agent i withdraws from the stream (e.g. by the controlled flooding of designated flooding areas), and q_i is the amount of excess water at agent i's location.
In this interpretation, it would also be reasonable to assume that the water withdrawn x_i is not limited by the discharge e_i but could sum up to the total amount of excess water in the river basin, i.e. 0 ≤ x_i ≤ q_i + e_i. These modifications would also not impact the validity of Theorem 1.

Third, particularly in case of flood protection, agents may have different means of protection. While the withdrawal of water induces costs to agent i and benefits all his downstream agents, there are other protection techniques which are purely private goods. As an example, consider that agent i could build a levee that protects his own territory from flooding but does not induce any positive externalities to the downstream agents. Then, the damage to agent i does not only depend on the total amount of water q_i but also on the agent's private protection measures m_i, with damage costs d_i(q_i, m_i). Assuming that an interior solution is optimal, i.e. m*_i > 0, the optimal level of private protection m*_i(q_i) is given by the solution of the first-order condition ∂d_i(q_i, m_i)/∂m_i = 0. Thus, we can re-write d_i(q_i, m_i) as d_i(q_i, m*_i(q_i)). Whenever these newly specified damage functions d_i(q_i, m*_i(q_i)) are increasing, twice differentiable and convex in q_i, we are back at the model specification introduced in Section 2.

Conclusion

We showed that the main result of Ambec and Ehlers (2008), namely that the downstream incremental distribution is the only welfare distribution that satisfies the non-cooperative core bounds and the aspiration welfare bounds simultaneously, can be generalized to the case of commodities with public good characteristics. Like their water consumption model, our river pollution problem is a cooperative game with externalities, since cooperation among non-members imposes a positive externality on the members of any coalition S. However, our model comprises an additional source of externalities, because the emissions discharged into the river induce negative externalities on all downstream agents. In addition, our results are robust to various extensions of our baseline model.

Appendix

Proof of Proposition 3: Set S_j = P_{m_j} ∪ T and S_{j−1} = P_{m_j}\m_j ∪ T. Let us parameterize the damage functions of the agents j > m_j with a parameter α ∈ [0, ∞). Due to this parameterization, the secure costs v(S_{j−1}, α) of the intermediate coalition S_{j−1} now depend on the parameter α, as do the secure costs of the other coalitions involved. The parameterized counterpart of inequality (11) reads

v(S_j, α) − v(S_{j−1}, α) ≤ v(P_{m_j}, α) − v(P_{m_j}\m_j, α).   (A.2)

By showing that (A.2) holds for all α ∈ [0, ∞), it holds, in particular, for α = 1, and inequality (11) is satisfied. Hence, inequality (A.3) can be rewritten accordingly. Clearly, the resulting inequality is satisfied whenever the coalition-wise sums of abatement levels are ordered as stated in the following lemma.

Lemma 1 For any agent k ∈ S_{j−1}, S_j the following inequality is satisfied:

Σ_{l∈P_k} x^v_l(S_{j−1}, α) ≤ Σ_{l∈P_k} x^v_l(S_j, α).   (A.6)

Proof of Lemma 1: Consider two coalitions S and T = S ∪ m and an agent m ∉ S. Given this notation, inequality (A.6) changes to

Σ_{j∈P_k∩S} x^v_j(S, α) ≤ Σ_{j∈P_k∩T} x^v_j(T, α)  for all k ∈ S, T.

Let us prove Lemma 1 by contradiction, i.e. assume that

Σ_{j∈P_k∩S} x^v_j(S, α) > Σ_{j∈P_k∩T} x^v_j(T, α).   (A.7)

According to the parameterized minimization problem, the first-order conditions (A.8) for i ∈ T and (A.9) for i ∈ S have to be satisfied. Due to assumption (A.7), the right hand side of (A.8) for i ∈ T is higher than the right hand side of (A.9) for i ∈ S. This implies c'_i(x_i(T)) ≥ c'_i(x_i(S)) for all agents i ∈ S, T and thus, due to the characteristics of the cost function c_i(·), x^v_i(T) ≥ x^v_i(S) for all i. This, however, implies Σ_{j∈P_k∩S} x^v_j(S) ≤ Σ_{j∈P_k∩S} x^v_j(T). Therefore, by contradiction, the inequality Σ_{j∈P_k∩S} x_j(S) > Σ_{j∈P_k∩S} x_j(T) cannot hold.
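As a quick numerical sanity check (not a substitute for the proof above), Proposition 3 can be verified for particular sets under the illustrative quadratic specification from the earlier sketches, reusing the v function defined there. The sets chosen below are arbitrary.

```python
# Numerical spot-check of Proposition 3:
#   v(P_j ∪ T) - v(P_j \ j ∪ T)  <=  v(P_j) - v(P_j \ j),  with min T > j.
# Here j = 1 and T = {2, 3} (0-indexed agents); any sets with min T > j work.
P_j        = {0, 1}   # predecessors of agent j = 1, including j
P_j_less_j = {0}      # strict predecessors of j
T          = {2, 3}

lhs = v(P_j | T) - v(P_j_less_j | T)
rhs = v(P_j) - v(P_j_less_j)
print(lhs, rhs, lhs <= rhs + 1e-8)   # expect True
```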
Proof of Proposition 4: For the proof of Proposition 4 the following lemma is required.

Lemma 2 For any two coalitions S, T, the following relationships hold among the abatement levels of the allocations x^a(S), x^a(S ∪ i), x^a(T) and x^a(T ∪ i) and among the induced pollution levels q(x^a(S)), q(x^a(S ∪ i)), q(x^a(T)) and q(x^a(T ∪ i)), as depicted in Figure 1.

Figure 1. Damage functions d_j(q(x)) evaluated at the pollution levels q(x^a(S)), q(x^a(S ∪ i)), q(x^a(T)) and q(x^a(T ∪ i)); the relevant differences are denoted n and m.

Similarly,

d_j(q_j(x^a(T ∪ i))) − d_j(q_j(x^a(S ∪ i))) = m, with m ≤ 0, and
d_j(q_j(x^a(S))) − d_j(q_j(x^a(T))) = n, with n ≥ 0, for all j,

where |n| ≥ |m|, as depicted in Figure 2. Thus, we conclude that inequality (A.14) holds.

Lemma 4 Given the terms in subgroup II of (A.13), it holds that II ≥ 0.
Party Competition and Policy Liberalism

Abstract

Party competition is foundational to the study of modern politics, affecting outcomes as varied as policy choices, political participation, and the quality of representation. Scholars have long argued that increased levels of party competition are associated with more liberal policy making. By this logic, parties in close competition with one another try to expand their bases of support by catering to the desires of those who tend to abstain from the political process—the "have-nots." We extend this classic hypothesis by examining the relationship between competition and policy liberalism over several decades, articulating and testing a theory that suggests that party competition relates differently to social and economic policy liberalism. We find robust evidence that increased competition has a positive relationship with economic policy liberalism, weaker evidence for a negative relationship between competition and social policy liberalism, and suggestive evidence that the direction and magnitudes of these relationships have changed over time.

Introduction

Political scientists have studied the development of political parties and their effect on policy making for decades (e.g., Campbell 1977; Downs 1957; Key 1949; Schattschneider 1960). The direction of public policy is often thought to depend, in part, on the party controlling government and the preferences of its electoral coalition. In some places and times, a single party is dominant and has little chance of losing office in the next election (e.g., Key 1949); in other places and times, the major parties routinely alternate power. Scholars term the vigor with which parties alternate power party competition.

Party competition is one of the most important concepts in the study of state politics. Ranney (1965) claims that "of all the variables studied in the analysis of state party politics, the one receiving the most attention from political scientists has been 'inter-party competition'" (63). Similarly, Jewell (1982) calls the concept "[o]ne of the most important dimensions along which states differ" (6). Perhaps the most significant hypothesis involving party competition and policy making in the American states is its relationship to the ideological leaning of the laws produced. The conventional wisdom, dating back to Key (1949) and Schattschneider (1960), establishes a crystal clear prediction: higher competition should be associated with more liberal policies. This relationship has received widespread support, leading one prominent scholar to go as far as to state that "Expectations about the effects of competition on policy-making are unambiguous" (Barrilleaux 1997, 1462). Unfortunately, most tests of this theory have relied on data encompassing only single policy areas or relatively short time periods. Moreover, these studies tend to treat state policy making as homogeneous, either because they generalize from a single issue area or because they rely upon measures that do not distinguish among types of state policies.

This paper reexamines the relationship between policy liberalism and party competition, building on previous scholarship in two ways. First, we take advantage of new advances in data availability to examine the relationship between these two concepts over a longer time horizon and across a broader array of policy areas. Most existing studies look only at a decade or two and at a small slice of economic policy.
Our study, by contrast, examines the effects of party competition on economic and social policy liberalism since the Great Depression, enabling us to probe the generalizability of this theory over time and across issue domains. Second, we expand on earlier theories by distinguishing between social and economic policy, exploring the effect of party competition on each of these policy areas separately. Key's (1949) landmark study focuses on economic policy, finding higher levels of interparty competition are associated with the expansion of the electorate, bringing routine nonvoters to the ballot box. These nonvoters tend to be society's "have-nots," people with low incomes whose interests are traditionally excluded from policy creation by their more economically advantaged neighbors. In contrast to Key, we also explore the effect of competition on social policy. We argue that, while party competition has the positive effect on economic liberalism that Key (1949), Schattschneider (1960), and Barrilleaux (1997) suggest, it has the exact opposite effect on social liberalism: more competitive electoral environments lead to less social policy liberalism. This expectation is based upon the fact that class-level preferences on social policy cross-cut those on economic issues; higher income Americans tend to have socially liberal but economically conservative preferences, while the converse is generally true of lower income Americans (Flavin 2011; Rigby and Wright 2013). Because the haves and have-nots tend to have divergent preferences over these bundles of policy, we expect that higher levels of party competition will have different effects on social and economic policy liberalism.

This paper is organized as follows: first, we review the literature on policy making and competition. Next, we develop a theory explaining why party competition is associated with more liberal economic policy and more conservative social policy. Third, we discuss the data we bring to bear on this question and the methods of analysis we employ. Fourth, we examine the relationship between party competition and policy liberalism from multiple perspectives. We conclude that party competition is related to more liberal economic policies and more conservative social policies; however, the link between competition and social policy is less robust than its relationship with economic policy. We end by discussing the practical implications of these results and the impact they have on our theory.

Making Policy in the American States

Commonly, four factors predominate in explanations of state-level policy making: public opinion (Erikson, Wright, and McIver 1989, 1993), the ideology of the lawmakers serving in government (Entman 1983; Erikson, Wright, and McIver 1989, 1993), the party controlling the policy making process (Dye 1984; Erikson, Wright, and McIver 1989, 1993; Garand 1985), and the degree of competition between these parties (Barrilleaux 1997, 2000; Davies and Worden 2009). First, regarding public opinion, states with more liberal citizens are more likely to produce liberal policies (Erikson, Wright, and McIver 1993). Second, the ideology of legislators affects their roll call votes; liberal members tend to vote for liberal policies (Entman 1983). While these first two factors are fairly robust predictors of the ideological valence of state policy, the relationship between party control, party competition, and policy making is less clear.
Some research demonstrates Democratic control of government corresponds to more liberal policies (Dye 1984; Garand 1985), while others find the opposite (Erikson, Wright, and McIver 1989). To square these disparate conclusions, Barrilleaux (1997, 2000) examines the interactive effect of competition and partisanship on policy making. These studies demonstrate that party competition increases policy liberalization and that, absent interparty competition, both Democrats and Republicans are unlikely to modify their positions (Barrilleaux 1997, 2000).

The importance of party competition for effective governance has been underscored by several scholars. Ranney (1965) argues that:

Most writers on the subject of state politics believe that a state's competitiveness is significantly related to other characteristics of its parties and politics … they generalize that the state parties facing the closest competition are likely to have the most centralized control of nominations, and the highest cohesion in state legislatures and in gubernatorial-legislative relations. Consequently, they are likely to be the most effective and responsible governing agencies (63).

Ranney does not stand alone. Aldrich (2011) draws a connection between party competition and the quality of governance: "[t]he South was solidly Democratic for a century, machines ruled in many cities and in some rural areas, and in such areas of one-party dominance there was for long periods effectively no competition for office by the opposing party. Thus articulation, aggregation, and accountability were all lost" (13, see also Aldrich and Griffin (2018)). Scholars have long believed that high levels of interparty competition will incentivize parties to improve the quality of representation they provide.

Many of these conclusions are generated, however, by examining a single policy area or by examining many laws during only a single year. For example, Dye (1984) and Barrilleaux, Holbrook, and Langer (2002) examine the relationship between electoral competition and welfare policy, while Erikson, Wright and McIver (1993) examine the correlation across a cross-sectional measure of state policy liberalism.

It is important to separate the desires of individual candidates and legislators from the goals of the party. While parties are both exogenous from, and endogenous to, candidates and policy makers, they have a duty to maintain their strength and assemble majority coalitions. Individual lawmakers may be incentivized to run toward their electoral base to secure future campaign resources, but parties must craft an agenda and platform that resonates with the state public at large. While both parties are responsible for pushing policy in a decidedly ideological direction (Dye 1984; Erikson, Wright, and McIver 1989, 1993; Garand 1985), they must advance an agenda that appeals to the whole state or risk all of their candidates appearing out of touch.

While each of these studies has represented an important advance in our understanding of state policy making, studies relying on single policy areas or single points in time raise questions of generalizability across policies or time. We are fortunate that new, dynamic measures of state policy liberalism exist. Caughey and Warshaw (2016) develop a measure of policy liberalism based on nearly 150 different policies dating back to the 1930s that is comparable across states and years.
Caughey and Warshaw (2018) extend this data collection effort, creating separate measures of social and economic policy liberalism. Caughey and Warshaw use separate measurement models to estimate state-year estimates of social policy liberalism and economic policy liberalism based on the presence or absence of particular policies in a given state-year. As a result, a state-year's estimate for one type of liberalism does not affect that state-year's estimate for the other type of liberalism. The availability of new data allows us to reexamine these questions while differentiating between types of policies.

Why Competition May Lead to Liberal Policy Making

Traditional expectations for the relationship between party competition and policy liberalism see variation in electoral participation as the glue that binds competitive elections and liberal state policies. High levels of competition are associated with increased levels of electoral participation (Flavin and Shufeldt 2015); therefore, the argument goes, high rates of competition should increase the quality of representation by encouraging parties to expand the voters they target. Blais (2000), summarizing 32 studies across time, space, and method, writes that there is a "crystal clear" relationship between the closeness of an election and turnout: individuals are more likely to vote in close elections (60). Moreover, the effect of closeness is not limited to increased spending by candidates or parties on mobilization (Blais 2000).

Parties in close competition with one another are expected to increase their base by targeting voters they believe will turn out. In these cases, parties must choose between doubling down on the same policies their base already supports or modifying their position to attract the support of those who traditionally do not participate and whose interests may not currently be well-represented by either party. The traditional explanation linking competition and policy liberalism favors the latter strategy: parties respond to increasing competition by targeting the have-nots (Davies and Worden 2009; Downs 1957; Holbrook and Van Dunk 1993; Key 1949). These have-nots are marginal voters, the sort of people who are likely to turn out in competitive elections but not in landslide elections, and their policy preferences tend to be less represented under low party competition. As originally theorized, these are voters with low incomes who are not regularly participating in the electoral process. Key (1949) argues that:

Politics generally comes down, over the long run, to a conflict between those who have and those who have less. In state politics the crucial issues tend to turn around taxation and expenditure… [O]ver the long run the have-nots lose in a disorganized politics. They have no mechanism through which to act and their wishes find expression in fitful rebellions led by transient demagogues who gain their confidence but often have neither the technical competence nor the stable base of political power to effectuate a program (307).

Key is not alone. Indeed, Schattschneider (1960) contends that "one-party politics tends to strongly vest political power in the hands of people who already have economic power" (80). Areas with low competition tend to have conservative policies because elites have little need to represent the more liberal preferences of the have-nots. Increased electoral participation can remedy this conservative bias.
Key (1949) writes: "The have-have-not match is settled in part by the fact that substantial numbers of the have-nots never get into the ring. For that reason, professional politicians often have no incentive to appeal to the have-nots" (308). Because higher levels of competition tend to foster higher turnout elections, and those nonhabitual voters who only turn out in competitive elections tend to be have-nots, politicians facing high levels of competition are incentivized to appeal to a more liberal electorate than they would otherwise target.

More recent scholarship has validated Key's intuition. Hill and Leighley (1992) demonstrate there is a negative relationship between class bias in state electorates and redistributive state policies; more generous redistributive policies are associated with the participation of the poor in electoral politics. Hill, Leighley, and Hinton-Anderson (1995) examine the relationship between the mobilization of lower-class voters and the generosity of a state's welfare policy, demonstrating that higher rates of lower-class voting are associated with more generous welfare policies, exactly as Key suggested. Taken together, Key and others suggest that, because (a) party competition increases electoral participation and (b) those mobilized by this competition tend to be have-nots who favor liberal policies, (c) increased party competition should lead policy makers to pass policies that are more liberal than those they would have passed in the absence of such competition.

Is This Always True?

States pass a heterogeneous bundle of policies. Some issues are more salient than others to voters and, by extension, more important in electoral campaigns (Bianco 1994; Carmines and Stimson 1980; Carpini, Keeter, and Kennamer 1994; Jennings and Zeigler 1970; Kingdon 1966; Stokes and Miller 1962). One major distinction between policies concerns whether they affect economic or social issues. Existing explanations linking competition to policy liberalism tend to focus on either economic policy (e.g., Barrilleaux 1997) or a bundle of state policies (e.g., Erikson, Wright, and McIver 1993). This makes sense; in his original formulation of this expectation, Key (1949) refers specifically to economic policy. We extend this theory to the domain of social policy as well.

Social policy differs from economic policy in several important ways. First, much of a state's budget is constrained by its laws and constitution, level of indebtedness, amount of federal transfers, and economic health (Bunche 1991; Poterba 1994). While lawmakers may desire to move the budget strongly in one direction, they are often stymied by the economic realities and legal environment in which they operate. Second, social policy is easier for citizens to understand and take positions on (Carmines and Stimson 1980). These issues attract more attention, and parties may come to believe their electoral fortunes ride more heavily on these bills. While it is unlikely that citizens will take firm positions on the allocation of the state's budget, it is far more likely that they can express an opinion on a social concern (e.g., abortion) and remain invested in this position for a considerable length of time. Finally, there is empirical support for high degrees of policy responsiveness among social policies (Lax and Phillips 2009, 2012). While there is qualified support for policy responsiveness on economic issues (Pacheco 2013), the effect is stronger when examining social issues, like gay rights (Lax and Phillips 2009).
For these reasons, Caughey and Warshaw (2018) note that policy responsiveness to voters should be weaker on economic issues than social issues. Perhaps most importantly, however, class-level preferences on economic policy crosscut class-level positions on social issues. For example, Flavin (2011) reports that low-income citizens are 6% more likely to report a belief that abortion should be banned completely, compared to high-income citizens. Similarly, Ansolabehere, Rodden, and Snyder (2006) find that lower-class voters are more conservative on moral issues than upper-class voters. Gilens (2009) reaches a similar conclusion in his analysis of more than 1,700 survey questions; while low-income Americans are strong supporters of many redistributive policies, they tend to be more conservative on many social issues, including abortion policy, stem cell research, and gay rights. 3 In short, and as Rigby and Wright (2013) explain, "higher-income Americans tend to be more conservative than the poor on economic issues, but more liberal on social and moral issues" (554).

Figure 1 illustrates this relationship, plotting the smoothed distribution of economic and social attitudes by socio-economic status from Rigby and Wright (2013). Those authors measure average attitudes for citizens with low, middle, and high income (divided into equally sized groups by state income percentiles) for 47 states in the year 2000. Economic attitudes are distributed as expected: the citizens with the lowest socio-economic status (SES) have the most economically liberal attitudes, and the differences between the three income classes are fairly pronounced. For example, the most liberal respondents in the high-income group are about as liberal as the average respondent in the low-income group. On the social policy dimension, the ordering of the three groups is reversed. On issues of social policy, low-SES citizens have the most conservative attitudes. Importantly, however, the differences in social attitudes between the three groups are not as large as on the economic dimension. At the conservative end of the spectrum, the three groups are almost equally well represented, and while high-income respondents lean liberal, there is still a substantial number of them on the conservative side.

This cross-cutting opinion structure suggests that the relationship between overall policy liberalism and competition may differ between social and economic issues. Increased competition incentivizes politicians to appeal to marginal voters, and marginal voters tend to have preferences aligned with the have-nots in society. Moreover, have-nots have cross-cutting preferences on social and economic issues. Therefore, we expect:

H1: Increased party competition is associated with increased economic liberalism.

H2: Increased party competition is associated with decreased social liberalism.

3 This view is not unanimous. Soroka and Wlezien (2008) argue that "differences in preferences across income brackets are in fact small and insignificant" (309).

Research Design and Data

Testing these hypotheses requires repeated measures of policy liberalism and party competition for each state over time. We now describe how we operationalize these concepts, provide descriptive statistics for each, and explain our strategy for assessing our hypotheses. We test our hypotheses using measures of economic and social policy liberalism estimated by Caughey and Warshaw (2018).
These variables, which are the output of a latent variable model incorporating information on the adoption of nearly 150 policies between 1936 and 2014, range from −2.49 to 2.61 (Social) and −2.24 to 3.13 (Economic). Both variables have a mean of 0 and standard deviations of 0.85 (Social) and 0.93 (Economic). Figure 2 shows the median values of the two variables by year. Overall, there is no clear trend in the series by year (a bivariate regression yields p-values for the slopes of 0.10 (Social) and 0.26 (Economic)). Unsurprisingly, the two measures are highly correlated: r = 0.71.

Conceptually, there are some differences between the two series. Many of the economic policies that Caughey and Warshaw (2018) use to construct their measures are effectively dials that states can use to change their level of economic policy liberalism very gradually. For example, a state might adjust its tax on a pack of cigarettes, or the level of benefits from the Aid to Families with Dependent Children program, by a few percentage points. By contrast, social policies such as the death penalty, or the legality of same-sex marriage, are more akin to levers which are either on or off. Consequently, social policy moves more slowly within a state, but when changes do occur, they can be more erratic. The Caughey and Warshaw (2018) model includes both continuous and binary variables, but for some state-years, only binary social policy items exist. The model then has no continuing time series of an interval-level variable to "tether" itself to from year to year, and the standard errors for these estimates are larger.

To measure party competition, we turn to the Folded Ranney Index. The original (unfolded) Ranney Index measures the strength of the party in government by including the proportion of seats won by the Democratic party in the legislature, the percent of the vote received by the Democratic candidate for governor, and the percent of the time Democrats control both the executive and legislative branches (Ranney 1976). A single score is calculated by averaging these three items together over a number of years to account for the timing of gubernatorial and legislative elections as well as to smooth out high and low values that may be the product of one aberrant election cycle. The Ranney Index, therefore, measures the strength of the Democratic (or Republican) party in government. Scholars interested more explicitly in the level of competition between the parties for control of government have "folded" the Ranney Index over its midpoint to create a measure where higher values are associated with more competition and lower values with one-party dominance. The variable ranges from 0.50 to 1.00 with a median of 0.86 and a standard deviation of 0.13. 4

Our focus in this paper is party competition, rather than electoral competition. Party competition refers to the frequency with which the major parties alternate control of government. Ranney (1976), for example, measures this concept by assessing the number of seats a party holds in the legislative branch, the party's control of the executive branch, and the presence of unified government. Electoral competition, by contrast, refers to the vulnerability of a given legislator seeking reelection absent partisan considerations.
In early studies, this concept was operationalized by comparing the votes received by the winner to those of the candidate receiving the next most (Jewell and Breaux 1988; Weber, Tucker, and Brace 1991) or by aggregating the margin of victory to the state level (Anderson 1997; Berry, Berkman, and Schneiderman 2000). More recently, Holbrook and Van Dunk (1993) developed a state-level measure of electoral competition that incorporates the percent of votes received by the winning candidates, their margin of victory, the number of seats considered safe, and the number of races in which both major parties are running a candidate. While these two concepts are positively correlated, they are distinct conceptually (Shufeldt and Flavin 2012) and often have different effects on policy making (Barrilleaux 1997). We discuss the theoretical and empirical differences for the two concepts as applied to our theory and data analysis in the appendix.

Our unit of analysis is the state-year, and our dataset is a panel. Each of our outcome measures is continuous, so we rely on linear regression to test our hypotheses. We estimate a series of regression models using two-way state and year fixed effects to account for within-state and within-year confounding factors. To assess the robustness of our findings to various aggregations of the Folded Ranney Index, we estimate each model using 4-, 6-, 8-, and 10-year time aggregations for each measure of competition. Our data begin in the late 1930s (depending on the time aggregation) and end in 2010.

Because we employ fixed effects for state and year, the gravest threats to inference are confounding factors that vary within states over time and are related to both competition and policy liberalism. 5 Therefore, in addition to estimating the bivariate relationship between competition and policy liberalism, we also estimate another series of models that include lagged controls for several factors (public opinion, gubernatorial control, legislative control, and the percentage of Black citizens) that relate to both policy liberalism and competition and vary within states over time. First, Caughey and Warshaw (2018) provide measures of public opinion toward social and economic liberalism; we include those variables in the models that assess those concepts. Again, these variables are the output of a latent variable model; they range from −1.25 to 2.04 (Social) and −0.93 to 0.65 (Economic) with means of 0 and standard deviations of 0.47 (Social) and 0.22 (Economic). The two variables are not strongly correlated: r = −0.07. Second, to measure control of the state legislature, we rely on Klarner's (2013) measure of the proportion of state lower chamber seats held by the Democratic party. 6 This variable has a median of 0.57 and a standard deviation of 0.23. Third, to measure gubernatorial control, we rely on Klarner's (2013) measure. The variable takes a value of 0 for a Republican governor, 1 for a Democratic governor, and 0.5 for a nonmajor-party governor. The modal state during this time period has a Democratic governor. Finally, state-level demographic changes over time, coupled with the fact that party competition was low in southern states with a high proportion of Black citizens (e.g., Key 1949), demand that we account for the racial make-up of the state population. We use data from the US Census to control for the percentage of the state's population that identifies as Black. 7 As the data are only available in 10-year intervals, we interpolate linearly between them to get data for every state-year. In 1940, the average state population consisted of 9.3% Black citizens, compared to 11.2% in 2010. Mississippi boasts the highest percentage of Black citizens, with 49.3% in 1940 as the all-time high.

5 Given Angrist and Pischke's (2009) warning that a lagged dependent variable in a model with fixed effects can lead to biased coefficients, we do not include a lagged dependent variable in our models.

6 As a result, when we estimate models with these control variables, we drop state-years with nonpartisan legislatures.

7 We also estimated the model using the percentage of the state's population that identifies as nonwhite. The results are nearly identical.
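As a concrete illustration of the data preparation just described (folding the Ranney Index over its midpoint, aggregating it over t-year windows, and linearly interpolating the decennial census shares), here is a minimal pandas sketch. The column names and panel layout are assumptions of this example, not the authors' actual code.

```python
import pandas as pd

def prepare_panel(df, window=6):
    """df: one row per state-year with columns 'state', 'year', 'ranney'
    (0 = Republican dominance, 1 = Democratic dominance) and decennial
    'pct_black' values (NaN in non-census years)."""
    df = df.sort_values(["state", "year"]).copy()

    # Fold the Ranney Index over its 0.5 midpoint: 1.0 = perfectly
    # competitive, 0.5 = complete one-party dominance.
    df["folded_ranney"] = 1.0 - (df["ranney"] - 0.5).abs()

    # Smooth the folded index over a trailing t-year window, as in the
    # 4-, 6-, 8-, and 10-year aggregations described above.
    df["folded_ranney_t"] = (
        df.groupby("state")["folded_ranney"]
          .transform(lambda s: s.rolling(window, min_periods=window).mean())
    )

    # Linearly interpolate the decennial census shares to every state-year.
    df["pct_black"] = df.groupby("state")["pct_black"].transform(
        lambda s: s.interpolate(method="linear", limit_direction="both")
    )
    return df
```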
Results

We discuss our results in four stages. First, we examine the results for a series of bivariate regressions. Second, we account for a series of time-varying, within-state factors that might confound our analysis, demonstrating that our results hold. Third, we probe whether the effects of party competition on policy liberalism have remained constant since the Great Depression. Finally, to account for the fact that the effects of competition on policy liberalism may not be linear, we present the results of a nonparametric regression. 8

8 We include another analysis in the appendix that divides state-years based on the partisan control of their legislature. We find that, when one party controls the chamber (holding at least 60% of the seats), party competition pulls both economic and social policy in a countervailing direction. Among chambers controlled by Democrats, higher levels of competition are associated with more conservative social and economic policy. Among chambers controlled by Republicans, competition has the effect of moving both baskets of policies in a more liberal direction.

Bivariate Relationship

We begin by examining the bivariate relationship between party competition and social and economic policy liberalism. These results are shown in Figure 3. 9 The y-axis refers to the level of aggregation (the number of years, t) used to calculate the measure of competition. Each point shows the coefficient for competition in a linear regression model with state and year fixed effects. The shape of the points shows the dependent variable (social or economic liberalism) in the model.

Figure 3. Each point corresponds to the coefficient estimate for competition in a linear regression model. The figure includes 95% confidence intervals for each estimate.

The clearest conclusion from Figure 3 is that the measures of competition relate differently to the two dependent variables. Beginning with social liberalism (the triangles in Figure 3), the result of each linear regression model is unanimous: increased party competition is associated with less social liberalism. The size of these effects is moderate, with a change across the range of the 6-year competition measure resulting in a 0.25 unit decrease in social liberalism. This is comparable to living in Oklahoma versus Alabama in 2010. These findings stand in stark contrast to the conventional wisdom that competition leads to unambiguously more liberal state policies, suggesting that social conservatives are advantaged in times and places with high levels of party competition. Next, we examine the relationship between economic liberalism and competition, plotted with circular points.
Here, the evidence supports a positive relationship between party competition and economically liberal policies, especially for those models that use a longer time aggregation to calculate competition. The effect sizes are again substantively important, with a change across the range of 6-year competition resulting in a 0.11 unit increase in economic liberalism, roughly the difference between the economic policies of Nevada versus Florida in 2010. In short, the results provide strong support for the hypotheses we have outlined and suggest that Key's original theory holds for economic policy (which, admittedly, was the focus of Key's original analysis) but not for social policy.

Accounting for Potential Confounding Influences

Of course, the results in Figure 3 do not account for any possible confounding influences. 10 Figure 4 therefore reproduces the models in the previous figure, controlling for public opinion, gubernatorial partisan control, legislative partisan control, and the percentage of the state's citizens that identify as Black. Table 1 provides numeric regression results.

Figure 4. Each point corresponds to the coefficient estimate for competition in a linear regression model. The figure includes 95% confidence intervals for each estimate.

Looking at the lighter points in Figure 4, the relationships we observed in the previous section persist, and even strengthen, when accounting for the effects of public opinion, the composition of the legislature, the partisanship of the governor, and the racial make-up of the state's residents. There is robust evidence of a positive relationship between party competition and economic policy liberalism once we control for other factors. The size of these effects is substantial. For example, a change across the range of the 4-year Folded Ranney measure yields a 0.36 unit increase in policy liberalism, approximately the difference between Texas and Florida in 2010.

The same is not true for social policy liberalism. While the estimated coefficients continue to be negative (in line with our theory), the uncertainty surrounding those estimates does not enable us to conclude that these effects are distinguishable from a null effect. Perhaps because of the weaker relationship between social policy preferences and social class, or the fact that social policy liberalism is more slow-moving than economic policy liberalism, these results provide us with no evidence that social policy liberalism is related to party competition after accounting for a state's public preferences, racial make-up, and the partisan control of state government. While the presence of these control variables does not alter support for the first hypothesis, it does call into question the strength of the second hypothesized relationship.

The control variables generally perform as expected, providing additional evidence in support of the relationship between party competition and policy liberalism discussed above. More Democratic legislative strength is associated with more liberal social and economic policies. Democratic governors are associated with liberal economic policy, though the relationship between gubernatorial control and social policy is less apparent. The same is generally true for public opinion: more liberal publics tend to get more liberal policies. Similarly, states with a greater proportion of Black citizens tend to have more liberal social and economic policies.

Over-Time Effects

Party competition in America has ebbed and flowed with time.
Key's (1949) analysis of southern politics that first motivated scholars to consider the relationship between competition and policy making examined a place and time that is remarkable for extremely low levels of party competition. As a result, it is reasonable to wonder whether the results we have presented to this point are time-bound. To investigate whether the relationship between competition and policy liberalism is constant over time, we subsetted our sample repeatedly to include any 1 year and the 7 years before and after it. This procedure yields a number of datasets, each covering a moving 15-year window of time. For each dataset, we estimated the multivariate specification mirroring that shown in Table 1. Of course, subsetting the data so severely drastically reduces the sample size for any one regression. This, in turn, increases the amount of uncertainty around our regression estimates and their corresponding confidence intervals.

Figure 5 plots the over-time effect of party competition on economic (left-hand panel) and social (right-hand panel) policy liberalism. The value for any 1 year is the coefficient for party competition for the dataset using that year as its midpoint. The shaded parts of the plot correspond to the side of the plot that provides support for our hypothesis (positive effect of competition on economic policy, negative effect on social policy).

Figure 5. The shaded parts of the plot correspond to the side of the plot that provides support for our hypothesis (positive effect of competition on economic policy, negative effect on social policy). The values for any single year are the results of a model conducted on that year and the 7 years to either side. The figure shows that there is a considerable amount of variation in the results over time, but that, more often than not, they conform to our main model.

There is a considerable amount of over-time variation in the direction and magnitude of the effect of competition on policy liberalism. Party competition appears to have a positive effect on economic policy liberalism (left-hand panel) for the 1930s, 1980s, and 1990s. By contrast, increased party competition was associated with less economic liberalism during part of the 1950s and 1970s. While our expectations are supported for part of the time series, there are also periods where there is either insufficient, or even contrary, evidence regarding our hypothesis. The effect of party competition on social policy liberalism (right-hand panel) is not statistically significant for large portions of the time series. During the late 1930s and early 1940s, there is a positive effect, which defies our expectations. This largely corresponds to the results of the previous section, where we also find no statistically significant effect of party competition on social policy liberalism. However, this figure suggests that the relationship has grown stronger, and in line with our hypothesis, in recent years. Since 2000, the effect is indeed negative and statistically significant, as predicted by our theory.

In conclusion, this analysis suggests that the relationship between competition and policy liberalism may not be as straightforward as Key and others expected, even where economic policy liberalism is concerned. Rather than an unambiguous linear, positive effect of party competition on policy liberalism, there is considerable over-time vacillation in the effect. These nonlinearities suggest one reason why the multivariate models in the previous section, which are essentially an aggregation of these time-based models, presented a null result for the relationship between social policy liberalism and party competition.
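A compact sketch of this moving-window procedure may be helpful. It re-estimates the two-way fixed-effects specification on each 15-year window; it assumes the hypothetical state-year panel and column names from the earlier sketch, not the authors' actual code.

```python
import pandas as pd
import statsmodels.formula.api as smf

def rolling_window_effects(df, outcome, half_width=7):
    """For each midpoint year, fit the multivariate two-way fixed-effects
    model on that year plus the `half_width` years on either side, and
    record the coefficient on party competition with its 95% CI."""
    rows = []
    for mid in sorted(df["year"].unique()):
        sub = df[df["year"].between(mid - half_width, mid + half_width)]
        if sub["year"].nunique() < 2 * half_width + 1:
            continue  # skip truncated windows at the ends of the series
        fit = smf.ols(
            f"{outcome} ~ folded_ranney_t + opinion + dem_share_lower"
            " + dem_governor + pct_black + C(state) + C(year)",
            data=sub,
        ).fit()
        ci = fit.conf_int().loc["folded_ranney_t"]
        rows.append({"midpoint": mid,
                     "coef": fit.params["folded_ranney_t"],
                     "lo": ci[0], "hi": ci[1]})
    return pd.DataFrame(rows)
```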
Accounting for Possible Nonlinearities

All of the regression models presented so far require a strict assumption: the effect of party competition on policy liberalism is linear. Put differently, these models require us to assume that the effect of competition on liberalism remains constant regardless of whether the amount of party competition is small, medium, or large. 11 There are reasons to doubt this assumption. For example, the results of the previous section suggest that the relationship between competition and policy liberalism has changed dramatically over recent US history. Or, one might think that there are hidden threshold effects: party competition only has an effect up to a certain point, or competition only begins to affect policy liberalism once a state reaches a certain amount of competition.

To investigate the possibility of nonlinearities such as a threshold or even a direction change (which Figure 5 suggests may be possible), we rely on a nonparametric regression, a model which does not assume a linear relationship between competition and the outcome variable. Specifically, we fit a generalized additive model (GAM) with a smoothing function for the competition term (Wood 2011). 12 Compared to other approaches to nonparametric regression, such as local regression (Cleveland, Grosse, and Shyu 1992) or Kendall-Theil regression (Siegel 1982), this method allows us to include all constituent terms, including the fixed effects for state and year. We smooth only the competition term, while all other terms are estimated parametrically. We do so because we have no theoretical reason to expect that their relationships with the dependent variable might be nonlinear. 13

Figure 6 shows the predicted values of economic (left-hand panel) and social (right-hand panel) policy liberalism (y-axis), given a specific level of party competition (x-axis). 14 It is evident that strong nonlinearities are in fact present in both relationships. The general trend for economic policy liberalism (left-hand panel) is positive, in line with our theory: as party competition increases, there is generally an increase in predicted policy liberalism. However, there are important exceptions to this trend. For example, there is an unexpected negative relationship between party competition and economic policy liberalism for Folded Ranney Index values below 0.6: when one party completely dominates and then cedes a small amount of control to the opposition, economic policy is expected to become more conservative. 15 As party competition increases further, economic policy becomes more liberal. For values between 0.65 and 0.85 on the Folded Ranney Index, this relationship is almost linear and fully in line with our predictions. As competition reaches its highest level, the effect levels off. 16 Still, the highest level of party competition is in fact associated with the most liberal economic policies, as our theory predicts.
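The GAM itself was presumably fitted with penalized smooths in the spirit of Wood (2011), for example in R's mgcv; the fragment below is only a rough Python analogue under stated assumptions. It approximates the smooth term for competition with an unpenalized B-spline basis via patsy's bs(), keeps the fixed effects and controls parametric as described above, and reuses the hypothetical panel (`panel`) and column names from the earlier sketches.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Approximate the smoothed competition term with an unpenalized cubic
# B-spline basis; state and year fixed effects and the controls enter
# parametrically, mirroring the specification in which only the
# competition term is smoothed.
fit = smf.ols(
    "social_lib ~ bs(folded_ranney_t, df=6, degree=3)"
    " + opinion + dem_share_lower + dem_governor + pct_black"
    " + C(state) + C(year)",
    data=panel,
).fit()

# Trace predicted liberalism across the competition range, holding the
# controls at their means and fixing a reference state and year.
grid = pd.DataFrame({"folded_ranney_t": [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]})
for col in ["opinion", "dem_share_lower", "dem_governor", "pct_black"]:
    grid[col] = panel[col].mean()
grid["state"] = panel["state"].iloc[0]
grid["year"] = panel["year"].iloc[0]
print(fit.predict(grid))
```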
For social policy (Figure 6, right-hand panel), we observe an overall negative relationship between social policy liberalism and party competition, just as our theory predicts. Policy liberalism is predicted to be at its highest when party competition is extremely low (values below 0.6), and social policy liberalism is predicted to be much lower at higher values of competition. When one party enjoys complete dominance, so do its elites, who tend to hold socially liberal values. There is a slight uptick in social policy liberalism for values of competition between 0.7 and 0.8; for values above 0.8, the effect levels off. Evidently, it makes a much bigger difference to go from a state with complete one-party domination to one in which one party tends to win than to go from a reasonably competitive state to one with razor-thin margins and frequent changes of power (to a lesser extent, this is also true for economic policy liberalism). This figure illustrates the advantage of the nonparametric regression: while the linear model finds no evidence of a relationship between competition and policy liberalism because it has to calculate a linear trend across the entire range of the competition variable, the GAM shows a strong negative trend in predicted social policy liberalism as party competition moves from its lowest to its highest values. This is the relationship our theory predicts.

15 Such extreme levels of one-party domination predominantly occur in the mid-century South. Of the state-years with a Folded Ranney Index of less than 0.6, 95% are from the South, and 99% occur prior to 1980. At this range of the party competition variable, the model is also at its most uncertain, as indicated by the larger confidence interval.

Conclusion

Competition, between parties and in the electoral arena, is one of the most widely invoked concepts in the study of political science, especially the comparative study of state politics. Understanding the relationship between this concept and the ideological direction of state policy making is essential for a full understanding of state politics. In this paper, we reexamined the relationship between party competition and state policy liberalism. This relationship has a long history in the literature, dating back to Key's (1949) suggestion that "A loose factional system lacks the power to carry out sustained programs of action, which almost always are thought by the better element to be contrary to its immediate interests. This negative weakness thus redounds to the benefit of the upper brackets" (308). Scholars have long claimed a positive relationship between party competition and policy liberalism.

Our results, drawing upon a longer time period and a more expansive set of state policies than any existing study of this relationship, both confirm and challenge Key's intuition. First, to the extent that Key's hypothesis was carefully limited to economic policy, we have found robust evidence that higher rates of party competition are positively associated with more liberal economic policies. The amount of competition between the parties seeking power in a state matters, in part, because it shapes the ideological valence of the legislation that emerges from government institutions. At the same time, we have uncovered some evidence of a negative relationship between social liberalism and party competition.
This relationship seems to be clearest in recent years and in a model that allows the effect of party competition on policy liberalism to be nonlinear. Given that society's have-nots tend to have cross-cutting preferences on social and economic liberalism, our results extend Key's theory beyond the economic realm. These results provide further evidence for the mechanism stated by Key and additionally suggest that enhanced party competition is particularly beneficial for those individuals who support both socially conservative and economically liberal policies.

As our over-time and nonparametric results plainly show, the relationship between party competition and policy liberalism is far more complicated than the simple theory that has animated so much research on state politics and policy. In many ways, our results truly do raise more questions than answers, particularly with regard to social policy and the temporal dynamics of these relationships. We hope that our findings spark a new wave of research on this storied hypothesis.

Finally, we must also acknowledge research demonstrating some of the most marginal members of Congress to be among the least responsive (Ansolabehere et al. 2001; Deckard 1976; Fiorina 1974; Miller 1970). Comparing the roll call votes of congresspersons to the opinions of their districts, Miller (1970) finds a strong positive relationship between the two on social welfare and civil rights issues; however, when examining the votes of members from marginal districts, there is no relationship. This is further validated by Gulati (2004), who shows legislators from safe seats to be more responsive to the ideological center of their constituencies than those who ran more competitive races. Members running in competitive races face an incentive to increase their base of support to ensure access to the resources necessary to mount successful campaigns in the future. Our results, which aggregate competition across time, stand in stark contrast to these studies that examine responsiveness at the level of the individual legislator. Future research should examine how the process of aggregation from the legislator to the state level affects the relationship between competition and responsiveness.

In sum, however, our results provide an important theoretical and empirical extension of our existing understanding of the relationship between competition and policy liberalism. By demonstrating that party competition impacts both economic and social policy making, our results help to broaden our understanding of the connection between the electoral and policy realms of state government, providing further evidence that the obstacles politicians face to keep their jobs affect the types of policies they enact.
2020-01-06T20:37:38.460Z
2021-04-19T00:00:00.000
{ "year": 2021, "sha1": "bd4caaab1e9d0bc147c000e21c15dc3611662d13", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/6C923408FCB0165E2DF709E9D861DDD8/S153244002000002Xa.pdf/div-class-title-party-competition-and-policy-liberalism-div.pdf", "oa_status": "HYBRID", "pdf_src": "Cambridge", "pdf_hash": "4d13221c6e313a6a388727c90470d50763ee7d39", "s2fieldsofstudy": [ "Economics", "Political Science" ], "extfieldsofstudy": [ "Economics" ] }
255800856
pes2o/s2orc
v3-fos-license
Thoracic aortic aneurysm and atrial fibrillation: clinical associations with the risk of stroke from a global federated health network analysis

Background: An association with aortic aneurysm has been reported among patients with atrial fibrillation (AF). The aims of this study were to investigate the prevalence of thoracic aortic aneurysm (TAA) among patients with AF and to assess whether the co-presence of TAA is associated with a higher risk of adverse clinical outcomes.

Methods and results: Using TriNetX, a global federated health research network of anonymised electronic medical records, all adult patients with AF were categorised into two groups based on the presence of AF and TAA or AF alone. Between 1 January 2017 and 1 January 2019, 874,212 people aged ≥ 18 years with AF were identified. Of these, 17,806 (2.04%) had a TAA. After propensity score matching (PSM), 17,805 patients were included in each of the two cohorts. During the 3 years of follow-up, 3079 (17.3%) AF patients with TAA and 2772 (15.6%) patients with AF alone developed an ischemic stroke or transient ischemic attack (TIA). The risk of ischemic stroke/TIA was significantly higher in patients with AF and TAA (HR 1.09, 95% CI 1.04–1.15; log-rank p value < 0.001). The risk of major bleeding was higher in patients with AF and TAA (OR 1.07, 95% CI 1.01–1.14), but not significantly so in time-dependent analysis (HR 1.04, 95% CI 0.98–1.10; log-rank p value = 0.187).

Conclusion: This retrospective analysis reports a clinical concomitance of the two medical conditions and shows, in a PSM analysis, an increased risk of ischemic events in patients affected by TAA and AF compared to AF alone.

Supplementary Information: The online version contains supplementary material available at 10.1007/s11739-022-03184-6.

Introduction

A thoracic aortic aneurysm (TAA) is a localized dilatation of the ascending and thoracic aorta that can lead to dissection and rupture of the vessel wall [1,2]. An aortic aneurysm may silently progress, with one in two cases being completely asymptomatic [1,2], and is indeed often diagnosed incidentally during imaging studies performed for other clinical conditions. The first clinical manifestation may occur as an acute event, such as aortic dissection, which is associated with a high risk of mortality or cardiovascular events [1]. Recent observational studies have reported a high prevalence of aortic aneurysms among patients with AF, which is the most common cardiac arrhythmia worldwide [3,4]. However, the clinical significance of concomitant AF and aortic aneurysms remains undetermined. More specifically, the added risk for cerebrovascular events conferred by the concomitant presence of aneurysms of the aorta in patients with AF is unknown.

Currently, the indication for oral anticoagulation (OAC) therapy in patients with AF in most guidelines is based on risk stratification built on the pattern of comorbidities and summarized in clinical risk scores, such as the CHA2DS2-VASc score [5,6]. In this score, the 'V' component has been framed to include myocardial infarction (including significant coronary artery disease on cardiac imaging), peripheral vascular diseases, and the presence of atherosclerotic aortic plaque [7]. Nevertheless, diseases of the aorta such as aortic aneurysms are not formally considered in the 'V' criterion of the CHA2DS2-VASc score [6].
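Because the argument turns on what the 'V' criterion does and does not capture, a minimal sketch of the standard CHA2DS2-VASc tally may help. The function and field names are invented for illustration; the weights follow the score's usual published definition.

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia,
                 vascular_disease, female):
    """Standard CHA2DS2-VASc tally: 2 points for age >= 75 or prior
    stroke/TIA, 1 point for each remaining criterion. 'vascular_disease'
    covers MI, peripheral artery disease, or aortic plaque, the 'V'
    criterion discussed above; aortic aneurysm is not formally included."""
    score = 0
    score += 1 if chf else 0                # C: congestive heart failure
    score += 1 if hypertension else 0       # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A
    score += 1 if diabetes else 0           # D: diabetes mellitus
    score += 2 if stroke_tia else 0         # S2: prior stroke/TIA
    score += 1 if vascular_disease else 0   # V: vascular disease
    score += 1 if female else 0             # Sc: sex category (female)
    return score

# Example: a 70-year-old woman with hypertension and aortic plaque -> 3.
print(cha2ds2_vasc(False, True, 70, False, False, True, True))
```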
In this study, using a global federated database of electronic health records, the aims were to examine, among patients with AF, (1) the prevalence of TAA and (2) associations between TAA and the risk of ischemic stroke, systemic thromboembolic events, and major bleeding.

Methods

We used TriNetX, a global federated health research network with real-time updates of anonymised electronic medical records (EMRs). The network includes healthcare organisations (HCOs; academic medical centres, specialty physician practices, and community hospitals), with data for > 80 million patients predominantly based in the United States. To comply with legal frameworks and ethical guidelines guarding against data re-identification, the identity of participating HCOs and their individual contribution to each dataset are not disclosed. Because TriNetX is a federated research network and no patient-identifiable information is received, studies using it do not require ethical approval.

For the present study, the TriNetX research network was searched for AF patients (International Classification of Diseases, Tenth Revision, Clinical Modification [ICD-10-CM] code: I48) aged ≥ 18 years between 1 January 2017 and 1 January 2019. The cohort was placed into two groups based on the presence of AF and TAA with/without rupture (ICD-10-CM codes: I71.1 and I71.2, respectively) or AF alone (Fig. 1). In the TAA group, the TAA had to have occurred within the 2017–2019 timeframe, whereas the group with AF alone had to have no history of TAA at any time. All diagnoses were identified by the ICD-10-CM codes in the EMRs. No other inclusion or exclusion criteria were defined. The searches were run in TriNetX on 22 February 2022. At the time of the search, there were 58 participating HCOs within the TriNetX research network. Patients were not involved in the design of the study; however, dissemination of the results is planned through patient associations.

Follow-up and clinical outcomes

All patients were followed up from inclusion for at least 3 years. The primary endpoint was the occurrence of ischemic stroke/transient ischemic attack (TIA). All-cause mortality, major bleeding (a composite of intracranial haemorrhage [ICH] and gastrointestinal bleeding), and the composite of any arterial or venous thrombotic event (any of the following: myocardial infarction, other arterial thrombosis, venous thromboembolism [VTE], ischemic stroke/TIA, or systemic embolism) were secondary outcomes.

Statistical analysis

Continuous variables were expressed as mean and standard deviation (SD) and tested for differences with independent-sample t tests. Categorical variables were expressed as absolute frequencies and percentages and tested for differences with chi-squared tests. The TriNetX platform was used to run 1:1 propensity score matching (PSM) using logistic regression. The platform uses 'greedy nearest-neighbour matching' with a calliper of 0.1 pooled standard deviations and a difference between propensity scores ≤ 0.1. Covariate balance between groups was assessed using standardised mean differences (SMDs). Any baseline characteristic with an SMD between cohorts < 0.1 is considered well matched [8]. Odds ratios (OR) with 95% confidence intervals (CI) were calculated following PSM. Hazard ratios (HR) and 95% CI were also provided after PSM, as well as Kaplan-Meier survival curves with log-rank tests. No imputations were made for missing data. Two-sided p values < 0.05 were accepted as statistically significant. Statistical analysis was performed using the TriNetX Analytics function in the online research platform.
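The matching itself is performed inside the TriNetX platform, so no analysis code leaves the system. Purely as an illustration of the recipe described above (a logistic propensity model, 1:1 greedy nearest-neighbour matching without replacement under a caliper, and an SMD balance check), here is a hedged sketch with an invented dataframe layout; it is not the platform's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def greedy_match(df, treat_col, covariates, caliper_sd=0.1):
    """1:1 greedy nearest-neighbour matching on the propensity score,
    with a caliper expressed in standard deviations of the score."""
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat_col])
    df = df.assign(score=model.predict_proba(df[covariates])[:, 1])
    treated = df[df[treat_col] == 1].sort_values("score")
    control = df[df[treat_col] == 0].copy()
    caliper = caliper_sd * df["score"].std()
    pairs = []
    for idx, row in treated.iterrows():
        dist = (control["score"] - row["score"]).abs()
        if len(dist) and dist.min() <= caliper:
            j = dist.idxmin()
            pairs.append((idx, j))
            control = control.drop(j)  # match without replacement
    return pairs

def smd(matched, treat_col, col):
    """Standardised mean difference; < 0.1 is taken as well matched."""
    a = matched.loc[matched[treat_col] == 1, col]
    b = matched.loc[matched[treat_col] == 0, col]
    pooled = np.sqrt((a.var() + b.var()) / 2)
    return (a.mean() - b.mean()) / pooled
```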
Participant characteristics

Between January 2017 and January 2019, 874,212 patients aged ≥ 18 years with AF were identified. Of these, 17,806 had a TAA with or without rupture, accounting for an overall prevalence of 2.04%. Table 1 summarizes the baseline characteristics of patients with AF and TAA and patients without TAA, before and after PSM. Patients with AF and TAA had a higher risk profile, with a higher prevalence of comorbidities except for diabetes. After PSM, both cohorts were well balanced.

Ischemic stroke/TIA in patients with TAA and AF vs. those with AF alone

After PSM, 17,805 patients were included in each of the two cohorts (i.e., 1:1). During the 3 years of follow-up, 3079 (17.3%) AF patients with TAA and 2772 (15.6%) patients with AF alone developed an ischemic stroke or TIA.

Discussion

The principal findings of this analysis are as follows: (i) there is a clinical co-occurrence of TAA and AF; (ii) patients with AF and TAA are characterized by a higher cardiovascular risk profile, compared to those with AF and no history of TAA; and (iii) in PSM analysis, amongst patients with AF, TAA was associated with an increased risk of ischemic and systemic thromboembolic events at 3-year follow-up.

Emerging clinical evidence has shown a high prevalence of TAA among patients with AF. In a retrospective analysis from a nationwide population database, Hsu et al. [3] reported a bidirectional association between aortic aneurysm and AF, showing that in patients with AF compared to those without AF, an increased incidence of aortic aneurysm was evident at 13 years of follow-up (adjusted HR 1.24, 95% CI 1.10–1.40, p < 0.001). Similarly, patients with aortic aneurysm had a higher risk of presenting with AF at follow-up compared to patients without a diagnosis of aortic aneurysm (adjusted HR 1.187, 95% CI 1.079–1.301, p < 0.001) [3]. In a sub-analysis considering only TAA, a higher incidence was detected in patients with AF compared to those without (0.14% vs. 0.09%, p < 0.001) [3]. A cross-sectional study of patients with AF undergoing gated chest computed tomography performed as part of the assessment for pulmonary vein isolation reported a TAA prevalence of 20%, with 1% of the TAAs detected having a size approaching the current surgical indication [4].

From a pathophysiological perspective, atherosclerosis underlies TAA, and indeed, peripheral and coronary artery disease are other common clinical manifestations of atherosclerosis. Both peripheral and coronary artery disease are associated with incident AF and AF-related complications, and AF is a common complication after aortic procedures such as transcatheter aortic valve replacement [9]. The findings regarding AF and TAA could reflect a non-casual association related to the increasing prevalence of both diseases with advancing age and, consequently, shared risk factors such as hypertension and heart failure. Indeed, the prevalence of TAA is approximately 4% in patients > 65 years, and TAA accounts for 6000 deaths a year in the UK [2]. Similarly, the prevalence of AF increases exponentially with age, with an estimated ~ 6.9% prevalence in people > 65 years, though the burden of mortality linked to AF remains more elusive [10]. The finding in this study of a co-occurrence of TAA among patients with AF is aligned with previous results, though our figure, being based on EMRs, may also include post-surgical AF and may therefore be an overestimate.
While the associations may simply reflect shared risk factors, these findings raise clinical implications regarding the monitoring of patients with AF for the risk of developing TAA. Of note, the data show a higher prevalence of cardiovascular risk factors in patients with TAA and AF (but not diabetes), which may point to a more hemodynamic and degenerative, rather than atherosclerotic, nature of the TAA.

Another major finding of this study is that patients with AF and TAA have an increased risk of stroke, TIA, and systemic thromboembolic events compared to a matched AF population with a similar cardiovascular risk profile. Any attempt to provide a plausible mechanistic explanation for such a finding remains speculative, although it may be hypothesized that the presence of an aneurysm may be linked to endothelial damage, which is one pillar of Virchow's triad [11]. Also, the presence of complex aortic plaque on the descending aorta is an independent risk factor for ischemic stroke in AF patients [5]. In a general population, the Aortic Plaques and Risk of Ischemic Stroke (APRIS) study [12] and the Stroke Prevention: Assessment of Risk in a Community (SPARC) study [13] have recently questioned prior studies [14,15], reporting a lack of association between the presence of a complex aortic plaque and the risk of stroke at a general population level. On the other hand, the association between TAA and aortic atherosclerotic plaque remains elusive, whereas it has been shown for abdominal and infra-renal aortic aneurysms [16].

In this analysis, there were increased odds of major bleeding in the group with AF and TAA. The short-term follow-up we considered in this retrospective analysis may have biased this outcome, and the concomitant use of aspirin and/or OAC may have contributed to it. The increased mortality we detected in the group with AF alone compared to patients with AF and TAA, which may seem counterintuitive given the well-known mortality linked to TAA, is possibly because patients with TAA of any size not requiring surgery were also included, as our search was based on ICD codes. Therefore, small thoracic aneurysms with slow rates of growth and no impact on survival have potentially been included.

Clinical implications

We believe that the relevance of our finding is linked to the clinical perspective. Though this association may seem a discordant comorbidity, AF and TAA have been shown to share commonalities in pathological pathways [17]. The requirement of a CT scan to detect diseases of the thoracic aorta has limited the applicability of screening programs to the general population, and screening cannot be supported among patients with AF given the risk-benefit ratio and the overall low figure of AF associated with TAA [17]. What our analysis adds is that the coexistence of the two clinical conditions may confer a higher risk of adverse outcomes. Indeed, this calls attention to the need to optimize comprehensive medical management, which may be difficult to integrate since the two diseases seem so different. As a matter of fact, considering the higher cardiovascular risk profile of patients with AF and TAA, the proportion of patients on antiplatelets appears to be high, while that on OAC appears to be low (58.6 and 70.6%, respectively). This finding may be correlated with a perception among surgeons of a bleeding risk, which may lead them to withhold OAC notwithstanding the increased risk of stroke posed by AF.
Similarly, considering the bulk of evidence showing that statins improve outcomes in both AF [18] and TAA [19], the proportion of patients on statins seems to be low. It may be hypothesized that the contradictory findings on the medical therapy prescribed may reflect an absence of integrated management of both conditions.

Limitations

Several limitations should be considered when interpreting the results of the current study. First, the participant information is based on EMRs, and from this, a distinction between pre-existing AF, new-onset AF, and post-operative AF cannot be made. This may explain the low proportion of patients on OAC therapy in the group with only AF, which is difficult to investigate further. Second, patients with thoraco-abdominal aneurysms were excluded in order to assess only the impact of TAA. Third, information on the prevalence of complex aortic plaque in the two groups could not be recovered. In this study, the cohorts were propensity score matched for several factors including age, sex, ethnicity, and co-morbidities, but residual confounding factors may still be present, and some health conditions may be underreported in EMRs. Finally, the analyses presented in this manuscript have been performed using the TriNetX platform, which has the major limitation that data cannot be exported for analysis. As a result, the graphical output of the software is not optimal and sometimes hinders a proper graphical appreciation of differences that are actually significant.

Conclusion

Our retrospective analysis from a large global federated dataset reports a clinical concomitance of AF and TAA. Importantly, in a PSM analysis, an increased risk of ischemic events was evident in patients affected by both TAA and AF, compared to AF alone. Whether this association simply reflects shared risk factors or a commonality in pathophysiological pathways, it raises relevant clinical implications that deserve further investigation.
2023-01-15T06:16:21.344Z
2023-01-14T00:00:00.000
{ "year": 2023, "sha1": "406a7297b99ebee7cf15134e1d04769f1925be11", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11739-022-03184-6.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "851cb4b9a6e2bd40e5ae6d6533ff747898f4245f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
49544738
pes2o/s2orc
v3-fos-license
Thermoregulatory behavior and high thermal preference buffer impact of climate change in a Namib Desert lizard

Knowledge of the thermal ecology of a species can improve model predictions for temperature-induced population collapse, which in light of climate change is increasingly important for species with limited distributions. Here, we use a multi-faceted approach to quantify and integrate the thermal ecology, properties of the thermal habitat, and past and present distribution of the diurnal, xeric-adapted, and active-foraging Namibian lizard Pedioplanis husabensis (Sauria: Lacertidae) to model its local extinction risk under future climate change scenarios. We asked whether climatic conditions in various regions of its range are already so extreme that local extirpations of P. husabensis have already occurred, or whether this micro-endemic species is adapted to these extreme conditions and uses behavior to mitigate the environmental challenges. To address this, we collected thermoregulation and climate data at a micro-scale level and combined them with micro- and macroclimate data across the species' range to model extinction risk. We found that P. husabensis inhabits a thermally harsh environment, but also has a high thermal preference. In cooler parts of its range, individuals are capable of leaving thermally favorable conditions (based on the species' thermal preference) unused during the day, probably to maintain low metabolic rates. Furthermore, during the summer, we observed that individuals regulate at body temperatures below the species' high thermal preference to avoid body temperatures approaching the critical thermal maximum. We find that populations of this species are currently persisting even at the hottest localities within the species' geographic distribution. We found no evidence of range shifts since the 1960s despite a documented increase in air temperatures. Nevertheless, P. husabensis only has a small safety margin between the upper limit of its thermal preference and the critical thermal maximum and might undergo range reductions in the near future under even the most moderate climate change scenarios.

INTRODUCTION

Globally, some lizard populations may be at risk of extinction due to a rapidly warming climate, because ambient temperatures increasingly exceed the lizards' thermal tolerances (Huey et al. 2009, Sinervo et al. 2010). One possible mechanism is a reduction in time active outside retreats, which translates ecologically into a constraint on available foraging time, particularly threatening during the reproductive season when energy demands are at their peak. Abnormally warm ambient temperatures have already been demonstrated to be associated with extinctions of local lizard populations due to what has been proposed to be an inability to balance the energetic demands of reproduction in an abbreviated period of daily activity (Sinervo et al. 2010, 2011). Despite the availability of strong circumstantial evidence, further examination of this hypothesis is warranted. In addition, expanding its application to a wider range of environments and lizard species is necessary. A combination of species-specific ecological data at a micro-scale, experimental approaches, and mechanistic species distribution modeling approaches is appropriate for such hypothesis tests. Mechanistic species distribution models are valuable tools for testing potential threats using real ecological data (e.g., Kearney and Porter 2009, Sinervo et al. 2010, Kearney 2013).
The Sinervo et al. (2010) model is based on the hypothesis that species' range edges were originally defined by the maximum operative temperatures at the outermost localities of their distribution (i.e., the hottest localities for the high-temperature physiological limit) before the onset of anthropogenic climate change (considered as beginning during the mid-1970s following the Intergovernmental Panel on Climate Change [IPCC]; Bindoff et al. 2013). The Sinervo et al. (2010) model integrates lizard thermophysiology and in situ operative temperatures (Te) to elucidate the interrelationships between ambient temperature and population distribution. This model relies on the premise that ambient temperatures constrain the amount of time that lizards may be active outside of their retreats (during which time lizards complete life activities such as foraging and breeding). In this model, site- and taxon-specific empirical data are used to determine species' thermal tolerances and to estimate the thermal quality of the current environment. These data are then combined with climate change models to estimate restrictions in lizard activity in the past (before observed temperature increases beginning in the mid-1970s), present, and future; the core activity-restriction calculation is sketched in code below.

Not all lizard species are at equal risk of extirpation from rising ambient temperatures. Several factors may interact to influence a lizard species' susceptibility to altered thermal niches arising from climate change, including habitat requirements and characteristics, daily activity patterns, and foraging behavior (e.g., Anderson and Karasov 1981, Huey and Pianka 1981, Kearney 2013, Tingley et al. 2013, Böhm et al. 2016). It has been criticized that the model of Sinervo et al. (2010) did not explicitly include species-specific habitat affinities, their microclimatic diversity, behavior (thermoregulation, foraging strategy), or life history (Kearney 2013). These factors are potentially necessary to accurately assess climate-mediated lizard extinction risk, as has been found for the risk associated with viviparity in Mexican lizards (Sinervo et al. 2010).

Deserts are among the most extreme habitats that are occupied by lizards, due to the challenges imposed by thermal and hydric constraints. Namibia includes some of the warmest and driest regions in Africa south of the Sahara Desert, yet these arid Namibian ecosystems support a high diversity of lizard species (Thuiller et al. 2006, Herrmann and Branch 2013). In the hyper-arid Namib Desert, which stretches nearly 2000 km along the coast from southern Angola through Namibia to northern South Africa, surface temperatures often exceed 60°C (Edney 1971, Lancaster et al. 1984, Seely et al. 1990, Viles 2005, Murray et al. 2014). Terrestrial lizard species that occur here are active in extreme environmental conditions. Furthermore, the Namib Desert is home to a rich array of endemic lizard species that have specific habitat requirements and may be particularly vulnerable to changes in their environment (Sinervo et al. 2010). Regarding foraging strategies, lizard species may be broadly characterized as using either an active or a sit-and-wait (or ambush) foraging mode (Pianka 1966). Daily foraging time and foraging strategy are strongly correlated. For example, sit-and-wait foragers usually have lower prey encounter rates and are therefore surface active for longer periods of time than actively foraging lizard species, and have lower field metabolic rates (Anderson and Karasov 1981, Nagy et al. 1984, Brown and Nagy 2007).
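As a rough illustration of the activity-restriction premise referenced above, the fragment below counts the daily hours during which operative temperature exceeds the upper bound of a species' preferred range. Using Tsel75 as the threshold and the toy temperature series are assumptions of this sketch; the full Sinervo et al. (2010) model additionally relates cumulative restriction during the breeding season to extinction probability.

```python
import numpy as np

def hours_of_restriction(te_hourly, tsel_upper):
    """Daily hours of activity restriction (hr): hours in which operative
    temperature Te exceeds the upper bound of the preferred range, so a
    thermoregulating lizard would have to retreat. `te_hourly` is a
    (days x 24) array of operative temperatures; `tsel_upper` would be
    the species' Tsel75 (an assumption of this sketch)."""
    te_hourly = np.asarray(te_hourly, dtype=float)
    return (te_hourly > tsel_upper).sum(axis=1)

# Toy example: one day on which Te exceeds a Tsel75 of 38 C for 5 hours.
te = np.array([[18, 20, 22, 25, 29, 33, 36, 39, 41, 42, 41, 39,
                37, 35, 33, 31, 29, 27, 25, 23, 22, 21, 20, 19]])
print(hours_of_restriction(te, 38.0))  # -> [5]
```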
We chose the diurnal, heliothermic, actively foraging lizard species Pedioplanis husabensis Berger-Dell'mour and Mayer 1989 (Sauria: Lacertidae), with a restricted range within the xeric western parts of central Namibia, as a model to investigate thermal biology and extinction risk. This species' habitat is threatened not only by climate change but also by uranium mining, making an investigation of its ability to cope with potential abiotic threats particularly important. We apply a multidisciplinary approach combining the use of museum collections to reconstruct past distribution ranges and surveys of current population status across the majority of its distribution, with data on its thermal physiology determined in situ and experimentally. We investigate the thermal biology and estimate the thermal properties of the microhabitats used by P. husabensis. In addition, we assess the thermal quality of the microclimates available to this species in order to estimate how well the lizards thermoregulate across both the energy-intensive reproductive period and non-reproductive periods. Furthermore, we apply and extend the Sinervo et al. (2010) model to assess how the historical distribution may be altered by contemporary climate warming that has already occurred (Dirkx et al. 2008) and by future climate change, across the range of P. husabensis. We sought evidence for local extirpations of P. husabensis that may have already occurred in regions where temperatures have increased (i.e., all historical collection localities), to test the hypothesis by Sinervo et al. (2010) that the physiological limits of lizard species' distributions are defined by thermal constraints experienced before anthropogenically induced climate change. We further predict that an active forager like P. husabensis will be able to persist in an environment characterized by relatively long periods of time when activity is constrained by high temperatures and that it will exhibit high precision in thermoregulatory behavior.

Natural history of Pedioplanis husabensis

Pedioplanis husabensis is a medium-sized species in the family Lacertidae. Adults have a snout-to-vent length of 45-61 mm and a mass of 1.8-4.2 g (Branch 1998). The species occurs in habitats located near the confluence of the Swakop and Khan rivers in the Namib Desert and adjacent dry savannah in central western Namibia (Berger-Dell'mour and Mayer 1989). The geographic distribution of this species encompasses <5000 km² (Branch 1998, Cunningham et al. 2012). Despite the restricted distribution range, P. husabensis is currently not listed in the IUCN Red List of Threatened Species (www.iucnredlist.org). Individuals of P. husabensis are diurnal and inhabit expanses of flat rock on exposed bedrock. Their activity occurs primarily on slopes, where they forage for insects on rock and on loose, friable, shrub-dotted substrates (Murray et al. 2014, 2016a). Although the species can also be found foraging away from the rocky slopes, it exploits shelters among rock crevices on the slopes (Murray et al. 2014, 2016a). Previous work has reported on its foraging mode, energetics, and active body temperature (Murray et al. 2014), diet (Murray et al. 2015, 2016a), as well as general natural history attributes (Berger-Dell'mour and Mayer 1989, Schwacha 1997, Cunningham et al. 2012). Pedioplanis husabensis is oviparous, and breeding starts in November, with the first hatchlings appearing in April (Schwacha 1997, Branch 1998).
Local climatic conditions include low mean annual precipitation (approximately 25 mm), but additional moisture comes from episodic fog events. Based on data from similar sites in the Namib, around 25-50 fog days per year may be expected within the range of P. husabensis (Olivier 1995, Haensler et al. 2011, Eckardt et al. 2013). Mean air temperature ranges from 22.3° to 24.8°C during summer and from 17.5° to 18.8°C during winter (Hijmans et al. 2005).

Thermal preference and critical thermal maximum

Lizard thermal traits are generally accepted to be species specific, although several studies have shown intraspecific daily, seasonal, spatial, ontogenetic, or sexual variation in thermal preference and critical thermal maximum, with inconsistent patterns (see Clusella-Trullas and Chown 2014 for a review). This lability of thermal physiology appears to vary among taxa (reviewed by Angilletta et al. 2002). In our study, we chose the most conservative approach by sampling adult males and non-gravid females from different populations across the geographic range and during different periods of the year to acquire data for species-specific thermal preference. To account for plasticity and potential local adaptations, we analyze the mean preferred or selected body temperature of all individuals tested (mean T sel) as well as the central 50% of all individual averages (the thermoregulatory set-point range or thermal preference T sel).

Sampling.—A total of 21 individual lizards (five females and 16 males) from three localities (Table 1) were captured by noosing. Sampling occurred between April 2013 and October 2014 and included both reproductive and non-reproductive seasons. We brought lizards back to the field laboratory, where they were housed in glass terraria maintained at an air temperature of 25°C with water provided ad libitum. Lizards were not fed the day before the experiment began.

(Table 1 lists the sampling localities, numbered as in Fig. 1, with latitude, longitude, and altitude, together with the method used to determine mean and median selected body temperature (mean T sel, median T sel) and critical thermal maximum (CT max); a dash (-) indicates no data.)

Experiments.—We determined thermal preference T sel as well as the critical thermal maximum (CT max) for P. husabensis in the laboratory at the Gobabeb Research and Training Centre (23.5611°S, 15.0411°E; altitude 405 m; Fig. 1; all coordinates are provided in decimal degrees). We initiated the thermal experiments within one to two days of capture. As appropriate for a diurnal, heliothermic lizard, T sel was determined in a photothermal gradient (e.g., Light et al. 1966, DeWitt 1967, Paranjpe et al. 2013, Clusella-Trullas and Chown 2014, Gilbert and Miles 2017). The commonly used experimental determination of T sel using a gradient is generally preferred over field observations of body temperature (Huey 1982, Hertz et al. 1993), as the laboratory data reflect the temperature selected by an individual in the absence of the costs and constraints present in field conditions (Hertz et al. 1993, Clusella-Trullas and Chown 2014). The experimental setup consisted of a box with eight individual tracks constructed from 5 mm thick, opaque particle board (910 × 380 × 120 mm; length × height × width of each track), following Paranjpe et al. (2012).
An incandescent 100-W light bulb (full spectrum) was suspended 30 cm above one end, and a frozen gel pack was placed underneath the ground plate at the other end of each track, to create a thermal gradient of approximately 10° to 55°C measured at ground level with an infrared thermometer (Testo 845, accuracy ±0.75°C, resolution 0.1°C; Testo AG, Lenzkirch, Germany). Each lizard was allowed unrestricted movement within its individual track during its normal diurnal activity period. Lizard body temperature was measured over a period of two consecutive hours. An additional 30-min acclimation time at the beginning of each trial was discarded. In 18 individuals (Table 1), body temperature was determined by means of cloacal temperatures using a T-type thermocouple probe (diameter 1 mm, inserted approximately 10 mm into the cloaca) and a digital thermometer (±0.2°C; Omega HH202A, Stamford, Connecticut, USA). Measurements were taken every 20 min. In three individuals, body temperature was determined every minute by means of ultra-thin T-type thermocouples (OMEGA 5SC-TT-T-40-72, diameter = 0.076 mm, Norwalk, Connecticut, USA) affixed with medical tape to the lizards' venter and connected to an 8-Channel USB Thermocouple Data Acquisition Module (OMEGA TC-08, accuracy 0.2% ± 0.5°C, resolution <0.1°C). During these trials, we conducted occasional cloacal temperature measurements to confirm that internal body temperature was represented accurately by the ventral skin measurements. For each individual, we determined the average of all body temperature measurements. Since lizards mostly have a range of temperatures that they strive to function within, rather than a single value (Hertz et al. 1993), we used the interquartile range of the ranked individual averages as the species' thermoregulatory set-point range (thermal preference T sel), ranging from T sel25 (lower limit) to T sel75 (upper limit) (Hertz et al. 1993). For the extinction risk model, we also used the species' mean T sel, determined as the arithmetic mean of all individual averages.

Critical thermal maximum (CT max) was determined by heating individual lizards (N = 8; six males, two females, from two localities; Table 1) in an incubator (Heraeus, Hanau, Germany) at a constant rate (approximately 0.8°C/min). Prior to determining CT max, we warmed individuals to the species' mean T sel. Cloacal temperature was measured every minute, and lizards were flipped onto their backs before every measurement from 40°C onward until either loss of the righting response and/or muscular spasms occurred (Lutterschmidt and Hutchison 1997). When either response occurred, lizards were immediately removed from the incubator and cooled with moist paper towels. All of the lizards tested behaved normally within 5-10 min of ending the CT max trial after being cooled in this manner. While this is still the most commonly applied method to determine CT max, the repeated righting response method may exhaust lizards, and results should consequently be interpreted with caution (Camacho and Rusch 2017).
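For concreteness, the set-point calculations described above reduce to quantiles over per-individual mean temperatures. The following is a minimal base-R sketch; the temperature values and object names are invented for illustration and are not taken from the study's data or scripts.

# Minimal base-R sketch of the Tsel summaries described above.
# All values are invented; the study used 21 individuals.
body_temps <- list(
  c(37.2, 38.1, 38.5, 37.9),  # all gradient readings for lizard 1
  c(36.8, 37.9, 38.2, 38.0),  # lizard 2
  c(38.4, 39.0, 39.3, 38.8)   # lizard 3
)
ind_means <- sapply(body_temps, mean)            # one average per individual
mean_Tsel <- mean(ind_means)                     # species' mean Tsel
set_point <- quantile(ind_means, c(0.25, 0.75))  # Tsel25 and Tsel75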
Seasonal thermoregulatory efficiency and thermal quality of the habitat

To assess the thermoregulatory efficiency of P. husabensis and the thermal quality of its habitat, we studied a population along a dry section of the Swakop River at Farm Hildenhof (22.7008°S, 14.9148°E; altitude 210 m; Fig. 1). At this location, the species occurs on the extensive rocky slopes along the canyon (Murray et al. 2014, 2016a). We collected data for a period of ten days during the species' reproductive season in summer and another ten days during the non-reproductive season in autumn (Table 2; Schwacha 1997, Branch 1998). Weather conditions during the study period were typical for these seasons (data not shown).

We quantified the thermal conditions available to lizards using operative temperatures (T e), which represent the equilibrium temperatures of inanimate models, that is, non-thermoregulating objects whose heat-transfer properties (e.g., morphology and reflectivity) approximate those of the study organism (Bakken and Gates 1975, Bakken et al. 1985). We made T e models using empty, hollow, type M copper tubing (Shine and Kearney 2001, Dzialowski 2005) with a length of 47 mm and a diameter of 12.7 mm. We painted all the copper models with gray primer (Sprayon gray primer, Sprayon Paints, Cleveland, Ohio, USA) to approximate lizard reflectance (Adolph 1990, Sinervo et al. 2010) and capped both ends of the tube. A thermistor probe attached to a Hobo temperature data logger (U12-001; Onset Computer Corporation, Bourne, Massachusetts, USA) was inserted through a hole in each tube to determine the interior temperature as a substitute for the field body temperature T b of a non-thermoregulating P. husabensis. Model temperatures were tested against cloacal temperatures of a freshly deceased adult P. husabensis (mass = 3.0 g) at the field site across a temperature range of 28-47°C. We found a high correlation between the temperatures recorded from the lizard and the copper model (R 2 = 0.97; P < 0.001; T b = 1.2 × T e − 6.52).

We obtained operative temperature data from 18 lizard T e models deployed in three different microhabitats frequently used by the lizards. The number of models in each locality was selected to cover a representative array of available microhabitat temperatures. Microhabitat categories were as follows: (1) "rock" (N = 7) included models on rocky slopes in the canyon, in direct sunlight, and in partially/temporarily fully shaded places around and beneath shrubs; (2) "silt" (N = 3) included models on sandy flats with loose substrates and small washes, in direct sunlight, and in partially/temporarily fully shaded places around and beneath shrubs; (3) "crevice" (N = 8) included shaded models deployed in potential lizard shelter sites on slopes (e.g., underneath rocks, in rock crevices). Orientation of the models with regard to the path of the sun was random, although it has been shown that lizards have preferred orientations toward the sun under different environmental conditions (e.g., Seely et al. 1988). Locations of individual models did not change during the course of the study. Each logger recorded T e every 10 min throughout each ten-day study period. Only data collected from 6:00 to 19:00 (incorporating daylight hours during both summer and autumn) were analyzed for this diurnal species. Lizards were monitored from 7:00 to 18:00, which bracketed the normal diurnal activity period of P. husabensis. We binned lizard observations (as the proportion of the season's total observations) into time intervals to estimate daily activity periods (Murray et al. 2016a). After capturing lizards with a noose, we recorded the body temperatures T b (N = 110; as reported in Murray et al. 2014, 2016a) of surface-active lizards immediately (within 30 s) by means of cloacal temperatures.
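The copper-model calibration described above is an ordinary least-squares regression of lizard temperature on model temperature. The sketch below reproduces that workflow on simulated paired readings (an assumption for illustration; the study's own data gave T b = 1.2 × T e − 6.52 with R² = 0.97):

# Illustrative Te-model calibration on simulated readings (base R).
set.seed(42)
Te  <- seq(28, 47, by = 0.5)                        # model temperatures (deg C)
Tb  <- 1.2 * Te - 6.52 + rnorm(length(Te), 0, 0.5)  # simulated carcass readings
fit <- lm(Tb ~ Te)                                  # least-squares calibration
coef(fit)                 # intercept and slope
summary(fit)$r.squared    # calibration R2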
All T e and T b data were subsequently analyzed in the context of the empirically determined T sel and CT max for this species to examine lizard thermoregulatory behavior and the thermal quality of the lizard's habitat. We compared the effects of season and microhabitat category on mean T e, mean maximum T e (open habitats), and mean minimum T e (shaded habitats), averaged across all models within each habitat type for every daylight hour of the study period, as in previous analyses (Sinervo et al. 2010, Lara-Reséndiz et al. 2015, Kubisch et al. 2016, Vicenzi et al. 2017), using two-way ANOVAs with season and microhabitat category as factors. We also tested the effects of season and microhabitat category on thermal quality (d e) with two-way ANOVAs using season and microhabitat as factors. Thermal quality of the environment (d e) is the summed absolute value of the difference between T e and T sel. For every value of T e falling within the set-point range for T sel, d e equals zero; values above or below the range were subtracted from the upper or lower limit of T sel, respectively, and the absolute values are reported. A large value of d e suggests that temperatures within T sel occur relatively infrequently (Hertz et al. 1993). We used two-sample t tests to investigate potential differences in lizard thermoregulatory accuracy (d b) for individuals captured on either silt or rock substrates, as well as for lizards active during summer and autumn. Thermoregulatory accuracy (d b) is the absolute value of the difference between lizard field active body temperatures T b and the T sel for that species (Hertz et al. 1993). Again, a value of T b falling within the set-point range for T sel results in d b equal to zero; higher or lower values were subtracted from the upper or lower limit of T sel and reported as absolute values. High values of d b indicate that lizards did not often achieve their T sel. Using these values, we calculated lizard thermoregulatory efficiency (E = 1 − d b/d e), which ranges from 0, where microclimates are used randomly (e.g., a thermoconformer), to a value of 1, indicative of perfect thermoregulation. We also determined the index of thermoregulatory efficiency (d e − d b) proposed by Blouin-Demers and Weatherhead (2001).
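The indices defined above translate directly into a few lines of R. In the sketch below, only the formulas follow the text; the temperature vectors and set-point limits are invented for illustration:

# Hertz et al. (1993) indices as defined in the text (illustrative values only).
Tsel_lo <- 37.2; Tsel_hi <- 39.1           # assumed set-point limits (deg C)
dev_from_range <- function(temp, lo, hi) {
  # zero inside the set-point range; absolute deviation from the nearer limit otherwise
  ifelse(temp < lo, lo - temp, ifelse(temp > hi, temp - hi, 0))
}
Te_vals <- c(25.1, 33.4, 41.8, 45.2, 37.8) # example operative temperatures
Tb_vals <- c(36.5, 37.8, 38.9, 39.6, 38.2) # example field body temperatures
d_e <- dev_from_range(Te_vals, Tsel_lo, Tsel_hi)  # thermal quality of the habitat
d_b <- dev_from_range(Tb_vals, Tsel_lo, Tsel_hi)  # thermoregulatory accuracy
E   <- 1 - mean(d_b) / mean(d_e)                  # effectiveness of thermoregulation
# The two-way ANOVAs reported in the text take the form:
# summary(aov(d_e ~ season * microhabitat, data = hourly_means))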
Current geographic distribution

We assessed the geographic distribution of P. husabensis using all available museum voucher specimens and the respective publications known to us (Appendix S1: Table S1). Where no geographic coordinates were provided or coordinates were imprecise, we assigned coordinates based on the locality description (which was always provided) in combination with expert opinion from our local collaborators. In addition, we resurveyed 51% of the known sites (see Results, Fig. 1) using visual encounter surveys with two to four people to verify extant populations on various trips between January 2013 and February 2017. Special attention was given to the hottest (based on mean monthly T max; downloaded from www.worldclim.org; Hijmans et al. 2005) eastern-most localities within the known distribution. Furthermore, we surveyed additional areas inside and outside of the known distribution range for potential new records (Fig. 1). To our knowledge, all of the sites we surveyed were anthropogenically unaltered and appeared undegraded. At each locality that we resurveyed, we found P. husabensis over the course of the first surveying day. The average person-hours needed to record the first individual at each new locality were 0.63 h (38 min; N = 18) (S. Kirchhof, unpublished data).

Extinction risk modeling

To determine extinction risk, we chose four sites in addition to Hildenhof within the distribution of P. husabensis where we deployed T e models. At each of the four sites, we installed four T e models in the microhabitat of P. husabensis, with two models on a southwestern slope (one in full shade and one in direct sun) and two on a northeastern slope (one in full shade and one in direct sun), thereby encompassing the extremes of prevailing temperatures at each locality. Sites were chosen across the geographic distribution of P. husabensis (Fig. 1, sites 2-5; Table 2) to cover the range of operative environmental temperatures occurring there (Kubisch et al. 2016). To incorporate our data into the modeling framework of Sinervo et al. (2010), we followed their standard protocol and measured operative environmental temperatures (Hobo data loggers U23-003; Onset Computer) using models constructed from standardized hollow, empty, capped polyvinyl chloride (PVC) pipes (80 mm × 15 mm, 1 mm wall thickness) spray-painted primer gray (NEO Dur semi-matt acrylic emulsion, Pastel Base, WOT, 1 L mixed with 100 mL NEO Charcoal 122, Windhoek, Namibia). These PVC T e models have been calibrated against live lacertid lizards of similar size to P. husabensis (Belasen et al. 2017; R 2 = 0.84, slope not significantly different from 1 and intercept not significantly different from zero). We compared the copper and PVC models by deploying both next to each other in P. husabensis habitat and leaving them for 15 d (24 June-8 July 2014), recording temperatures every 10 min. We found a high correlation between the temperatures recorded by the copper (T cop) and PVC (T PVC) models (R 2 = 0.99; P < 0.001). For modeling, we corrected the values recorded by the copper models using the resulting equation T PVC = 0.97 × T cop + 0.85. We then selected four of the 18 models deployed at Hildenhof, from microhabitats bracketing the range of available T e in a fashion similar to that at the other four sites, to include in the extinction risk model.

Ecophysiological models hypothesize that a species is optimally adapted to its local thermal conditions prior to climate change and that non-random extirpations will be concentrated at warmer range boundaries, where the velocity of climate change is most rapid, or where taxa are limited either by thermal physiology or by species interactions, for example, competition or predation (Terborgh 1973, Brown 1984). We adopted a metric of the critical daily hours of activity restriction for each species using the 95% quantile of daily hours of activity restriction h r, similar to the model developed for lizard families (Sinervo et al. 2010). If a given site was predicted to exceed the present-day critical h r value as computed from the 95% quantile, we assumed it would be extirpated. At each of the five study sites, we calculated h r by summing the amount of time that mean T e (during daylight hours) exceeded the mean T sel of P. husabensis. We repeated the same calculation using T sel75 instead of mean T sel, to account for the upper limit of the species' T sel.
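The h r metric and the extirpation rule described above can be sketched as follows. The logger matrix and the h r vectors are invented; only the 10-min logging interval, the copper-to-PVC correction, and the 95% quantile rule follow the text:

# Daily hours of restriction (hr) from 10-min Te records, plus the extirpation
# rule based on the 95% quantile of pre-warming hr (all data invented).
set.seed(1)
te_copper <- matrix(runif(78 * 4, 20, 48), ncol = 4)  # 13 daylight h x 4 models
te_pvc    <- 0.97 * te_copper + 0.85         # correction equation from the text
mean_te   <- rowMeans(te_pvc)                # mean Te across models per time step
mean_Tsel <- 38.0                            # species' mean Tsel (deg C)
h_r <- sum(mean_te > mean_Tsel) * (10 / 60)  # hours with mean Te above mean Tsel
# Extirpation rule: critical hr is the 95% quantile of pre-warming (1975) values
hr_1975    <- c(0.8, 1.2, 1.9, 2.1, 2.4)
crit_hr    <- quantile(hr_1975, 0.95)
hr_future  <- c(1.5, 2.0, 2.9, 3.2, 3.4)
extirpated <- hr_future > crit_hr            # TRUE = predicted local extirpation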
We used concurrent daily maximum air temperatures (T max), recorded by the closest weather station (79 km away for the Hildenhof study site) or measured directly on site using a Hobo data logger (at all other localities; at 2 m height, exposed to the air and sheltered from direct solar radiation; WMO 1992), to determine the general species-specific relationship between daily h r and T max using the R package FLEXPARAMCURVE (Oswald et al. 2012). We calculated h r values during the breeding season using temperature data from November to January, which comprises the major period of reproductive activity (Schwacha 1997, Branch 1998). This period also corresponds with the time when the energetic demands of adult lizards are likely to be at a maximum. From our complete P. husabensis locality dataset (Appendix S1: Table S1), we discarded every known record separated by 1 km or less from the next record (see resolution of temperature rasters below) to avoid pseudoreplication. We used the climate dataset for 1960-1990 downloaded from www.worldclim.org (mean monthly T max; spatial resolution 30 arc-s, or ~1 km²) as a proxy for the air temperature conditions prior to the first records of increasing surface temperatures in 1975 (see Bindoff et al. 2013). The h r values for each site at the present-day and future time points were computed from the fitted sigmoidal functions f(T max − mean T sel) and f(T max − T sel75). For estimates of future T max, we used the MPI-ESM-LR model (spatial resolution 30 arc-s, or ~1 km²; downloaded from www.worldclim.org) of the CMIP5 Earth System Models as used in the IPCC Fifth Assessment Report (IPCC 2014). This model performed best globally in predicting future climate conditions considering the current structure of the land carbon cycle, as evaluated by Anav et al. (2013). We used two different pathways for the years 2050 and 2070: RCP 4.5, which assumes a medium rise in CO 2 concentration and stabilization in the year 2100 without overshoot, and RCP 8.5, which assumes a rise of CO 2 beyond the year 2100 (Moss et al. 2010).
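The study fit this h r-T max relationship with FLEXPARAMCURVE; as a stand-in, the sketch below fits a positive Richards curve with base R's nls() to data simulated from that same curve. The parameterization is an assumption for illustration, not a reproduction of the study's fit:

# Fitting hr as a sigmoidal (Richards) function of Tmax - mean Tsel in base R.
richards <- function(x, A, k, i, m = 0.1) A / (1 + m * exp(-k * (x - i)))^(1 / m)
set.seed(7)
x   <- seq(-10, 6, by = 1)    # Tmax - mean Tsel (deg C), invented
hr  <- richards(x, A = 8.61, k = 0.17, i = -0.63) + rnorm(length(x), 0, 0.05)
fit <- nls(hr ~ richards(x, A, k, i), start = list(A = 8, k = 0.2, i = 0))
coef(fit)   # asymptote A, rate k, inflection i (shape m held at 0.1)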
Operative environmental temperatures, thermoregulatory efficiency, and thermal quality of the habitat

Mean maximum T e (open habitats) was much higher than mean minimum T e (shade, crevices) during both summer and autumn (summer: 42.6° ± 13.7°C vs. 29.5° ± 7.7°C, t 82 = 5.37, P < 0.001; autumn: 38.3° ± 12.0°C vs. 26.2° ± 7.3°C, t 82 = 5.61, P < 0.001). Average daily mean T e was similar during summer and autumn (two-way ANOVA; F 1,78 = 2.63; P = 0.11) and did not differ by microhabitat (two-way ANOVA; F 2,78 = 1.03; P = 0.36; Table 3). The average daily mean maximum T e was also similar across seasons (two-way ANOVA; F 1,78 = 2.30; P = 0.13) and microhabitats (two-way ANOVA; F 2,78 = 0.58; P = 0.56; Table 3). Average daily mean minimum T e did not vary by microhabitat (two-way ANOVA; F 2,78 = 0.11; P = 0.89), but autumn average daily mean minimum T e was about 3°C lower than during summer (two-way ANOVA; F 1,78 = 4.19; P = 0.04; Table 3). The effect of microhabitat on seasonal minimum T e was not significant (two-way ANOVA; microhabitat × season: F 2,78 = 2.00; P = 0.14). During the summer, mean operative temperatures exceeded the mean T sel and T sel75 of P. husabensis for 54% (h r = 7) of the daylight period on rocks and for 62% (h r = 8) on silt (Fig. 2). Likewise, during autumn, mean operative temperatures exceeded mean T sel and T sel75 for 50% of the daylight period (h r = 5; on both silt and rocks) (Fig. 2). Nevertheless, in shaded areas, lizards had access to minimum operative temperatures considerably lower than T sel for the majority of the day (Fig. 2). For example, during the summer, mean minimum T e only exceeded T sel for three hours on rock substrates and two hours on silt substrates in the afternoon, exceeding CT max for only two hours in the afternoon on silt (Fig. 2). During autumn, mean minimum T e was never above CT max and only exceeded T sel for 1 h in the afternoon on silt substrates. Minimum crevice T e in both seasons was always below T sel (Fig. 2). Consequently, by exploiting both shaded and sunny patches (active thermoregulation through sun-shade shuttling), P. husabensis was able to prolong its activity periods (nine hours in summer, eight hours in autumn; translating into h r = 4 and 2, respectively; Murray et al. 2016a) in comparison with our estimation using T e models (five hours in both summer and autumn; h r = 7 and 5, respectively).

The thermal quality of silt and rock substrates (d e) was highly variable throughout the lizard's diurnal activity period; during both seasons, low d e values (high thermal quality) showed a bimodal distribution, separated by high d e values (low thermal quality) during mid-day, with additional high d e values during the early morning and evening hours (Fig. 3). Lizard surface activity periods, particularly in summer, largely corresponded to periods of high thermal quality during the morning. However, thermally optimal open surface habitats available during the late afternoon were used only rarely by lizards (Murray et al. 2016a). Average d e values were similar between seasons (two-way ANOVA; F 1,78 = 0.10; P = 0.75) and microhabitat types (two-way ANOVA; F 2,78 = 1.91; P = 0.16; Table 3). For rock and silt substrates in the summer, the periods of optimal thermal quality occurred at 10:00-10:30 and 17:00-18:00 (Fig. 3). During autumn, the highest thermal quality became available around 11:00 and between 15:00-16:00 (silt) and 16:00-17:00 (rock) (Fig. 3). In addition, the thermal quality of rock and silt habitat during the hot mid-day remained higher (lower d e index) during autumn than during summer (Fig. 3). The d e index for rock crevices slowly declined through the morning hours and reached its lowest values (highest thermal quality) between 15:00 and 17:00 in the summer and between 14:00 and 17:00 during autumn (Fig. 3), suggesting these sites were good thermal refugia. Lizards were only rarely seen surface active during that time, especially in summer (Murray et al. 2016a).

Average T b was significantly higher in autumn (36.8° ± 1.6°C) than in summer (34.3° ± 1.7°C) (two-sample t test; t 108 = 7.75; P < 0.001; see also Murray et al. 2014). Both values were lower than the experimentally determined mean T sel (38.0° ± 1.4°C). The thermoregulatory accuracy (d b) of lizards on rock substrates was the same as that of lizards on silt substrates during the summer (two-sample t test; t 41 = −1.27; P = 0.22). Similarly, d b did not differ by substrate during autumn (two-sample t test; t 39 = −0.68; P = 0.50; Table 3). On average, P. husabensis showed a thermoregulatory accuracy that was more than three times greater during autumn (d b = 0.9° ± 1.1°C) than during summer (d b = 2.9° ± 1.7°C; two-sample t test; t 108 = −7.15; P < 0.001). In general, the effectiveness of thermoregulation (E) was high for P. husabensis and was similar among substrates within a season (Table 3).
However, E was consistently higher for lizards during autumn compared to summer (Table 3). Values of d e − d b were similarly high across substrates, but unlike the index E, this value decreased less during summer in comparison with autumn (Table 3).

Distribution

We obtained distributional records for P. husabensis dating from 1965 in museum collections and collected 25 additional specimens over the course of the current study (Appendix S1: Table S1). Together with tissue samples that we collected from the Khan Mine, Bloedkoppie, the eastern Swakop River, and Farm Palmenhorst, and our records from Hildenhof (no specimens; confirmed as P. husabensis by the presence of an opaque to semi-transparent lower eyelid covered with several small scales, a small tympanic shield, the absence of a lateral row of yellow spots, and genetic analysis), our efforts resulted in a combined dataset of 99 records. Reducing this combined dataset to one record per km² resulted in a final set of 42 populations, of which we visited 22 (51%) over the course of the current study (Fig. 1). We found P. husabensis populations at six localities with no previously published records, though all were within the known distribution (Appendix S1: Table S1). The current distribution of P. husabensis appeared to be mainly restricted to the canyons of the Khan and Swakop rivers and nearby isolated hills surrounded by vast plains (inselbergs). It occurred in the western Swakop from around 8 km west of its confluence with the Khan River (Goanikontes Rest Camp near Farm Hildenhof, voucher NHMUK 1988.510; Appendix S1: Table S1), extending eastward for roughly 75 km along both the Khan River (ZMB 83403; current study) and the Swakop River (ZMB 83405; current study). We also found vouchers collected from isolated populations outside the two riverbeds in the museum collections we examined (Roessing Mountain north of the Khan River; hills and mountains around the Langer Heinrich Mine, Tinkas, and Bloedkoppie south of the Swakop River). In between the two rivers, the species occurred on isolated hills, but only as far east as the Marble Portal (Fig. 1; Appendix S1: Table S1). Unfortunately, three specimens of the original paratype series (SMR 4421, 5311, 5315) could not be located in any museum and appear to be lost. Pedioplanis husabensis occurred strictly parapatrically to known, but as yet undescribed, species belonging to the P. undata complex ("P. inornata north/central" and "P. undata south"; Mayer and Berger-Dell'mour 1987, Berger-Dell'mour and Mayer 1989, Makokha et al. 2007, Conradie et al. 2012).

Extinction risk modeling

The relationship of h r as a function of T max minus the mean T sel of P. husabensis was best explained by a logistic Richards curve of the general form h r = A/(1 + m e^(−k(x − i)))^(1/m), where x = T max − mean T sel, and where A (8.61 ± 2.47; P < 0.001), k (0.17 ± 0.05; P < 0.001), i (−0.63 ± 3.76; P = 0.87), and m (0.1) are the asymptote, rate parameter, inflection point, and shape parameter, respectively (Fig. 4). By using a sigmoidal curve rather than a linear equation, as used in the study by Sinervo et al. (2011), we obtained two asymptotes (one approaching zero h r and one approaching the maximum daylight hours) and prevented h r from becoming negative or exceeding the maximum possible activity times for this diurnal species. The maximum value of h r averaged over the critical breeding season months for all recorded extant populations in 1975 was 2.44.
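Given the fitted curve, the predicted h r at any site follows by evaluating it at that site's T max − mean T sel and comparing against the critical value. The sketch below assumes the positive Richards form given above and uses an invented future temperature for illustration:

# Predicted hr from the reported parameter estimates (Richards form assumed).
richards_hr <- function(x, A = 8.61, k = 0.17, i = -0.63, m = 0.1) {
  A / (1 + m * exp(-k * (x - i)))^(1 / m)
}
crit_hr    <- 2.44                  # maximum hr among extant populations in 1975
x_future   <- 36.5 - 38.0           # hypothetical future Tmax minus mean Tsel
h_r_future <- richards_hr(x_future)
extirpated <- h_r_future > crit_hr  # TRUE = predicted extirpation at this site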
This means that no population of P. husabensis then occurred at a locality where on average more than 2.44 h per day was thermally unsuitable during the breeding season. This maximum value was estimated from the southeastern-most population of this species along the rocky banks of a tributary to the Swakop River (vouchers ZMB 83404, ZMB 83405; current study), a locality that has one of the highest mean maximum January air temperatures where P. husabensis is known to occur (Fig. 1). By the year 2050, the mean T max averaged for the critical reproductive period at this locality is predicted to be higher by 2.4°C (RCP 4.5) or 2.9°C (RCP 8.5) than in 1975, which would increase h r2050 to 3.22 (RCP 4.5) and 3.36 (RCP 8.5), respectively. If our original hypothesis were true that the species' range edges before the onset of climate change (i.e., 1975) were defined by the maximum temperatures occurring there, this increase in unsuitable hours for activity would push this lizard population to extinction. This ecophysiological hypothesis is a null hypothesis of sorts, namely that the species distribution is dictated by ecophysiology per se (a concept similar to the Grinnellian niche). In October 2014, our survey confirmed this population as extant. Similarly, the two northeastern-most populations of P. husabensis along the rocky margins of the Khan River (voucher ZMB 83403; current study, close to SMR 7158 collected in 1987) were confirmed to be extant in 2013. These two sites are the next hottest localities within the species' range, with h r1975 = 2.40 and 2.28, respectively. A clear cline of increasing modeled extinction risk from west to east is apparent for this species (Fig. 5). By the year 2050, our model based on RCP 4.5 data predicts that these eastern-most populations in the two rivers will become extirpated unless they can adapt to the changing conditions. Furthermore, the model predicts extirpation of populations from the inselbergs around the Langer Heinrich Mine, Tinkas, and Bloedkoppie south of the Swakop River. These patterns suggest that 14 of the 42 known populations are at risk of extirpation due to climate change (Fig. 5). If we consider the worst-case scenario regarding carbon dioxide emissions (RCP 8.5), this number rises to 17 populations (Fig. 5). By the year 2070, the predictions are even more severe: 17 (RCP 4.5) or 25 (RCP 8.5) of the 42 known populations (40% or 60%, respectively) may become extirpated as a result of rising temperatures (Fig. 5). At this point, mean T max is predicted to have risen by 2.8°C (RCP 4.5) or 4.2°C (RCP 8.5) at the hottest eastern-most locality in the Swakop River, resulting in daily periods of unsuitable conditions increased by more than 1 h on average under the worst-case scenario (h r2070 = 3.47; RCP 8.5) in comparison with 1975.

Fig. 5. Extinction risk of P. husabensis based on the assumption that activity restrictions due to climate warming will lead to local population extirpations. Occupancy likelihood was modeled for the years 2050 (A) and 2070 (B) under two different climate change scenarios (RCP 4.5 and 8.5). Circles represent all known, vouchered P. husabensis localities. Sites resurveyed in the current study are represented by solid circles; open circles stand for populations known from museum collections. Warmer colors symbolize a low occupancy likelihood, or high extinction risk.
The Hildenhof site, located near the cooler western edge of the species' distribution, is modeled to have a high likelihood of persistence, with an estimated h r1975 of 1.19 and a predicted h r of 1.51 (2050; RCP 8.5) or 1.73 (2070; RCP 8.5) in the future (Fig. 5). If lizards thermoregulate to achieve a range of preferred body temperatures (T sel) rather than a single value (e.g., Hertz et al. 1993), then using the mean T sel value as a threshold that separates suitable from unsuitable conditions tends to underestimate the potential of this species to cope with restrictions in activity due to rising temperatures. The model using T sel75 (39.1°C) instead of mean T sel (38.0°C) as the threshold resulted in a slightly different logistic Richards curve fit. However, h r values are only marginally reduced under this scenario, that is, with the maximum h r1975 at the eastern-most Swakop River site decreasing to 2.17 (instead of 2.44) and reaching 2.88 (RCP 4.5) or 3.27 (RCP 8.5) in the year 2070 (instead of 3.36 [RCP 4.5] or 3.47 [RCP 8.5]). Therefore, plasticity in behavioral thermoregulation, as measured by T sel75 relative to mean T sel (an index of thermoregulatory scope), is unlikely to have a dramatic impact on persistence.

DISCUSSION

We found that Pedioplanis husabensis inhabits a thermally harsh environment, but has a high thermal preference and is not even surface active during the full range of thermally favorable periods of the day in cooler parts of its range. During the hot summer, individuals regulate at body temperatures below the species' high T sel to avoid body temperature excursions near the critical thermal maximum CT max. Nevertheless, our ecophysiological model predicts substantial range reductions under even the most moderate climate warming scenarios.

Operative environmental temperatures and thermal quality of the habitat

Based upon our estimation of the T sel of P. husabensis, operative temperatures T e at the western limit of the species' range (Farm Hildenhof) would prevent this species from employing a thermoconforming strategy for substantial periods of each day. However, the ability to achieve physiologically optimal body temperatures T sel is a precondition for a lizard's survival, especially during the breeding season (Sinervo et al. 2010, 2011). As a consequence, active thermoregulation (i.e., sun-shade shuttling) and eventually retreat from the surface were necessary for P. husabensis throughout most of each day during our study period if the species was to avoid lethally hot or unsuitably cold body temperatures. If air temperatures (and T e) increase as predicted under climate change, it is likely that P. husabensis activity time will be reduced to a point where local populations will become extirpated. During times when surface T e were unsuitable, P. husabensis generally had access to rocky crevices and other retreat sites where temperatures were slightly below or within its T sel for a large proportion of the day, and where mean T e never exceeded its CT max. Although for some reptiles active thermoregulation around T sel may occur within retreats (e.g., Porter et al. 1973, Schall 1977), feeding and social activities are presumably impeded by staying within shelters, particularly for this heliothermic, insectivorous species. In the early morning hours, T e was generally below T sel in the overnight refugia in the crevices, and lizards became surface active around 8:00 (Fig. 3).
Surface activity abruptly declined once the thermal quality of substrates decreased (rock and silt d e in autumn, rock d e in summer) to levels below those of refugia around mid-day. Importantly, despite a bimodal pattern of low d e index values (high thermal quality) throughout the day, the surface activity pattern of P. husabensis was almost unimodal, and the species predominantly exploited the morning period of favorable temperatures in both seasons (Fig. 3).

Costs and benefits of thermoregulation

Cost-benefit models have been developed to assess when a thermoconforming strategy would be advantageous over thermoregulating for an ectotherm in situations when T e is below the preferred body temperature (Huey and Slatkin 1976). Vickers et al. (2011) extended this cost-benefit model of thermoregulation to include the costs of thermoregulating in environments where available T e exceeds the species' T sel (as is the case in our study system). Fitness costs of thermoregulation are lowest when T e = T sel, but rise drastically when T e > T sel up to CT max, particularly for heliotherms (Vickers et al. 2011). This means that lizard species with higher T sel would have a proportionately more rapid escalation of fitness costs, particularly since CT max is more conserved across species than is T sel (Sinervo et al. 2010, Araújo et al. 2013). As a result, the risk of reaching lethal body temperatures for lizards active at T e beyond T sel increases drastically with increasing thermal preference. Under this additional dimension of the Huey and Slatkin (1976) model, Vickers et al. (2011) found that thermoregulatory accuracy (d b) increased as environmental temperatures increased for three Carlia skink species in an arid woodland environment in Australia. However, our results contradict their deduction that thermoregulatory accuracy should generally be highest during favorable T e (low d e) until the point where T e is so high that lizards are forced to suspend surface activity. Rather, we found that thermoregulatory accuracy was significantly lower during summer, when mean minimum T e was significantly higher (by 3°C) than in autumn. In the lizard microhabitat of our study area, d e was 2-3 times higher (worse habitat thermal quality) than in Australia (Table 3; Vickers et al. 2011). At the same time, P. husabensis has a much narrower thermal safety margin between its high T sel and CT max in comparison with the three Australian Carlia species, with T sel = 25.4-32.3°C and CT max = 43.6°C (Greer 1980, Vickers et al. 2011). As a result, the measured T b values of these skinks were often above their T sel in a habitat with low thermal quality, while in P. husabensis, <5% of T b measurements were above mean T sel and none above T sel75 (Table 3; Murray et al. 2014). A similar scenario appeared in two Phrynosoma species from the Chihuahuan Desert, with environmental temperatures similar to the ones in our study (Lara-Reséndiz et al. 2015). These species had a lower T sel (32.5-36°C and 31.1-36.5°C) and a higher CT max (47.9°C; Prieto and Whitford 1971), and T b of the species was above T sel in up to 64.6% of cases (Lara-Reséndiz et al. 2015). In these examples, the wide range between T sel and CT max means that the lizard can maintain a T b above T sel yet still retain a thermal buffer minimizing the risk of approaching CT max, unlike what we found for P. husabensis.
It appears that in the hyper-arid Namib Desert, a threshold is reached for this species during summer beyond which the risks of overheating outweigh the benefits of thermoregulating to achieve T sel. Consequently, during the hottest period of the year, P. husabensis may prefer staying in the shade, reaching body temperatures slightly below T sel, rather than exposing itself to direct sun, which would first raise T b to preferred levels but then quickly approach lethal temperatures.

Other environmental factors

The almost unimodal activity pattern of P. husabensis indicates that thermally favorable operative temperatures were not the sole driver of surface activity. Recent work on the actively foraging teiid Aspidoscelis exsanguis in an arid ecosystem in New Mexico showed that the availability of moisture (in this case rainfall) influenced lizard activity and microhabitat use more than soil and air temperatures, and suggested that lizard populations will not only be impacted by temperature shifts but also by differences in moisture regimes (Ryan et al. 2015). Similarly, a long-term drought in California dramatically reduced juvenile recruitment among most known populations of the endangered blunt-nosed leopard lizard, Gambelia sila (Westphal et al. 2016), and future projections of reduced precipitation in California suggest it may drive extinction risk in this species. Moisture availability may also play a role in P. husabensis surface activity. Although not examined in the current study, moisture (here specifically in relation to fog events) may be a factor in the lack of P. husabensis activity during thermally favorable late afternoon periods. In the future, average maximum temperatures during the summer months are predicted to increase within the geographic range of P. husabensis, while annual precipitation is predicted to decrease even further (Dirkx et al. 2008, Niang et al. 2014).

When we apply the temperature data from our micro-scale study at Hildenhof to the extinction risk model (macro-scale), the comparatively high h r = 7 at Hildenhof in January 2013 is averaged down to h r1975 = 1.19 for the period between 1960 and 1990, and to h r2050 = 1.51 (RCP 8.5). At the hottest sites that P. husabensis currently inhabits in the Swakop River, h r1975 during the reproductive season was higher (2.44) than the Hildenhof h r1975, but lower than the h r1975 = 3.1 estimated for lacertids in general (Sinervo et al. 2010). That these eastern-most populations of P. husabensis were still present during our 2013-2017 resurveys, even after the observed temperature increase (Dirkx et al. 2008), implies that this species is capable of inhabiting more thermally extreme areas than what we found at Hildenhof or what the model predicted based on the temperature changes since 1975. However, whether the lizards in these populations are able to exploit the limited hours of suitable conditions that remain, or whether they are capable of counterbalancing the low thermal quality behaviorally, we cannot say. Alternatively, extinction may require several years of warm spells in succession, during which impaired reproduction and recruitment may bring some but not all populations to the extinction threshold, as appears to be the case in extinctions of Mexican phrynosomatid lizards (Sinervo et al. 2011). The maximum h r estimate of 3.1 for Lacertidae (Sinervo et al. 2010) is the average h r of 36 lacertid species.
Previous validation of the model and demonstration of local extinctions were conducted using mainly sit-and-wait foraging or omnivorous/frugivorous species (Zootoca vivipara, Liolaemus lutzae in Brazil, diverse liolaemid species in Argentina, diverse species of lizards in several families in Madagascar, and Liopholis spp. in Australia), which, based on measurements of field metabolic rate (Nagy et al. 1984), may have different physiological and ecological constraints compared to an active forager such as P. husabensis. Our results cannot unambiguously prove that P. husabensis can sustain extended periods of inactivity (h r). While our field observations suggest that the extant Hildenhof population experiences up to h r = 4 during the breeding season, the extinction risk model estimates that h r = 2.44 is the maximum that a population can tolerate in the long term and remain viable. This confirms the results of a computational model incorporating different theoretical percentages of shade cover within a lizard's habitat (Kearney 2013).

The T e models that we base our approach on here predict the equilibrium T b of a non-thermoregulating lizard at a specific location during a particular time (Bakken 1992). Critically, we acknowledge that the full range of thermoregulatory behavior that a lizard can employ (e.g., postural changes such as orientation to solar radiation, body flattening, or minimizing or maximizing body contact with a hot or cold surface by straightening/retracting the legs; physiological thermoregulation such as panting or expelling water from the cloaca) is not considered. All of these behaviors influence lizard T b (e.g., Stevenson 1985, Seely et al. 1988, Martín et al. 1995, DeNardo et al. 2004). Consequently, without considering these behavioral adjustments, the T e model would in most instances be likely to overestimate the lizard's body temperature under the extreme conditions in the Namib. Secondly, both the copper and PVC models used in our study generally underestimated lizard T b during the calibration experiment (see Results). These caveats are common problems in studies using T e models, and methods still need to be improved to account for them. Nevertheless, when applied in a modeling approach as in the current study, these potential sources of uncertainty are likely to be negligible (Sinervo et al. 2011).

When we thus apply our extinction risk model to the broader surroundings of the distribution of P. husabensis (assuming that suitable rocky habitat in the fog zone of the Namib with h r up to 2.44 is inhabitable by this species), we cannot entirely account for the small size of its current range. The small geographic extent of the species' distribution may be delineated not only by abiotic (i.e., temperature, rain/fog) but also by biotic (e.g., interspecific competition, predation) constraints. As noted above, the modeling method herein tests for the action of the Grinnellian niche in driving species extinctions due to ecophysiological limits being exceeded, but does not have power to reject the action of factors related to the Eltonian niche (e.g., competition, predation, parasitism). For example, each locality with apparently suitable habitat that we surveyed adjacent to the current range edges was inhabited by morphologically similar congeneric taxa of the P. undata complex. These taxa have habitat requirements and thermal physiology very similar to those of P. husabensis (Berger-Dell'mour and Mayer 1989, Branch 1998, Cunningham et al. 2012; S. Kirchhof, unpublished data).
The fact that a potential competitor occupies suitable areas outside of the current range of P. husabensis suggests that interspecific interactions may contribute toward defining current distribution boundaries for P. husabensis. Notably in this regard, Sinervo et al. (2010) could only accurately predict 16 of 24 extinctions of Sceloporus lizards in Mexico, and thus eight extinctions were unexplained by the null model of ecophysiology for climate change extinctions. It is noteworthy that at six of these eight sites, a range expansion of a warm-adapted congener had occurred, suggesting that climate-forced extinctions of the cold-adapted species may have arisen from the action of factors related to the Grinnellian niche (16 of the 24 populations that went extinct) and to the Eltonian niche, such as competition (six of the remaining eight populations that went extinct).

CONCLUSIONS

Our multidisciplinary approach (laboratory and field experiments, field surveys, utilization of museum records, and modeling) shows that behavioral and ecological data collected at a micro-scale level can greatly enhance macro-scale modeling approaches. Here, we show using museum records and ground-truthed data that, despite an increase in temperatures over the past decades, the range boundaries of Pedioplanis husabensis apparently have not shifted, indicating that local extirpations have not yet occurred. We document that our study species is capable of prolonging daily activity beyond what we estimated using the T e models by selectively moving within a heterogeneous landscape of open, sun-exposed, and shaded patches, all available at relatively small scales within its habitat. Based on our observations as well as the results of our model, the minimum amount of surface activity time necessary to sustain a viable population of this species has not yet been reached within the species' distribution. The active foraging mode of this species appears to be favorable for precise thermoregulation and allows it to exploit the environment under the extreme climatic conditions of Namibia's xeric west.

ACKNOWLEDGMENTS

This study was conducted under permission issued by the Ministry of Environment and Tourism (MET) of Namibia (Namibian Research/Collecting Permits 1710/2012, 1890/2014). This research was supported by the German Academic Exchange Service (DAAD) and the Elsa-Neumann-Stipendium des Landes Berlin (Humboldt University of Berlin, Germany). Research funding was further provided through an FRC individual grant to Ian W. Murray from the University of the Witwatersrand's Faculty of Health Sciences (South Africa) and an NRF/NCRST Namibia/South Africa Research Cooperation Programme grant (no. 89140) awarded to Duncan Mitchell and Gillian Maggs-Kölling (Gobabeb Research and Training Centre). Barry Sinervo and Donald B. Miles were supported by the National Science Foundation (EF-1241848). Ian W. Murray recognizes the support of the Claude Leon Foundation through a postdoctoral fellowship.
We thank the curators and collection managers of the following institutions for providing access to their collections and sharing locality data for their specimens: Museum für Naturkunde Berlin (Frank Tillack), Natural History Museum London (Patrick Campbell), Zoological Research Museum Alexander Koenig in Bonn (Wolfgang Böhme), Naturhistorisches Museum Wien (Georg Gassner, Silke Schweiger), Ditsong National Museum of Natural History Pretoria (Lemmy Mashinini, Klaas Manamela), Port Elizabeth Museum (Werner Conradie), IZIKO South African Museum Cape Town (Erika Mias), and the National Museum of Namibia in Windhoek (Mathilda Awases, Emma Uiras). We are thankful for the critical research and logistical support provided by Andrea Fuller. We are also very grateful to Cammy Ndaitwah, Banele Mngaza, Tomas Kleinert, Reyk Boerner, Titus Shuuya, and Novald Iiyambo for their help in the field. Hartwig Berger-Dell'mour shared his knowledge of the distribution and ecology of Pedioplanis husabensis and revisited some sites with us. We are deeply indebted to Aaron Bauer (Villanova University, Pennsylvania) and Jackie Childers (University of California, Berkeley, California), who were of major help during scientific discussions and the exchange of ideas, and who provided records of species from the P. undata complex. We thank Roessing Uranium Limited for providing temperature data from the Pointbill weather station. Our gratitude further goes to Juan Santos (St. John's University, New York), who supported us whenever we needed help with the model. Special thanks go to William R. Branch for his time, support, and knowledge. Furthermore, we are grateful to the entire staff of the Gobabeb Research and Training Centre in Namibia, especially Gillian Maggs-Kölling for her help and moral support, as well as Mary Seely for productive discussion and critical research support. We would further like to acknowledge Eugene Marais (National Museum of Namibia, Windhoek) for sharing his knowledge of interesting localities in Namibia. The help of two anonymous reviewers greatly improved the manuscript.