A Comparative Study of Genetic and Firefly Algorithms for Sensor Placement in Structural Health Monitoring

Optimal sensor placement (OSP) is an important task during the implementation of sophisticated structural health monitoring (SHM) systems for large-scale structures. In this paper, a comparative study between the genetic algorithm (GA) and the firefly algorithm (FA) in solving the OSP problem is conducted. To overcome the inapplicability of the FA to optimization problems with discrete variables, several improvements are proposed, including a one-dimensional binary coding system, the Hamming distance between any two fireflies, and a semioriented movement scheme; on this basis, a simple discrete firefly algorithm (SDFA) is developed. The capabilities of the SDFA and the GA in finding the optimal sensor locations are evaluated using two disparate objective functions in a numerical example with a long-span benchmark cable-stayed bridge. The results show that the developed SDFA can find the optimal sensor configuration with high reliability. The comparative study indicates that the SDFA outperforms the GA in terms of algorithm complexity, computational efficiency, and result quality. The optimization mechanism of the FA has the potential to be extended to a wide range of optimization problems.

1. Introduction

The performance deterioration and the total collapse of large-scale civil infrastructures induced by the environment and service loads highlight the importance of structural health monitoring (SHM) as a significant approach for the safe operation and reasonable maintenance of structures. SHM, which involves an array of sensors to continuously monitor structural behavior, along with the extraction of damage-sensitive features from these measurements and the evaluation of current system health by analysis methods, can be used for rapid condition screening and aims to provide reliable information regarding the integrity of the structure in near real time [1-3]. At present, successful deployment and operation of long-term SHM systems on both newly constructed and existing structures have been reported throughout the world [4-7]. In an SHM system, the sensor network provides the original information indicating structural behavior for further parameter identification; therefore, the efficiency of an SHM system relies heavily on the reliability of the data acquired by the sensor network on the structure. Owing to the complexity of large-scale structures, such as long-span bridges and high-rise buildings, the number of degrees of freedom (DOFs) used to characterize structural performance is on the order of thousands to tens of thousands. It is impossible to place sensors on all of the DOFs because of the high cost of data acquisition systems (sensors and their supporting instruments) and technology limitations [8, 9]. Therefore, selecting an optimal sensor placement (OSP) is a critical task before a sophisticated SHM system is designed and implemented on a real structure [10].
The problem of determining OSP has been investigated using a large number of interesting approaches and criteria in the past few decades, as can be seen from the abundant literature. Among them, conventional gradient-based local optimization methods are unable to efficiently handle multiple local optima and may present difficulties in estimating the global minimum. They lack reliability in dealing with the OSP problem because convergence to the global minimum is not guaranteed [11, 12]. Thus, the shift of OSP research away from classical deterministic optimization methods toward combinatorial optimization methods based on biological and physical analogues has been motivated by the high computational efficiency and success rate of intelligent optimization methods. Many contributions regarding the adoption of intelligent optimization methods for the OSP problem have been made recently. The genetic algorithm (GA), based on the Darwinian principle of natural selection, is a representative example and has proved to be a powerful tool for OSP. Yao et al. [13] demonstrated that the GA can replace the effective independence (EfI) method when using the determinant of the Fisher information matrix (FIM) as the objective function. Subsequently, a number of improvements have been employed to overcome drawbacks of the original GA. To accelerate convergence, the simulated annealing (SA) algorithm was integrated into the GA by Worden and Burrows [14] and by Hwang and He [15] to extract the OSP in structural dynamic tests. To keep the sensor number constant during the genetic operations, the coding system was replaced by decimal two-dimensional array coding [16] or dual-structure coding [17]. With the purpose of improving solution quality and convergence speed, two-quarter selection was adopted by Yi et al. [18, 19]. The GA was also extended to optimal wireless sensor placement, which involves many constraints [20-22]. Particle swarm optimization (PSO), which is inspired by the movement of organisms in bird flocking or fish schooling, is another stochastic search technique that has been successfully applied to the OSP problem [23, 24]. Furthermore, the monkey algorithm (MA), which imitates the mountain-climbing process of monkeys, is considered an effective numerical method for solving complex multiparameter optimization problems. Several changes developed by Yi et al. made the MA excellent at generating optimal solutions, as well as at providing fast convergence in dealing with complicated OSP problems [8, 25, 26].
Although the aforementioned methodologies demonstrated a strong capability, to some extent, in finding acceptable solutions to the OSP problem, their complex parameters and searching processes make those methods difficult to operate and sensitive to the application environment. The complexity of optimal sensor configuration for large-scale structures reveals the necessity of developing efficient and robust algorithms to accurately explore the optimum solution. Recently, a new metaheuristic search algorithm, referred to as the firefly algorithm (FA), was developed by Yang [27, 28]. The FA is based on the idealized behavior of the flashing characteristics of fireflies: a firefly tends to be attracted by other fireflies with higher flash intensities. Previous studies indicate that the FA is particularly suited for parallel implementation and may outperform existing algorithms, such as PSO, the GA, SA, and differential evolution, in terms of efficiency and success rates [28, 29]. At present, the FA has been applied to a large number of optimization problems, including continuous, combinatorial, constrained, multiobjective, and dynamic optimization [30]. However, the coding system and the movement scheme of the FA make it suitable only for global numerical optimization problems with continuous variables. In this paper, some improvements, involving the coding system, a suitable distance measure, and the movement scheme, are introduced, and a simple discrete firefly algorithm (SDFA) is proposed based on the FA, such that the outstanding optimization mechanism of the FA becomes applicable to the OSP problem with discrete variables. The remainder of this paper is organized as follows: Section 2 presents a detailed description of the SDFA after an outline of the FA. Section 3 gives a brief introduction to the GA with the aim of facilitating the performance comparison between the SDFA and the GA in the subsequent numerical simulations. Section 4 presents the comprehensive evaluation of the SDFA for OSP with different criteria, employing a long-span benchmark cable-stayed bridge. Finally, conclusions are drawn in Section 5.
2. Firefly Algorithm

2.1. Outline of the Firefly Algorithm. The FA mimics real fireflies' swarm behaviors of communication, searching for food, and finding mates. The process of exploring the optimal solution is modeled in such a way that a firefly with low light intensity is attracted by a firefly with high light intensity and moves toward it, so that the darker firefly gradually attains a higher light intensity. To establish the mathematical model of this movement, three hypotheses are adopted: (1) the attraction between two fireflies is governed only by the light intensity; (2) the light intensity of a firefly, which is determined by the firefly's location, is proportional to the objective function; and (3) the light intensity decreases with increasing distance, such that a brighter firefly can only attract the fireflies within its attractiveness range. The movement of firefly $i$ toward firefly $j$ is then formulated as

$$x_i^{t+1} = x_i^t + \beta_0 e^{-\gamma r_{ij}^2} \left( x_j^t - x_i^t \right) + \alpha \varepsilon_i^t, \tag{1}$$

where $x_i$ and $x_j$ represent the locations of firefly $i$ and firefly $j$, respectively, the superscript $t$ denotes time, $\gamma$ is the light absorption coefficient, $r_{ij}$ is the distance between fireflies $i$ and $j$, and $\beta_0$ is the attractiveness at $r = 0$. The third term in (1) is a random vector, where $\alpha$ is a random parameter generated from the interval [0, 1] and $\varepsilon_i^t$ denotes a vector of random numbers drawn from a Gaussian distribution. Thus, the movement of firefly $i$ defined by (1) is not always directed exactly toward firefly $j$. More details can be found in references [25, 29, 31, 32]. The location of a firefly is simply coded using a spatial coordinate, which consists of real vectors and continuous variables. Accordingly, the distance between any two fireflies $i$ and $j$ is generally defined by the Euclidean distance $r_{ij} = \| x_i - x_j \|_2$, that is, the $\ell_2$-norm. However, it is well known that, from a mathematical point of view, the OSP is a specialized knapsack problem in which certain DOFs are selected to receive sensors such that the structural performance can be described effectively. The parameters being optimized are therefore the states indicating whether each DOF carries a sensor, and these are discrete variables. As a result, the coding system and the movement strategy of the FA are inapplicable to the OSP problem with discrete variables. It is essential to modify the original FA so that its underlying optimization concept can be transferred to the OSP problem. Here, some improvements are integrated into the FA, and the SDFA is proposed to explore the optimal sensor configuration for structural health monitoring.

2.2. Simple Discrete Firefly Algorithm. Originating from the FA, the SDFA comprises three parts: the coding system, the definition of the distance between two fireflies, and the movement scheme. The coding system defines the code of each firefly in the feasible space. The distance definition describes the distance between two fireflies so that movement can be realized. The movement scheme gives the evolution rules of the SDFA. All three parts are introduced in the next three sections.
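Before turning to those three parts, the continuous update rule in (1) can be made concrete with a short sketch. This is a minimal Python illustration under our own naming and default parameter choices, not code from the paper:

```python
import numpy as np

def fa_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """One continuous FA movement step following Eq. (1):
    firefly i is pulled toward a brighter firefly j and perturbed randomly."""
    rng = np.random.default_rng() if rng is None else rng
    r_ij = np.linalg.norm(x_i - x_j)            # Euclidean distance r_ij
    beta = beta0 * np.exp(-gamma * r_ij**2)     # attractiveness decays with distance
    eps = rng.standard_normal(x_i.shape)        # Gaussian random vector
    return x_i + beta * (x_j - x_i) + alpha * eps

# Toy usage: a firefly at the origin moves toward a brighter one at (1, 1).
x_new = fa_move(np.zeros(2), np.ones(2))
```

The SDFA developed below replaces the coordinate representation, the Euclidean distance, and this additive update with discrete counterparts.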
2.2.1. Coding System. In the community applying the GA to find the optimal sensor configuration, a widely used coding approach is the one-dimensional binary coding system. Each individual in the population is coded as a one-dimensional binary string. In this coding system, all of the candidate DOFs are put in a line. If the $k$th DOF is occupied by a sensor, the value of the $k$th element in the string is 1; if the $k$th DOF is not occupied by a sensor, the value of the $k$th element is 0. The total number of ones in the string is equal to the number of sensors to be placed. This coding system is intuitive and easy to initialize and operate. The SDFA therefore employs the one-dimensional binary coding system: each firefly in the population denotes a feasible sensor configuration, and the location of each firefly is represented by a one-dimensional binary string, as shown in Table 1. In the example of Table 1, the 2nd, 3rd, 6th, and 9th DOFs are occupied by sensors. The total numbers of candidate DOFs and sensors are 10 and 4, respectively, because the length of the string is 10 and the total number of ones in this firefly code is 4. This coding method is very simple and intuitive, which is beneficial for the subsequent optimization operations. When initializing the firefly population, the first $s$ elements in a string (where $s$ is the number of sensors) are set to 1, and the remaining elements are set to 0; a shuffle algorithm is then applied five times, so that the fireflies are distributed over the feasible solution space as uniformly as possible.

2.2.2. Distance between Two Fireflies. In the FA, the positions of the fireflies are defined in a Cartesian coordinate system, such that the distance between two fireflies can be easily calculated by the $\ell_2$-norm. In the SDFA, however, the positions of the fireflies are represented by binary strings, so the $\ell_2$-norm is no longer suitable for indicating the distance between two fireflies. The Hamming distance [31], which counts the number of positions at which the corresponding symbols differ between two strings of equal length, matches the problem at hand and is adopted to indicate the distance between any two fireflies. For fireflies $i$ and $j$ with binary strings $X_i = (x_{i1}, \ldots, x_{in})$ and $X_j = (x_{j1}, \ldots, x_{jn})$, the Hamming distance is equal to the number of ones in $X_i \oplus X_j$, that is,

$$r_{ij} = \sum_{k=1}^{n} \left( x_{ik} \oplus x_{jk} \right), \qquad x \oplus y = (x \vee y) \wedge \neg (x \wedge y), \tag{2}$$

where $\oplus$ means XOR, $\vee$ is logical disjunction, $\wedge$ represents logical conjunction, and $\neg$ denotes logical negation. In fact, the Hamming distance represents the number of incongruous sensors between two sensor configurations and equals the number of elements whose values differ at corresponding locations in the two strings. Thus, the Hamming distance between firefly $i$ and firefly $j$ can be rewritten as

$$r_{ij} = \sum_{k=1}^{n} \left| x_{ik} - x_{jk} \right|. \tag{3}$$

Generally, the number of sensors used for structural monitoring is predetermined, such that the total number of ones in any firefly is the same. If there is an incongruous sensor in firefly $i$, located at a DOF that carries no sensor in firefly $j$, then there must correspondingly be an incongruous sensor in firefly $j$, located at a DOF that carries no sensor in firefly $i$; each such pair contributes two to the distance. As a result, the distance $r_{ij}$ defined by (3) is always a nonnegative even number and is two times the number of incongruous sensors; this property is beneficial for establishing the movement scheme. The maximum value of $r_{ij}$ is $2s$ (where $s$ is the total number of predetermined sensors), which implies that the sensors of firefly $i$ and the sensors of firefly $j$ are deployed on completely different DOFs of the structure. The minimum value of $r_{ij}$ is zero, which indicates that the sensors of firefly $i$ and the sensors of firefly $j$ occupy the same DOFs. Thus, the distance $r_{ij}$ lies in the range $[0, 2s]$. Table 2 gives an example of two codes for firefly $i$ and firefly $j$: the number of ones in $X_i \oplus X_j$ is four, and the number of incongruous sensors is two, so the corresponding Hamming distance between firefly $i$ and firefly $j$ is four.
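Assuming configurations are stored as NumPy 0/1 arrays, a minimal sketch of this distance computation (variable names are ours):

```python
import numpy as np

def hamming_distance(x_i, x_j):
    """Eqs. (2)-(3): number of positions at which two equal-length
    binary sensor configurations differ."""
    return int(np.sum(x_i != x_j))

# The Table 1 configuration versus a variant with one relocated sensor:
x_i = np.array([0, 1, 1, 0, 0, 1, 0, 0, 1, 0])  # sensors at DOFs 2, 3, 6, 9
x_j = np.array([0, 1, 0, 1, 0, 1, 0, 0, 1, 0])  # sensor moved from DOF 3 to DOF 4
assert hamming_distance(x_i, x_j) == 2          # one incongruous sensor -> distance 2
```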
2.2.3. Movement Scheme. In the Cartesian coordinate system, the firefly movement is easily performed by changing the coordinate values, as formulated by (1); the movement distance is continuous and proportional to the attractiveness. In the SDFA, however, the positions of the fireflies are defined by one-dimensional binary strings whose values are 0 and 1, so movement, which means changing values in the string, can only be realized by flipping 1 to 0 or 0 to 1. On the other hand, the divergence between two fireflies originates from the incongruous sensors, so relocating some incongruous sensors in a firefly may enhance its light intensity. Therefore, in the SDFA, the movement of firefly $i$ is performed by simultaneously changing some elements of its string from 1 to 0 and other elements from 0 to 1. To keep the number of ones constant, the number of 1-to-0 flips must equal the number of 0-to-1 flips; this common number of flips is defined as the movement distance $m$. Under this definition, the nearest and farthest movement distances from firefly $i$ to firefly $j$ are 0 and $0.5 r_{ij}$, respectively. To introduce stochastic searching, the movement distance $m$ from firefly $i$ to firefly $j$ is selected as a random integer from the interval $[0, 0.5 r_{ij}]$.

Generally, the contribution of a sensor located on a given DOF to the objective function cannot be predetermined, so it is difficult to judge which sensor should be relocated. In the present paper, a semioriented movement scheme is therefore proposed as follows.

Step 1. Calculate the difference between the strings of firefly $i$ and firefly $j$:

$$\Delta_{ij} = X_i - X_j. \tag{5}$$

Step 2. Randomly select $m$ elements of $\Delta_{ij}$ with a value of 1 and change these elements to $-1$; likewise, randomly select $m$ elements of $\Delta_{ij}$ with a value of $-1$ and change these elements to 1; the remaining elements are set to zero. The operated difference is denoted by $[\Delta_{ij}]_m$.

Step 3. Replace the string of firefly $i$ by

$$X_i^{t+1} = X_i^t + [\Delta_{ij}]_m. \tag{6}$$

In fact, the difference in light intensity between firefly $i$ and firefly $j$ arises from the $\Delta_{ij}$ term, which is induced by the incongruous sensors. The operated elements of $\Delta_{ij}$ are selected randomly because it is difficult to predict the influence of each element on the objective function. Therefore, updating $X_i$ by $[\Delta_{ij}]_m$, which amounts to relocating some incongruous sensors, enhances the light intensity of firefly $i$ with high probability. However, the movement cannot guarantee that firefly $i$ moves in a desirable direction; hence the scheme is described as a semioriented movement scheme. These random factors in the movement scheme are also in accord with the random term in (1).
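The scheme can be sketched as follows, under our reading that the unselected elements of the operated difference are set to zero so that exactly $m$ incongruous sensors are relocated; the function name and the uniform choice of $m$ are illustrative assumptions:

```python
import numpy as np

def semioriented_move(x_i, x_j, rng=None):
    """Semioriented move of Steps 1-3: relocate a random number of
    incongruous sensors of x_i toward x_j, keeping the sensor count fixed."""
    rng = np.random.default_rng() if rng is None else rng
    delta = x_i - x_j                        # Eq. (5): +1 = removal candidate, -1 = addition candidate
    removable = np.flatnonzero(delta == 1)   # DOFs occupied only in x_i
    addable = np.flatnonzero(delta == -1)    # DOFs occupied only in x_j
    if len(removable) == 0:                  # identical configurations: nothing to move
        return x_i.copy()
    m = rng.integers(0, len(removable) + 1)  # Eq. (4): random distance in [0, 0.5*r_ij]
    operated = np.zeros_like(delta)          # Step 2: keep m flipped signs, zero the rest
    operated[rng.choice(removable, m, replace=False)] = -1
    operated[rng.choice(addable, m, replace=False)] = 1
    return x_i + operated                    # Eq. (6): relocates exactly m sensors

x_new = semioriented_move(np.array([1, 1, 0, 0, 1, 0]), np.array([0, 1, 1, 0, 0, 1]))
assert x_new.sum() == 3                      # number of sensors is preserved
```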
3. Brief Description of the Genetic Algorithm

The GA is briefly described here to facilitate the comparison in the next section. The GA, first proposed by Holland in 1975 [32], imitates natural evolution by assigning a fitness value to each individual in the problem and applying the principle of survival of the fittest [33]. Each individual has a set of chromosomes that can be mutated and altered. Solutions can be represented by one-dimensional binary codes, dual-structure codes, or decimal codes. In this paper, the dual-structure coding method is employed to maintain a constant number of sensors. The evolution, which usually starts from a population of randomly generated individuals, is an iterative process that advances toward the next generation by applying genetic operators (crossover and mutation). An individual in the new population is generated by performing crossover on two individuals selected from the current population and then mutation on the generated individual [34]. The two individuals selected for crossover are chosen according to their fitness values: an individual with a good fitness value has a high probability of being chosen. The new generation of individuals is then used in the next iteration of the algorithm. Commonly, the iteration terminates either when a maximum number of generations has been produced or when a satisfactory fitness level has been reached in the population. The GA has a distinct advantage over traditional optimization techniques, which start from a single point in the solution space, because it searches from a population of candidate solutions. Details about the GA have been presented in references [16, 33, 34]. A minimal sketch of this generational loop follows.
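The sketch below is generic reference code, not the paper's implementation: it uses one-point crossover with a repair step and a single-sensor-swap mutation as simple stand-ins for the dual-structure coding actually employed, and the fitness function is a placeholder:

```python
import random

def repair(ind, n_sensors):
    """Restore a fixed number of ones after crossover (placement constraint)."""
    ones = [i for i, b in enumerate(ind) if b]
    zeros = [i for i, b in enumerate(ind) if not b]
    while len(ones) > n_sensors:
        ind[ones.pop(random.randrange(len(ones)))] = 0
    while len(ones) < n_sensors:
        i = zeros.pop(random.randrange(len(zeros)))
        ind[i] = 1
        ones.append(i)
    return ind

def ga_generation(pop, fitness, n_sensors, p_mut=0.1):
    """One generation: fitness-biased selection, crossover, mutation."""
    weights = [fitness(ind) for ind in pop]
    nxt = []
    while len(nxt) < len(pop):
        p1, p2 = random.choices(pop, weights=weights, k=2)
        cut = random.randrange(1, len(p1))
        child = repair(p1[:cut] + p2[cut:], n_sensors)  # one-point crossover + repair
        if random.random() < p_mut:                     # mutation: relocate one sensor
            ones = [i for i, b in enumerate(child) if b]
            zeros = [i for i, b in enumerate(child) if not b]
            child[random.choice(ones)] = 0
            child[random.choice(zeros)] = 1
        nxt.append(child)
    return nxt

# Toy usage: 20 individuals, 10 candidate DOFs, 4 sensors, placeholder fitness.
pop = [repair([0] * 10, 4) for _ in range(20)]
for _ in range(50):
    pop = ga_generation(pop, fitness=lambda ind: 1 + sum(i * b for i, b in enumerate(ind)), n_sensors=4)
```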
4. Numerical Example

4.1. Benchmark Cable-Stayed Bridge. The structure investigated is a long-span benchmark cable-stayed bridge (Figure 1), in which the stay cables transfer loads from the main girder to the two towers [36]. To understand the behavior of the bridge, an updated three-dimensional finite element model is provided by the Center of Structural Monitoring and Control at the Harbin Institute of Technology [35]. The towers and the main girder were simulated by three-dimensional beam elements, and the cables were simulated by linear elastic link elements. The concrete transverse beam placed every 2.9 m was simplified as a mass element. The main girder was modeled as floating on the main tower, and all of the towers were fixed to the ground. The longitudinal restriction effect of the rubber supports was simulated by linear elastic spring elements. The model consists of 564 beam elements, 88 link elements, 160 mass elements, and 8 spring elements. Modal analysis has been conducted, and the results can be found in [35].

4.2. Results and Discussion. The proposed SDFA is evaluated with two frequently used, but quite different, objective functions. The first objective function is the modal strain energy (MSE); with this objective, the OSP becomes a maximization problem. The second objective function involves the modal assurance criterion (MAC), which makes the OSP a minimization problem. More importantly, a comparative study is conducted between the SDFA and the GA in terms of computational efficiency and result quality. To this end, the GA is also applied to find the optimal sensor configuration under the same conditions.

4.2.1. Optimization Based on Modal Strain Energy. Generally, it is desirable that most structural information be obtained through the set of sensors deployed on a structure, such that the structural behavior can be described well. Structural condition evaluation approaches based on structural mode shapes and their derivatives have been comprehensively explored. The MSE provides a rough measure of the dynamic contribution of each candidate sensor location to the target mode shapes and indicates which DOFs capture most of the relevant dynamic features of the structure. The MSE helps to select sensor positions with possibly large amplitudes, which can increase the signal-to-noise ratio and improve the reliability of the mode identification results [9, 16]. Therefore, the MSE is selected as the first objective function. Supposing that the mode shape matrix of a structure is $\Phi = [\phi_1, \phi_2, \ldots, \phi_N]$ ($N$ is the number of mode shape vectors) and that $s$ denotes the set of measured DOFs, the MSE can be expressed as

$$\mathrm{MSE} = \sum_{j=1}^{N} \sum_{i \in s} \sum_{v \in s} \phi_{ij} K_{iv} \phi_{vj}, \tag{7}$$

where $\phi_{ij}$ is the $i$th component of the $j$th mode shape, $\phi_{vj}$ denotes the $v$th component of the $j$th mode shape, $K_{iv}$ represents the stiffness coefficient between the $i$th DOF and the $v$th DOF, and $i \in s$ and $v \in s$ state that $i$ and $v$ are restricted to the locations where sensors are placed.
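As a concrete reading of (7), the following sketch evaluates the MSE of a candidate configuration; `Phi` and `K` are random placeholders standing in for the bridge model's mode shape and stiffness matrices:

```python
import numpy as np

def mse_objective(sensor_mask, Phi, K):
    """Eq. (7): modal strain energy summed over the target modes,
    with both DOF indices restricted to the instrumented set s."""
    s = np.flatnonzero(sensor_mask)          # indices of DOFs carrying sensors
    Phi_s = Phi[s, :]                        # mode shape components at sensor DOFs
    K_ss = K[np.ix_(s, s)]                   # stiffness coefficients between sensor DOFs
    return float(np.trace(Phi_s.T @ K_ss @ Phi_s))

# Toy usage with placeholder matrices (10 DOFs, 3 target modes).
rng = np.random.default_rng(0)
Phi = rng.standard_normal((10, 3))
K = np.eye(10)                               # placeholder stiffness matrix
x = np.array([0, 1, 1, 0, 0, 1, 0, 0, 1, 0])  # candidate configuration from Table 1
print(mse_objective(x, Phi, K))
```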
Indeed, the improvements applied in the SDFA further simplify the FA, and only one parameter (the number of fireflies) needs to be preset. After a parametric study, the firefly number is set to 100, which allows the algorithm to achieve its best performance. Unlike the SDFA, the GA has several problematic parameters, such as the population size and the probabilities of selection, crossover, and mutation; parametric studies are also conducted for these, and appropriate values are determined. The simplicity and easy implementation of the SDFA are apparent. Four scenarios with 20, 25, 30, and 35 sensors are simulated. As with other heuristic optimization methods, the results extracted by the SDFA and the GA depend heavily on the randomly generated initial population; therefore, to reduce the influence of the initial individuals, the SDFA and the GA are each run 10 times with different stochastic initial populations in every scenario. The best iteration progress of the SDFA and the GA with 25 sensors is displayed in Figure 2, and the optimal locations of 25 sensors extracted by the SDFA and the GA for maximizing the MSE are illustrated in Figures 3 and 4, respectively. The statistical results of the 10 runs for the four scenarios are listed in Table 3. It can be seen from Figure 2 that both the SDFA and the GA converge to the global optimum. In both Figures 2(a) and 2(b), the values of the objective function in the population increase with the number of generations, and the average and minimum values of the objective function simultaneously approach the maximum value, which indicates good optimum-exploring performance; the effectiveness of the improvements adopted in the SDFA is thus validated. In the SDFA, the maximum MSE value converges to a constant after 62 generations, at high speed; in the GA, however, convergence requires 188 generations, at an unacceptably low speed. Although increasing the number of individuals may reduce the number of generations required, the time per generation becomes longer; as a result, the GA spends a longer time finding the optimal solution than the SDFA. The high computational efficiency of the SDFA is thus revealed. Comparing Figures 3 and 4, it can be found that the optimal sensor locations extracted by the SDFA are distributed uniformly along the span, whereas the sensors in the optimal configuration found by the GA crowd near the left tower, so the vibration and mode shapes of the right part of the bridge cannot be described clearly. Thus, the sensors in the optimal configuration extracted by the SDFA are used with high efficiency, and the identified mode shapes are easier to visualize. The statistical results of each set of ten optimal solutions are listed in Table 3. In the table, the mean value and the maximal value represent the average and the maximum of the ten optimal solutions, respectively, indicating the quality of the optimal solutions; the standard deviation denotes the variability of the ten best solutions, indicating the robustness of the algorithm. In every scenario listed in Table 3, the mean and maximum MSE values found by the SDFA are larger than those explored by the GA; thus, the solutions found by the SDFA have better quality than those extracted by the GA. Simultaneously, the standard deviations of the 10 results found by the SDFA in all four scenarios are smaller than those of the GA, which shows that the SDFA can solve the OSP problem with high reliability and strong robustness.

4.2.2. Optimization Based on the Modal Assurance Criterion. Vibration-based structural condition assessment methodologies require that the measured mode shapes be distinguishable from each other, such that they can be reliably identified. The modal assurance criterion (MAC) proposed by Carne and Dohrmann [16] provides a simple metric for checking the linear dependence of the mode shapes. A small maximum off-diagonal term of the MAC matrix implies less correlation between the corresponding mode shape vectors and high distinguishability among the identified mode shapes. Thus, the MAC off-diagonal terms are adopted to evaluate the sensor configuration. The MAC is defined as

$$\mathrm{MAC}_{ij} = \frac{\left( \phi_i^T \phi_j \right)^2}{\left( \phi_i^T \phi_i \right)\left( \phi_j^T \phi_j \right)}, \tag{8}$$

where $\phi_i$ and $\phi_j$ represent the $i$th and $j$th column vectors of the matrix $\Phi$, respectively, and the superscript $T$ denotes the transpose of the vector. With this definition, the values of the MAC range from 0 to 1, where 0 indicates that the modal vectors are easily distinguishable and 1 indicates that they are fairly indistinguishable [19].

Following the numerical simulation in Section 4.2.1, the same parameters of the SDFA and the GA are adopted. The same four scenarios are investigated, and each scenario is again calculated 10 times by the two methods. The best convergence processes of the two optimization approaches are shown in Figure 5. The MAC values computed from the best sensor configurations obtained by the two methods are illustrated in Figure 6. To clearly show the MAC values of each mode, the maximum MAC off-diagonal values in each of the modes calculated by the SDFA and the GA are compared in Figure 7. The optimal sensor configurations found by the SDFA and the GA are displayed in Figures 8 and 9, respectively. The statistical data of the 10 runs in the four scenarios are also listed in Table 4; the mean value, the minimal value, and the standard deviation have meanings similar to those in Table 3.
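A companion sketch for (8) computes the maximum off-diagonal MAC term of a candidate configuration from the sensor-restricted mode shapes; this is the quantity to be minimized (placeholder data again):

```python
import numpy as np

def max_offdiag_mac(sensor_mask, Phi):
    """Eq. (8): MAC matrix of the mode shapes restricted to sensor DOFs;
    returns the largest off-diagonal term (smaller is better)."""
    Phi_s = Phi[np.flatnonzero(sensor_mask), :]   # rows at instrumented DOFs
    G = Phi_s.T @ Phi_s                           # Gram matrix: G[i, j] = phi_i^T phi_j
    mac = G**2 / np.outer(np.diag(G), np.diag(G))
    np.fill_diagonal(mac, 0.0)                    # diagonal terms are always 1; ignore
    return float(mac.max())

rng = np.random.default_rng(0)
Phi = rng.standard_normal((10, 3))                # placeholder mode shapes
x = np.array([0, 1, 1, 0, 0, 1, 0, 0, 1, 0])
print(max_offdiag_mac(x, Phi))
```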
From the convergence process in Figure 5(a) and the optimal results in Figure 6(a), the strong ability of the SDFA to search for the global optimal solution is demonstrated again, this time on a minimization problem. When the iteration progress in Figure 5(a) is compared with that in Figure 5(b), the higher computational efficiency of the SDFA is further validated. Examining the optimization results illustrated in Figures 6, 7, 8, and 9, a more desirable sensor configuration is acquired when using the SDFA. The statistical results listed in Table 4 also indicate that the SDFA has higher reliability and stronger robustness than the GA.

5. Conclusions

Because finding the optimal sensor locations under a given evaluation criterion is a complicated nonlinear optimization problem, traditional optimization methods often encounter insurmountable difficulties in solving it. Intelligent optimization algorithms such as the GA and the FA provide powerful approaches for overcoming these obstacles. Before implementing a comparative study between the GA and the FA regarding their performance in finding the optimal sensor configuration, some improvements were developed based on the basic FA, and the SDFA was proposed. Provided that all candidate DOFs are accessible in real-world practice, the developed sensor placement method is applicable to any type of tethered sensor. The performance of the SDFA and the GA was compared using a numerical example with a long-span benchmark cable-stayed bridge. The conclusions are as follows.

(1) The one-dimensional binary coding system and the Hamming distance can rationally describe the status of fireflies in the feasible sensor configuration space. The semioriented movement scheme provides an effective tool for transferring the original movement defined in the Cartesian coordinate system to the one-dimensional binary coding system. These improvements make the underlying optimization mechanism of the FA applicable to discrete optimization problems.

(2) In the case study, the improved SDFA shows good performance in both the maximization and the minimization problem. The simulation results indicate that the SDFA has smooth convergence progress. The effectiveness of the proposed improvements is validated, and the strong capability of the FA in finding the global optimum is also revealed.

(3) Compared with the widely accepted GA, the SDFA, which has only one problematic parameter, can be implemented more easily. Under both the MSE and the MAC criteria, the SDFA shows computational efficiency and robustness superior to those of the GA, and the optimal solution extracted by the SDFA is more desirable than that provided by the GA.

It should be noted that all the analyses in this paper are conducted on assumed theoretical models, while the real environment is more complex. Advanced SHM requires a more versatile sensor configuration to comprehensively understand the performance of a structure; therefore, developing algorithms for multiobjective optimization may be a good direction for future work.

Figure 1: Overview of the cable-stayed bridge.
Figure 3: The optimal sensor configuration extracted by SDFA with the MSE.
Figure 4: The optimal sensor configuration extracted by GA with the MSE.
Figure 5: Iteration progress of the objective function.
Figure 6: MAC values obtained from the optimal sensor configuration.
Figure 7: Maximum MAC off-diagonal values in each of the modes.
Figure 8: The optimal sensor configuration extracted by SDFA with the MAC.
Figure 9: The optimal sensor configuration extracted by GA with the MAC.
Table 1: An example of a firefly code.
Table 2: Codes for firefly i and firefly j.
Table 3: Comparison of optimization results based on the MSE.
Table 4: Comparison of optimization results based on the MAC.
Girdling behavior of the longhorn beetle modulates the host plant to enhance larval performance

Background: Preingestive behavioral modulations of herbivorous insects on their host plants are abundant across insect taxa. These behaviors are suspected to serve functions such as deactivation of host plant defenses, nutrient accumulation, or modulation of plant-mediated herbivore interactions. To understand the functional consequences of behavioral modulation by an insect herbivore, we studied the girdling behavior of Phytoecia rufiventris Gautier (Lamiinae; Cerambycidae) on its host plant Erigeron annuus L. (Asteraceae), which is performed before endophytic oviposition in the stem.

Results: The girdling behavior significantly increased larval performance in both field monitoring and a lab experiment. Compared with non-girdled stems, the upper part of the girdled stem exhibited a lack of jasmonic acid induction upon larval attack, lowered protease inhibitor activity, and accumulated sugars and amino acids. The girdling behavior had no effect on the larval performance of a non-girdling longhorn beetle, Agapanthia amurensis, which also feeds on the stem of E. annuus during the larval phase. However, the girdling behavior decreased the preference of A. amurensis females for oviposition, which enabled P. rufiventris larvae to avoid competition with A. amurensis larvae.

Conclusions: In conclusion, the girdling behavior modulates plant physiology and morphology to provide a modulated food source for the larva and to hide it from the competitor. Our study implies that insect behavioral modulations can have multiple functions, providing insights into the adaptation of insect behavior in the context of plant-herbivore interactions.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12862-024-02228-z.

Background

Coevolutionary theories predict that plant defense and insect offense exert selective pressures that favor the diversification of each other [1]. Central to understanding the arms race between plants and insect herbivores have been phytochemistry and the molecular adaptation of insects to it [2, 3]. However, this chemical conflict occurs at the latest stage of herbivory, only after ingestion. Before ingestion, an interesting class of insect behavior is observed in many clades: behavioral modulation. Vein cutters cut the veins of the leaf, trenchers cut a line through the leaf, and girdlers chew around the petiole or stem [4]. These behavioral modulations are found in insect taxa as diverse as Lepidoptera [5], Coleoptera [6, 7], Hemiptera [8], and Orthoptera [9]. The convergent evolution of similar behavioral modulations in distant taxa indicates that the behavior is an adaptation.

One well-characterized function of behavioral modulation is the neutralization of plant defense, especially of canal-borne exudates that are blocked by vein cutting and trenching [6, 10]. However, although behavioral modulation is abundantly performed on plants without secretory canals, its functional consequences for the plant-herbivore interaction are rarely tested. In particular, only a few studies have proposed potential roles of the girdling behavior, such as exposing the vasculature to apply saliva [11] or accumulating nutrients in the girdled part [7, 8], and the downstream effects on insect performance are not known, raising questions about the functional consequences of girdling behavior.
Herbivorous insects depend largely on the quality of their host plants [12]; this dependency suggests that examining the effect of the girdling behavior on the metabolic and defensive traits of the plant is required to assess the functional consequences of the behavior. Insect attack has significant effects on the nutrient status of plants, and the effects of modified plant chemistry on insect performance are diverse [13]. Moreover, insect herbivores face plant defenses comprising unpalatable substances (e.g., secondary metabolites [14], proteinase inhibitors [15], and hardened cell walls [16]) as well as natural enemies of herbivores [17]. A substantial part of plant defense is activated by attack, primarily through jasmonic acid (JA) signaling [18], and is referred to as induced defense.

Different herbivores feeding on the same plant interact with each other via plant traits, including systemic responses [19] and altered plant appearance [10]. As behavioral modulation changes plant morphology [20] and palatability [21], it may alter the responses of other herbivores sharing the same host plant. Indeed, canal cutting promotes the feeding of other herbivores that originally do not feed on plants with secretory canals [21]. However, it is not known whether the effects of behavioral modulation on other herbivores are beneficial to the modulating insects.

To systematically assess the functional consequences of behavioral modulation by insect herbivores, we studied a girdling longhorn beetle, Phytoecia rufiventris Gautier (Lamiinae; Cerambycidae), and a non-girdling longhorn beetle, Agapanthia amurensis Kraatz (Lamiinae; Cerambycidae), which both oviposit on an introduced species, Erigeron annuus L. (Asteraceae), in South Korea. The P. rufiventris female girdles around the stem of Asteraceae host plants, including E. annuus, before oviposition (Supplementary Video S1), whereas the A. amurensis female lays an egg inside the stem without girdling. To determine the functional consequences of the girdling behavior using experimental girdling, we tested the following three hypotheses: (1) the girdling behavior of female P. rufiventris facilitates larval growth inside the girdled stem; (2) the girdling behavior impairs plant resistance and enhances nutrients in the girdled stem; and (3) the girdled stem is less apparent to a competing herbivore.

Results

Two longhorn beetles sharing Erigeron annuus as the host plant show distinct natural histories

We observed the natural history of two Lamiinae longhorn beetles, Phytoecia rufiventris and Agapanthia amurensis, by monitoring Erigeron annuus (Asteraceae), a shared host plant of the two beetles, at our field sites in Korea (Fig. 1, Supplementary Table S1). Phytoecia rufiventris laid eggs predominantly in E. annuus, although it also laid eggs in Artemisia princeps. Erigeron annuus bolted in April, and P. rufiventris adults emerged from mid-April to July, girdled, and laid eggs inside the stems of E. annuus plants. Every E. annuus stem with a feeding pattern in the lower part also had a feeding pattern in the upper part, but not vice versa; that is, the larvae first fed on the upper part of the girdled stem and then moved down beyond the girdles. In September, the larva cut the basal part of the stem and pupated inside the root-shoot junction. The adult overwintered inside the stem until the subsequent April.

In contrast to P. rufiventris, A. amurensis overwintered as a larva and pupated during April. Adult A. amurensis thus emerged from May to July and laid eggs at the lower part of the E. annuus stem without girdling.
The hatched larva first moved and fed toward the shoot apex, then moved down to the lower part of the stem and cut the middle of the dead stem. The larva stayed inside the stem until the subsequent April.

Girdling behavior of Phytoecia rufiventris cuts all vascular bundles and induces cell death in the upper part of the E. annuus stem

To examine the functional consequences of the girdling behavior of P. rufiventris, we first characterized the girdling behavior (Fig. 2a and b, Supplementary Video S1, approximately 6 min). The P. rufiventris female girdles two arcs (lower and upper girdles) about 1 cm apart. The female makes an oviposition cavity by chewing the epidermis and then lays an egg inside the stem. The upper part of the girdled E. annuus plant lost its turgor within an hour and showed a drooping morphology (Fig. 2c, Supplementary Video S2, approximately 9 min). The girdles divided the stem into an upper part (14.76 ± 4.07 cm; N = 73) and a lower part (50.72 ± 16.13 cm; N = 61). Although neither the lower nor the upper girdle was a full circle, together they cut every vascular bundle (Fig. 2b). Two semicircular artificial girdles mimicked the natural girdling without fracture of the upper part (Supplementary Fig. S1). The upper part of the girdled stem underwent cell death within 1 week post girdling (wpg), while the lower part of the girdled stem remained viable (Fig. 2d). The cell death in the upper part of the girdled stem became more severe at 3 wpg, as shown by stronger Trypan-blue staining of the upper part of the girdled stem.

Larval performance of P. rufiventris is enhanced by girdling behavior

In the field, we found that girdled E. annuus occasionally reconnected the damaged vascular bundles and restored turgor to the upper part. Phloroglucinol staining of longitudinal sections of experimentally girdled and recovered stems showed reconnection of the xylem in recovered E. annuus plants (Fig. 3a). In such recovered stems, larval survival of P. rufiventris was significantly lower than in successfully girdled E. annuus stems (Fig. 3a).

We then mimicked the girdling behavior and inoculated P. rufiventris eggs into E. annuus stems (Supplementary Fig. S1a). At three wpg, the larval mass was nearly five times higher in girdled plants than in non-girdled plants (Fig. 3b). As in the field monitoring, the experimentally girdled stems occasionally recovered (Supplementary Fig. S1b); the larvae in such recovered stems showed a mass similar to that of larvae in non-girdled plants (Fig. 3b).

The upper part of the girdled stem is modulated into a better food source

As in our field observations, the larva preferentially fed on the upper part during the first week after egg inoculation, whereas larvae in non-girdled stems showed no feeding preference (Fig. 4a). Moreover, larval survival and mass were not facilitated when the upper part of the stem was removed by decapitation (Fig. 4b and c). Thus, we suspected that the upper part of the girdled E. annuus stem is modulated into a better food source for the larva.

To understand the mechanistic background of the enhanced larval performance in girdled plants, we compared the phytohormonal responses of the upper and lower parts of experimentally girdled and non-girdled plants. First, we found that whereas the upper part of non-girdled plants showed increased JA levels upon larval attack, the upper part of girdled plants was not able to increase JA levels during the first week of larval feeding (Fig. 4d).
This impaired defense in the girdled E. annuus was present only in the upper part of the stem, while the lower parts of the stem showed increased JA levels regardless of experimental girdling, 3 weeks after egg inoculation (Fig. 4e). In contrast, decapitated stems showed increased JA levels upon larval attack (Supplementary Fig. S2), highlighting the importance of the upper part of the girdled stem in facilitating larval growth. While salicylic acid (SA) levels in the upper part of the stem were not affected by either experimental girdling or egg inoculation (Supplementary Fig. S3a), abscisic acid (ABA) levels were significantly increased by experimental girdling in the same region (Supplementary Fig. S3b). Egg inoculation had no significant effect on ABA levels in either girdled or non-girdled E. annuus plants.

Additionally, we measured the proteinase inhibitor activity of girdled and non-girdled stems to assess the palatability of the girdled tissues (Fig. 4f). The proteinase inhibitor activity was not induced by larval attack, but it was reduced by experimental girdling. The protease inhibitor activity was marginally restored in girdled E. annuus stems by larval attack, but the activity level was not significantly different from that of non-attacked, girdled E. annuus stems.

We then tested the nutritional value of the girdled stem by sampling the upper and lower parts of stems with and without experimental girdling 2 days after experimental girdling, when larval feeding starts. Primary metabolites were analyzed using gas chromatography-mass spectrometry (GC-MS) (Supplementary Table S2). Principal component analysis (PCA) of the 40 identified metabolites showed that the metabolomes of the upper parts were largely altered by experimental girdling, while the lower parts were affected only to a modest degree (Supplementary Fig. S4). We identified nine metabolites that accumulated significantly in the upper part of the experimentally girdled stems but not in the upper part of the non-girdled stems (Fig. 4g): palmitic acid, glycerol, valine, proline, sucrose, turanose, galactose, fructose, and arabinose. The accumulation of soluble sugars in the upper part of girdled stems was confirmed by an anthrone assay (Supplementary Fig. S5).

Subsequently, we tested whether the sugars and amino acids accumulated in the upper part of the stems are sufficient to facilitate larval growth. We reared P. rufiventris larvae on semi-artificial diets made of non-girdled stem powder supplemented with sugars and amino acids, corresponding to the previous measurements. The three most abundant sugars (fructose, sucrose, and galactose) and two amino acids (proline and valine) in the upper part of the girdled stems were selected. Unexpectedly, the larval relative growth rate was not significantly altered by sugar and/or amino acid supplementation, even with double the observed difference between the girdled and non-girdled upper parts of the stem (Fig. 4h, Supplementary Fig. S6).

Non-girdling Agapanthia amurensis does not benefit from girdling behavior

With our experimental evidence for the adaptive value of girdling behavior, we tested whether girdling behavior provides general benefits to a non-girdling longhorn beetle, A. amurensis. Unlike in P. rufiventris, experimental girdling did not significantly facilitate larval growth of A. amurensis, regardless of the presence of a P. rufiventris larva in the same stem (Fig. 5a).
The asymmetry in the effect of girdling on the two species led us to speculate about differential chemical adaptation of the two species to plant metabolites. To test this idea, we investigated the detoxification capacity of the two species at the transcriptomic level. We used de novo assembled gut transcriptomes of the two species to extract putative detoxification genes of five families: cytochrome P450s (CYP450s), UDP-glycosyltransferases (UGTs), glutathione-S-transferases (GSTs), carboxylesterases (COEs), and ABC transporters. Interestingly, the number of putative detoxification genes was higher in the assembled transcriptome of A. amurensis than in that of P. rufiventris for all five gene families investigated (Fig. 5b).

Phytoecia rufiventris avoids competition with Agapanthia amurensis via the decreased appearance of the girdled stem

We observed co-infestation of P. rufiventris and A. amurensis on the same E. annuus plants in the field monitoring. Notably, A. amurensis larvae mostly persisted in the stems of such plants, whereas P. rufiventris larvae were dead (Fig. 5c and d). To test whether competition is sufficient to decrease the larval survival of P. rufiventris, we conducted a co-infestation assay (Fig. 5c). Regardless of experimental girdling, in most stems where the two longhorn beetles were inoculated together, A. amurensis larvae prevailed (Fig. 5d). To test whether girdled E. annuus plants are hidden from the oviposition choice of A. amurensis, we measured the oviposition rate of A. amurensis on naturally girdled and non-girdled E. annuus plants at the field sites. The proportion of plants chosen for A. amurensis oviposition was higher among non-girdled E. annuus plants than among girdled plants (Fig. 5e). To confirm the decreased appearance of girdled E. annuus plants to A. amurensis, we conducted an oviposition choice assay of A. amurensis on girdled and non-girdled E. annuus plants. Female A. amurensis significantly preferred non-girdled E. annuus plants over girdled plants for oviposition (Fig. 5f).

Discussion

Natural history observations on behavioral modulations suggest various functions of these behaviors. As most behavioral modulations damage the vascular bundles [9], the subsequent effects have been predicted to be inhibition of defense signaling [20] and accumulation of nutrients [22, 23]. Moreover, the modified morphology of modulated plants is suspected to be utilized for evasion of natural enemies [20]. However, other than the disarming of defenses mediated by canal-borne exudates, experimental testing of the functional consequences of behavioral modulations at the molecular level has been scarce compared with the copious hypotheses.
Consistent with the cell death observed in the upper part of the girdled stem, JA induction and proteinase inhibitor activity were decreased in that region. Inhibition of JA signaling is strong evidence of inhibited induced defense against chewing herbivores [24]. Although the inhibition of defenses was restricted to the upper part of the stem, the girdling-dependent feeding preference for the upper part enabled the larvae to selectively consume the modulated food source during the early stages. The susceptible region thus provided could boost the growth of the vulnerable early larval stages, allowing them to face the induced defense of the lower part of the stem with a better capacity to tolerate plant defense. Inhibition of plant defense at the physiological level is achieved by various insect strategies, including symbionts [25] and effectors [26]. We propose that plant tissue freshly killed by the girdling behavior also exhibits inhibited defense.

In addition to deactivated defenses, accumulation of nutrients also occurred in the upper part of the girdled stem, as in the girdling of the alfalfa hopper and the twig girdler [7, 8]. However, nutrients, especially soluble sugars, can be negatively correlated with insect performance [13]. In our study, nutrient supplementation in an artificial diet alone had no significant effect on larval growth. Nevertheless, the joint effect of accumulated nutrients and impaired defense in the girdled E. annuus stem remains to be studied.

Interestingly, experimental girdling had no significant effect on the growth of A. amurensis larvae. In studies using generalists to test the effect of trenching, the behavioral modulation was a prerequisite for feeding but not always sufficient [27]. As A. amurensis is capable of feeding on E. annuus without girdling behavior, possible explanations are that A. amurensis may not induce a strong defense response in E. annuus or may cope well with the defenses. To test the latter possibility, we mined the transcriptomes of P. rufiventris and A. amurensis and compared the numbers of putative detoxification genes, as gene duplication events are thought to be favored by the pressure exerted by plant substances [28, 29]. We interrogated enzyme families that are generally, but not exclusively, associated with plant metabolite detoxification, because the specialized detoxification mechanisms of P. rufiventris and A. amurensis are not known. Insects detoxify toxic metabolites by cleavage (e.g., carboxylesterases, COEs [30]), oxidation (e.g., cytochrome P450s, CYP450s [31]), and the addition of moieties (e.g., glutathione-S-transferases, GSTs [32]; UDP-glycosyltransferases, UGTs [33]). Moreover, plant metabolites are rapidly transported by transporters such as ATP-binding cassette (ABC) transporters [34]. The higher number of putative detoxification genes in A. amurensis compared with P. rufiventris might be associated with its ability to digest plant metabolites.

Nevertheless, our current analyses do not include identification of the bioactive resistance compounds produced by E. annuus and deactivated by the girdling behavior, leaving an interesting topic for further study.
Erigeron annuus synthesizes a variety of secondary metabolites, including terpenoids, flavonoids, organic acid glycosides, and polyacetylenes [35-38]. The bioactivities of plant secondary metabolites are highly context-dependent; indeed, untargeted metabolomics coupled with bioassays is further required to elucidate the chemical interaction between E. annuus and P. rufiventris, along with the effect of behavioral modulation on plant secondary metabolism.

The drooping stem is a widespread phenotype in herbaceous plants that decreases their apparency to herbivores [20]. The girdling-induced drooping of E. annuus decreased its apparency to A. amurensis, a harmful competitor of P. rufiventris. Although co-infestation of multiple herbivores on a plant can be beneficial [39] or neutral [40] for the herbivores, the restricted unidirectional movement of larvae inside the stem causes unavoidable competition between P. rufiventris and A. amurensis. This indirect effect of girdling behavior, evading a competitor, is a novel function of behavioral modulation, which we propose is an exaptation of a counteradaptation to plant defense. Rodent herbivores also modulate the physical properties of plants to decrease their apparency, implying that disguise through plant modulation has general adaptive value across animals [41].

Conclusion

In conclusion, we demonstrated that the girdling behavior of P. rufiventris decreases host plant defense and increases nutrients. The girdling behavior also decreases the risk of harmful competition with A. amurensis via the lowered apparency of the girdled plants. Insect herbivores have developed a diverse array of offensive strategies to deal with plant defense [42]. We conclude that behavioral modulation of plants can have manipulating effects on host plant physiology and morphology, which are clearly adaptive for insect herbivores.
Methods

Plant growth and insect rearing

Adult Phytoecia rufiventris beetles were collected from our field sites (Supplementary Table S1) between April and July in 2019-2022. The adult beetles were maintained at 25-27 °C in transparent acrylic cages (40 × 40 × 40 cm) with Erigeron annuus plants under long-day (16 L:8D) conditions. Eggs were collected every three days and inoculated inside E. annuus stems following a previously described method [43], with experimental girdling. The larvae were maintained in E. annuus stems in the lab until the basal part of the stem was cut by the larvae; the larvae were then collected from the stems and moved to short-day conditions (12 L:12D) at 25 °C to induce pupation [44]. To break adult sexual diapause, the adults were placed at 0 °C for 16 weeks.

Adult A. amurensis were collected from the field sites (Supplementary Table S1) from May to July in 2019-2022 and reared under the same conditions as P. rufiventris. The collected eggs were inoculated inside E. annuus stems without experimental girdling and reared inside the stem until the stem was cut by the larvae; the larvae were then collected and placed at 0 °C for 16 weeks to induce pupation.

We collected E. annuus seeds from a single individual at the Daejeon_1 site (Supplementary Table S1) in July 2019 and dried them with silica gel for 3 weeks. Seeds were germinated in 9 cm × 9 cm × 9 cm pots and grown under long-day conditions (16 L:8D) at 26 °C.

Field monitoring

Field monitoring was conducted from 2019 to 2022 at field sites in Daejeon, Cheongju, and Jeungpyeong, Korea (Supplementary Table S1). The angles of the girdled scars were measured by visually classifying the angle of the girdle arc in 45° increments. The positions of the girdles were measured from the ground to the lower girdle and from the shoot apical meristem to the upper girdle. The survival of P. rufiventris larvae in successfully girdled and recovered E. annuus stems was investigated. The oviposition of A. amurensis on girdled and non-girdled E. annuus plants was measured by checking for the oviposition cavity. The larval survival of P. rufiventris in girdled E. annuus, with and without A. amurensis competition, was measured by randomly dissecting naturally girdled E. annuus plants.

Egg inoculation and experimental girdling

Experimental girdling was performed by cutting the vascular bundles of E. annuus stems along two lines 1 cm apart, each forming a semicircle. Decapitation was performed by cutting the stem at the corresponding height. Phytoecia rufiventris eggs were inoculated between the two girdles by digging a scar with a needle and gently inserting the egg with forceps, as described previously [43]. In decapitated E. annuus plants, the egg inoculation was performed right below the cut stem. Experimental girdling and egg inoculation were performed 8 weeks after germination of E. annuus. We sampled the upper part of the girdled stem at one wpg (week post girdling) and the lower part at three wpg, when the actual larval attack was occurring at each part. To enable sufficient growth of the larvae for measurement, larval growth was measured at three wpg.

Trypan blue staining

The E. annuus stems were cut longitudinally, stained in 0.4% Trypan blue (aq) for 5 min, and destained with 100% ethanol overnight.
Primary metabolites and total soluble sugar measurement
The upper and lower parts (each 15 cm) of the girdled stem were collected 2 days after experimental girdling. Samples were freeze-dried and ground with a mortar and pestle.

The metabolites in each sample were analyzed as described previously [45]. Briefly, 10 mg of each freeze-dried sample was extracted with 1 mL of methanol, and the solvent was evaporated under a nitrogen flow of 0.625 L/min for 5 min. To induce oximation, 30 µL of 20 mg/mL methoxyamine hydrochloride in pyridine was added, followed by 50 µL of BSTFA (N,O-bis(trimethylsilyl)trifluoroacetamide) with 1% TMCS (chlorotrimethylsilane); 10 µL of 300 µg/mL 2-chloronaphthalene in pyridine was added as the internal standard. The samples were incubated at 65 °C for 60 min. One microliter of the 1/100 diluted sample was injected into an Rtx-5MS column at an injection temperature of 250 °C. A GCMS-QP2020 system (Shimadzu, Kyoto, Japan) was used for the analysis.

The total amount of soluble sugar was measured following previously described methods [46-48] with modifications, using 5 mg of freeze-dried sample. Interfering pigments were removed with 100% acetone, and soluble sugar was extracted with 0.5 mL of 80% ethanol. 0.5 mL of each sugar extract was reacted with 0.5 mL of ice-cold anthrone (Daejung, Korea) in 72% sulfuric acid (2 mg/mL) at 100 °C for 11 min, and the absorbance at 630 nm was measured.

Artificial diet
The semi-artificial diet was prepared as a 3.33% agar solution with 5% freeze-dried E. annuus stem powder. The total amounts of supplemented sugars and amino acids were determined from the observed difference between the girdled and non-girdled upper parts of the stem, measured by GC-MS. In addition, diets containing double the amounts of supplements were prepared. The composition of supplemented nutrients in the diet was determined according to the GC-MS data (Fig. 4g): 37.97 mg/g galactose, 2.93 mg/g sucrose, 13.45 mg/g fructose, 0.11 mg/g valine, and 1.10 mg/g proline. Each larva was placed in an Eppendorf tube with a block of artificial diet. Every 3 days, the larval mass was measured and fresh diet was provided.

Phytohormone quantification
The upper and lower parts (each 15 cm) of the girdled stem were collected one wpg and three wpg, respectively, when larval feeding at each part was occurring. For the decapitated stems, only the lower parts were sampled. To exclude bias originating from different water contents, the upper-part samples were freeze-dried. Phytohormones were quantified according to a previously described method [49], with modifications [16]. One milliliter of ethyl acetate spiked with internal standards (20 ng each of d4-salicylic acid, d5-jasmonic acid, and d6-abscisic acid) was added to 10 mg of each sample. The solvent was evaporated using a centrifugal vacuum concentrator VC2124 (LaboGene, South Korea) at 30 °C; the residue was resuspended in 70% methanol and centrifuged at 13,000 rpm for 10 min, and 200 µL of the supernatant was collected.

Protease inhibitor assay
The upper part (15 cm) of the girdled stem was collected one wpg for protease inhibitor activity measurement. The procedure used here was modified from previous studies [50,51]. Protein extraction was performed by adding 180 µL of extraction buffer (50 mM phosphate, pH 7.2, 150 mM NaCl, and 2.0 mM EDTA) to 45 mg of each sample.
The protease activity of papain with the extracted protein was measured for each sample. The papain solution was prepared as 50 µL/mL papain in 25 mM sodium phosphate buffer (pH 7.0). 0.05 mL of papain solution, 0.1 mL of protein extract, and 0.1 mL of incubation buffer (0.25 M sodium phosphate buffer, pH 6.0, 2.5 mM EDTA, and 25 mM 2-mercaptoethanol) were mixed. After 5 min of incubation at 37 °C, the protease reaction was initiated by adding 0.1 mL of 1 mM Nα-benzoyl-DL-arginine β-naphthylamide hydrochloride (BANA) solution as the substrate. The reaction was terminated by adding 0.5 mL of 2% HCl in ethanol after 10 min of incubation at 37 °C. For color development, 0.5 mL of 0.06% p-dimethylaminocinnamaldehyde in ethanol was added. The resulting absorbance at 540 nm was measured as an indicator of protease activity and normalized by the fresh mass of the samples and the amount of total protein, which was measured using the BCA assay (Thermo Scientific kit 23,227, Waltham, MA, USA).
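For clarity, the normalization can be written out explicitly. A minimal Python sketch, assuming a papain-only control reaction and using placeholder absorbance and protein values (none of these numbers come from the assay above):

    # Minimal sketch of the normalized proteinase inhibition calculation.
    # Assumes a papain-only control reaction; absorbance and protein
    # values are hypothetical placeholders, not measured data.

    def percent_inhibition(a540_sample: float, a540_control: float) -> float:
        """Percent reduction of papain activity caused by the extract."""
        return (1.0 - a540_sample / a540_control) * 100.0

    def normalized_inhibition(a540_sample: float, a540_control: float,
                              total_protein_mg: float) -> float:
        """Inhibition normalized by total protein from the BCA assay."""
        return percent_inhibition(a540_sample, a540_control) / total_protein_mg

    print(normalized_inhibition(0.32, 0.85, total_protein_mg=0.45))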
Larval gut RNA extraction and RNA-sequencing
Larval gut RNA was extracted from P. rufiventris and A. amurensis. Three, one, and three biological replicates were sequenced for P. rufiventris reared in girdled stems, P. rufiventris reared in ungirdled stems, and A. amurensis larvae, respectively. The larvae were sterilized with 70% ethanol, rinsed with distilled water, and ice-cooled. The midguts were collected, rinsed with Ringer's solution, and frozen in liquid nitrogen for grinding with a pestle. RNA was extracted using the RNeasy Micro Kit (QIAGEN, Germany) and purified with the RNeasy MinElute Cleanup Kit (QIAGEN). Library preparation was conducted using the TruSeq Stranded mRNA LT Sample Prep Kit, and sequencing was performed on an Illumina paired-end 151 bp platform.

Gene search
The Pfam annotations in Trinotate were used to initially search for putative CYP450s, UGTs, COEs, GSTs, and ABC transporters in the de novo assembled transcriptomes of P. rufiventris and A. amurensis. A blastx search against SWISS-PROT was used to confirm the predicted genes.

Agapanthia amurensis choice assay
The oviposition preference of Agapanthia amurensis for girdled and non-girdled E. annuus plants was investigated. One week after experimental girdling, two girdled and two non-girdled E. annuus plants were placed

Fig. 1 Natural history of Erigeron annuus and two longhorn beetles, Phytoecia rufiventris and Agapanthia amurensis. Adult emergence and oviposition timing at the field observation sites are indicated as the adult. The larval feeding pattern is indicated as the larval position inside the stem.

Fig. 2 Observations on the girdling behavior of Phytoecia rufiventris. (a) Behavioral sequence of girdling and oviposition of P. rufiventris. (b) Girdle angles of lower and upper girdling (N = 100). Boxes indicate the 1st and 3rd quartiles. (c) Morphology of naturally girdled E. annuus. Two girdles are indicated with white triangles. (d) Trypan blue staining of the experimentally girdled and non-girdled stems of E. annuus at 1 and 3 wpg. Blue staining indicates cell death.

Fig. 3 Effect of girdling behavior on the larval performance of P. rufiventris. (a) Larval survival rates in successfully girdled (N = 47) and recovered (N = 42) E. annuus plants, measured in naturally girdled plants (* P < 0.05, Chi-squared test). The left images show the morphology of girdled and recovered E. annuus plants, and the right images show phloroglucinol staining of longitudinal sections of the girdled and recovered parts. (b) Larval mass measured from the experimentally girdled and non-girdled E. annuus plants. Boxes indicate the 1st and 3rd quartiles (significant differences are indicated by different letters; P < 0.05, One-way ANOVA followed by Tukey's HSD).

Fig. 5 Effect of girdling behavior on Agapanthia amurensis. (a) Larval mass of A. amurensis measured from the egg inoculation experiment. Boxes indicate the 1st and 3rd quartiles (n.s., no significant differences). (b) Number of putative detoxification genes identified from the de novo assembled transcriptomes of P. rufiventris and A. amurensis (CYP450, cytochrome P450; UGT, UDP-glucosyltransferase; GST, glutathione-S-transferase; ABC transporter, ATP-binding cassette transporter; COE, carboxylesterase). (c) Experimental scheme of the co-infestation assay. The scientific names of insects indicate egg inoculation. (d) Larval survival of P. rufiventris and A. amurensis larvae in co-infested E. annuus stems. Treatments for each group are indicated in Fig. 5c. (e-f) Oviposition preference of A. amurensis on girdled and non-girdled E. annuus in (e) field monitoring (***; P < 0.001, Chi-squared test) and (f) choice assay (**; P < 0.01, Fisher's exact test). The field-girdled E. annuus contained a P. rufiventris egg inside the stem, whereas the experimentally girdled E. annuus stems did not.
Refractive index sensing utilizing a CW photonic crystal nanolaser and its array configuration

We achieved a record high index sensitivity in a cw photonic crystal nanolaser, with a potential index resolution of < 10^-6. We also demonstrated spectrometer-free index sensing utilizing a nanolaser array.

Introduction
Optofluidics now lies at the forefront of synthetic/analytical chemistry and nanobiotechnology. Here, light is used for controlling and efficiently analyzing fluids, colloidal solutions, solids in a fluid, etc., in microscale devices such as labs-on-a-chip [1,2]. Sensors are among the fundamental elements of optofluidics. They are required to be compact, cheap, disposable, and highly sensitive. Recently, optical microcavities open to air have been studied extensively for sensing. The resonant wavelength simply shifts as the environmental refractive index n_env varies. Conversely, the index can be detected by measuring the wavelength shift via a spectrometer. Thus far, microspheres [3], microrings [4], microtoroids [5], gratings [6], and photonic crystals (PCs) [7,8] have been studied as microcavities for sensing. In particular, a nanocavity in a two-dimensional (2D) PC slab confines light into an ultrasmall volume of the order of the optical wavelength (~(λ/n)^3, where λ is the resonant wavelength in vacuum and n is the index of the slab). It provides not only a high spatial resolution but also a high index resolution due to its high Q (narrow spectral linewidth). Thus, it enables analysis of even ultrasmall aliquots of liquids (1 fl ~ 1 fg). It can detect molecular size when a protein monolayer is fixed adjacent to the nanocavity [9]. Label-free single-molecule detection has also been demonstrated [10]. These studies used passive cavities, which generally require a wideband light source, a polarization controller, and a high-resolution spectrometer, as well as high-precision optical input/output (I/O). In order to simplify the measurement system and reduce the cost, a sensor utilizing point-defect nanolasers on an active PC slab has been reported [11-13]. Its principle is the same as that for passive devices: the lasing wavelength shifts with the index. But active devices are advantageous because they are remotely operated by photopumping, and sensing is performed by detecting the laser light via a simple optical setup. Thus, we can omit the external light source and high-precision optical I/O needed for passive devices. A critical problem with active devices is that continuous-wave (cw) operation is not obtained, but rather only pulsed operation. When a PC nanolaser is operated by pulsed photopumping, spectral broadening often occurs due to thermal chirping [14]. It broadens the spectral linewidth to > 10 nm in the worst case. Thus, even with a high index sensitivity Δλ/Δn_env of 250 nm/RIU [12], the index resolution is limited to the order of 10^-3 by the spectral linewidth. This is far inferior to typical values in passive devices (10^-5 to 10^-6). In this paper, we demonstrate higher-resolution sensing using a cw PC nanolaser, in which the thermal chirping is suppressed and a much narrower linewidth is obtained. We have already achieved room-temperature cw operation in a point-shift PC nanolaser [15]. We call this an H0 nanolaser, as it has no missing airholes but consists of only a shift of two airholes in the PC slab.
We show the stable, narrow spectral characteristics of the nanolaser in liquids and a record high sensitivity of 350 nm/RIU, resulting in an index resolution of 9.0×10^-5 (potentially < 10^-6). In addition, we propose a spectrometer-free index sensor based on a nanolaser array. Here, many nanolasers whose lasing wavelengths are slightly different from each other are operated simultaneously, and their near-field pattern (NFP) is observed through a step-like or delta-function-like bandpass filter (BPF). The wavelengths shift and the NFP changes with n_env. In this paper, we present the first demonstration of such an operation.

Sensing characteristics of cw PC nanolasers
The H0 nanocavity consists of a lateral shift s_x of two lattice points in a triangular-lattice PC slab whose design parameters are the lattice constant a and the airhole diameter 2r. This nanocavity supports the monopole mode and the dipole mode, having one and two primary antinodes, respectively, in the magnetic field component H_z of the modal standing wave. In particular, the monopole mode exhibits an ultrasmall modal volume V_m of less than 0.15 (λ/n)^3 and a Q factor higher than 10^5, giving rise to high-performance lasing. The dipole mode exhibits a slightly larger V_m of 0.21 (λ/n)^3 and a lower Q of 10^3-10^4. Still, lasing is obtained, because this Q is not necessarily dominant in the total Q, which is affected by parasitic losses such as free-carrier absorption and light scattering by disorder in the fabricated devices. We experimentally identified these modes [16], and observed room-temperature cw lasing with an effective threshold power P_th of 1.2 μW in the monopole mode [15]. In the present study, we also introduce a shift s_y of the other two lattice points located adjacent to those for s_x. The resonant wavelength and Q can be flexibly controlled by varying s_x and s_y. This is effective for the optimization of the nanolaser array, which will be discussed in the next section.

In this experiment, we first optimized the wafer structure. We prepared the following two epiwafers: a GaInAsP single-quantum-well (SQW) wafer with a total active-layer thickness (including separate-confinement-heterostructure layers) d of 180 nm, and a GaInAsP multi-quantum-well (MQW) wafer with five wells and d = 240 nm. The photoluminescence peak of both wafers was centered at 1.55 μm. Assuming an air-membrane structure, we calculated Δn_mode/Δn_env for these wafers, where n_mode is the equivalent modal index. The SQW wafer was found to give a 1.8-fold higher value than the MQW wafer. Because of the thinner d, the evanescent field of the mode penetrated more deeply into the environment. Thus, we decided to fabricate devices on the SQW wafer. In the device process, a PC slab was formed by e-beam lithography, HI inductively coupled plasma etching, and HCl selective wet etching of the InP claddings. In the measurement, the H0 nanolaser was photopumped by cw laser light at λ = 0.98 μm, which was focused to a 2.5-μm spot on the top surface of the device through a ×50 objective lens. The light output from the device was coupled to a multi-mode fiber by the same objective lens, and its emission spectrum was analyzed using an optical spectrum analyzer (OSA). In this experiment, we deposited a chemically stable refractive index liquid (B-0700/0701, n_env = 1.296-1.451) on the device.
Figure 1(a) shows an example of the lasing spectrum of the H0 nanolaser obtained at room temperature (293 K) with n_env = 1.306 and an irradiated power of 130 μW. The inset shows a scanning electron micrograph (SEM) of the measured device with a = 520 nm, 2r = 300 nm (2r/a = 0.58), s_x = 120 nm (s_x/a = 0.23), and s_y = 60 nm (s_y/a = 0.12). A three-dimensional finite-difference time-domain (FDTD) calculation showed that the Q factor of the monopole mode for this design is higher than 4000, even in the liquid with the highest n_env. This is sufficient for lasing, and indeed we clearly observed a cw lasing spectrum. The laser peak exhibited a 50-dB intensity over the background level and a spectral linewidth of < 26 pm, the resolution limit of the OSA used. The peak intensity was higher than in air [15]. We attribute this to heat sinking by immersing the device in the liquid. Figure 1(b) shows the spectral redshift with n_env. This indicates that we can use this shift for index sensing. The average index sensitivity in this measurement was 290 nm/RIU. On the basis of the spectral linewidth of < 26 pm, we can evaluate an index resolution of 9.0×10^-5. Note that cw microlasers, e.g., vertical-cavity surface-emitting lasers, usually exhibit a narrow frequency linewidth of less than 10 MHz (a wavelength linewidth of less than 0.1 pm) [17]. Thus, a potential resolution of < 10^-6 is expected.

Let us discuss the sensing characteristics in more detail. Figure 2 shows the normalized frequency a/λ as a function of n_env. The solid line was calculated for a = 500 nm, 2r = 300 nm (2r/a = 0.60), d = 160 nm, s_x = 80 nm (s_x/a = 0.16), and s_y = 60 nm (s_y/a = 0.12) using the FDTD method. We assumed a uniform slab index of 3.4, neglecting the complicated layer structure of the actual epiwafer. In general, the dipole mode has a higher index sensitivity than the monopole mode because of the different penetration depths of the evanescent field into the environment. Circular plots show the experimental values for various normalized lattice shifts s_x/a. Corresponding well to the calculation, a/λ shifts almost linearly with n_env. The lasing wavelength could also be controlled by changing s_x/a. Note that sensing using the nanolaser array is difficult if the sensitivity changes with s_x and s_y. Figure 3 shows the calculated and measured index sensitivity Δλ/Δn_env as a function of s_x and s_y. The sensitivity of the monopole mode is less dependent on s_x and s_y and ranges from 250 to 300 nm/RIU. No cw operation was obtained in the dipole mode for s_y/a < 0.12, which might be due to the low Q. At s_x/a = 0.12 and s_y/a = 0.12, a sensitivity of 350 nm/RIU was obtained for the dipole mode, and the estimated index resolution was 9.0×10^-5. This sensitivity is 1.4 times higher than the previously reported value [12]. This suggests that the field penetration of the dipole mode in air is particularly large. At s_x/a > 0.12 and s_y/a = 0.12, the sensitivity decreases with s_x/a and becomes almost comparable to that of the monopole mode. Although the high sensitivity of the dipole mode is attractive, the monopole mode is more advantageous for controlling the lasing wavelength while keeping the sensitivity stable. In addition, the Q of the monopole mode is higher than that of the dipole mode for any parameter change, and dipole-mode lasing is always accompanied by monopole-mode lasing; in other words, single-mode lasing in the monopole mode is obtained by setting s_y/a < 0.12. Thus, it is wise to use the monopole mode for stable sensing.
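The resolution values quoted above follow from the simple ratio Δn_min = δλ/S, with δλ the resolvable wavelength shift (here, the spectral linewidth) and S the index sensitivity. A short Python sketch reproducing the numbers:

    # Index resolution from linewidth and sensitivity: dn_min = dlambda / S.

    def index_resolution(linewidth_nm: float, sensitivity_nm_per_riu: float) -> float:
        return linewidth_nm / sensitivity_nm_per_riu

    # OSA-limited linewidth (26 pm) with the average sensitivity (290 nm/RIU):
    print(index_resolution(0.026, 290.0))    # ~9.0e-5 RIU

    # Typical cw microlaser linewidth (~0.1 pm) with 350 nm/RIU:
    print(index_resolution(0.0001, 350.0))   # ~2.9e-7 RIU, i.e. < 1e-6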
Spectrometer-free sensing using a nanolaser array
Considering the field penetration from the nanocavity into the PC area, we roughly estimated that the H0 nanolaser occupies an effective area of 10×10 μm². This means that we can fabricate one thousand nanolasers in a 340×340 μm area. If we fabricate a nanolaser array in which each pair of lasing wavelengths is separated by 20 pm and the NFP can be sharply discriminated by a BPF, it becomes a spectrometer-free sensor with an index resolution of order 10^-4. It could be challenging to control such a narrow wavelength separation uniformly. However, even if the separation is somewhat inhomogeneous, similar sensing is obtained by matching the observed NFP to a reference pattern taken in advance for each device.

To demonstrate the concept, we fabricated an array of four nanolasers with a = 500 nm, 2r = 280 nm (2r/a = 0.56), s_x = 60-90 nm (s_x/a = 0.12-0.18), and s_y = 60 nm (s_y/a = 0.12), as shown in Fig. 4. Let us denote their lasing wavelengths by λ1, λ2, λ3, and λ4. The cavity spacing was set to 24a (= 12 μm), because we experimentally confirmed that this spacing was sufficiently large to suppress mode coupling between cavities. A smaller spacing might be possible by optimizing the cavity structure to reduce the field penetration and/or by optimizing the cavity arrangement, taking into account the direction of the field penetration. In the measurement, we focused laser light of λ = 0.98 μm to a 25-μm-diameter spot and simultaneously photopumped all the devices. We observed the NFP using an InGaAs image sensor through a BPF consisting of a multilayer dielectric stack (transmittance < 10%, 50%, and > 90% at λ = 1.534, 1.544, and 1.552 μm, respectively). Figure 5 shows the lasing spectra and the corresponding NFPs for different n_env. We confirmed by pumping each device individually that the lasing wavelengths were ordered as designed. The index sensitivity was ~300 nm/RIU and almost constant for all the lasers. The wavelength separation was inhomogeneous, in the 2-10 nm range, and partial multimode lasing was observed. The multimode lasing was caused by the s_y employed in this preliminary experiment, which was chosen to ensure a high Q of the monopole mode for all the devices. If the fabrication process is improved so that uniform lasing in the monopole mode is obtained more easily for all the devices, even in the liquids, a smaller s_y is more desirable for the complete suppression of the dipole mode. Nevertheless, we clearly observed the target operation in the NFPs: the number of laser spots did not change without the BPF, but changed with the BPF owing to the wavelength shift.
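The read-out principle can be sketched as follows: each laser line redshifts by S·Δn, and a step-like BPF converts this common shift into a change in the number of bright spots in the NFP. The cut-on and base wavelengths in this Python sketch are illustrative assumptions, not the fabricated values:

    # Sketch of spectrometer-free read-out with a nanolaser array and a
    # step-like BPF. All wavelengths below are illustrative assumptions.

    SENSITIVITY_M_PER_RIU = 300e-9   # ~300 nm/RIU, as measured above
    BPF_CUTON_M = 1.5440e-6          # assumed 50%-transmittance edge

    # Assumed base lasing wavelengths lambda_1..lambda_4 (m):
    BASE_WAVELENGTHS = (1.5420e-6, 1.5428e-6, 1.5436e-6, 1.5444e-6)

    def bright_spots(delta_n: float) -> int:
        """Number of lasers transmitted through the BPF for an index change."""
        return sum(lam + SENSITIVITY_M_PER_RIU * delta_n > BPF_CUTON_M
                   for lam in BASE_WAVELENGTHS)

    for dn in (0.0, 1e-3, 3e-3, 6e-3):
        print(f"dn = {dn:.0e}: {bright_spots(dn)} spots")

    # With a uniform 20 pm separation, one spot change corresponds to
    # dn = 20e-12 / 300e-9 ~ 6.7e-5, i.e. a 1e-4-order index resolution.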
Conclusions
We obtained clear cw operation in an H0 PC nanolaser immersed in media whose refractive index ranged from 1.00 to 1.37. The maximum intensity of the laser mode was 50 dB above the background level, and the spectral linewidth was < 26 pm, the resolution limit of this measurement. We observed a wavelength shift with the environmental index. The maximum sensitivity was 350 nm/RIU, the highest value recorded for nanocavity-based index sensing. Thus, we confirmed an index resolution of 9.0×10^-5. We expect a resolution of < 10^-6 to be attainable, given that the linewidth in microlasers is usually on the order of 0.1 pm; this will be checked in future studies using heterodyne detection. We also demonstrated index sensing using a nanolaser array consisting of devices with different wavelengths. This allows spectrometer-free sensing and potentially a 10^-4-order resolution. As these sensors do not require a wideband light source, high-precision optical I/O, or a spectrometer, they will be useful for low-cost, disposable biochemical sensing and label-free single-molecule sensing. They are also expected to be integrated as functional elements in lab-on-a-chip technology.
Sophisticated viral quasispecies with a genotype-related pattern of mutations in the hepatitis B X gene of HBeAg-ve chronically infected patients

Patients with HBeAg-negative chronic infection (CI) have not been extensively studied because of their low viremia. The HBx protein, encoded by HBX, has a key role in viral replication. Here, we analyzed the viral quasispecies at the 5′ end of HBX in CI patients and compared it with that of patients in other clinical stages. Fifty-seven HBeAg-negative patients were included: 16 CI, 19 chronic hepatitis B, 16 hepatocellular carcinoma, and 6 liver cirrhosis. Quasispecies complexity and conservation were determined in the region between nucleotides 1255 and 1611. The amino acid changes detected were tested in vitro. CI patients showed higher complexity in terms of mutation frequency and nucleotide diversity, and higher quasispecies conservation (p < 0.05). A genotype D-specific pattern of mutations (A12S/P33S/P46S/T36D-G) was identified in CI (median frequency, 81.7%), which determined a reduction in HBV DNA release of up to 1.5 log in vitro. CI patients showed a more complex and conserved viral quasispecies than the other groups. The genotype-specific pattern of mutations could partially explain the low viremia observed in these patients.

Results
Chronic infection: an inactive state of infection with higher QS functional complexity. The region of interest was amplified by a previously reported 3-round PCR protocol [16]. The external PCR did not affect the composition of the HBV quasispecies, as confirmed by comparing the QS complexity indices obtained by analyzing serial dilutions of two samples amplified with 3- or 2-round PCRs (Supplementary Fig. S1, p > 0.05). Only patients with ≥ 5,000 reads (sequencing data) were included in the analysis, resulting in 52/57 patients in total: 15/16 CI, 17/19 CHB, 5/6 LC, and 15/16 HCC. Of note, in 7 of the 15 HCC patients the tumor had developed in the presence of cirrhosis, and in 3 patients this information was not available. Demographic, virologic, and serologic characteristics are reported in Table 1. After applying quality filters, 1,310,514 sequences were obtained, yielding a median of 20,821 sequences per patient (Q1, 16,848). NGS data were submitted to the GenBank SRA database (BioProject accession number PRJNA437055; BioSample accession numbers in Supplementary Table S1).

In the comparison of viral quasispecies complexity, no differences were found between CI patients and the other groups for Shannon entropy (Sn) (median 2.26, 2.03, 1.72, and 2.94 for CI, CHB, HCC, and LC, respectively) (Table 2). Similar findings were observed for the Gini-Simpson index (G), with a median of 0.81 for CI, 0.77 for CHB, 0.73 for HCC, and 0.89 for LC (Table 2).

Table 1. Main viral and serologic characteristics of the clinical groups enrolled in the study. CHB, chronic hepatitis B; LC, liver cirrhosis; HCC, hepatocellular carcinoma; CI, chronic infection; Q1, 25th percentile; Q3, 75th percentile; HBV, hepatitis B virus; ALT, alanine aminotransferase. p values (p) were obtained by applying the Kruskal-Wallis test for median age, HBV DNA, and ALT, and ANOVA for genotype distribution. Statistically relevant p values are reported in bold. *p > 0.05 when Tukey multiple comparison was implemented. (a) Genotype was evaluated by NGS sequencing, as described [16].

In contrast, the functional complexity indices differed: CI patients showed a higher mutation frequency (Mf) than the CHB and HCC groups (Fig. 1a). LC showed the highest mutation frequency (median 0.030), but as few LC patients were included in this study, these results should be confirmed in a larger cohort.
Consistent findings were recorded for the nucleotide diversity index (π), for which CI showed a value up to fivefold higher than the CHB or HCC groups (median 0.027, 0.007, and 0.005 for CI, CHB, and HCC, respectively; p = 0.005) (Table 2; Fig. 1b). Again, LC showed the highest nucleotide diversity value (median = 0.04).

Similar quasispecies conservation between patient groups: information content study. By calculating the information content and applying sliding window analysis, we confirmed the presence of highly conserved nt and aa regions previously reported by our group [16] (Supplementary Figs. S2 and S3). Conservation at both the nt and aa level was similar, but not identical, in all groups (Fig. 2a, c). By comparing the information content standard deviation of each group relative to the overall mean, we observed that CI and LC sequences were the most highly conserved, mainly in the non-coding portion encompassing nt 1300-1375 (p = 0.005 between CI and CHB information content deviation) (Fig. 2b). The aa comparison yielded similar results: CI showed higher conservation than CHB, especially in the region between aa 20 and 50 (p = 0.019) (Fig. 2d).

Genotype-specific pattern of mutations. To determine whether the presence of certain aa mutations might enable differentiation between the clinical groups, all haplotypes were aligned against each genotype-specific consensus; some genotype-specific changes were observed (Supplementary Fig. S4). Inspection of the overlapping polymerase ORF showed some mutations relative to the genotype consensus, but there were no differences between the clinical groups. None of the mutations in haplotypes from genotypes A, C, E, F, or H was differentially represented between the four clinical stages. However, in the case of genotype D, the A12S, P33S, P46S, and T36A/D/G mutations were more highly represented in one specific group, the CI patients. These mutations correlated with each other (rho ≥ 0.68, p ≤ 0.00001) (Fig. 3a), thereby forming a pattern of mutations (Fig. 3b). This pattern was found in CI patients at a rate around 12-fold higher than in the CHB and HCC groups (Fig. 4). A similarly high rate was observed in LC (median [Q1; Q3] of 80.5 [57.5; 85.5]); however, due to the limited number of patients in this group, the difference was not statistically significant. In the analysis of all patients, no correlation was found between viral load and the frequency of the mutation pattern, whereas a weak correlation was observed when only CHB and CI patients were analyzed (p = 0.02 and rho = −0.4). The presence of the mutation pattern did not markedly modify the three-dimensional structure of the HBx protein relative to the wild type (wt) (Supplementary Fig. S5). However, analysis of the effects of the mutations on HBx stability (ΔΔg) between the wt protein and both patterns showed a reduction in protein stability (ΔΔg < 0). Of note, the A12S/P33S/P46S/T36D pattern presented a ΔΔg lower than that of the pattern with T36G (ΔΔg of −1.4 and −1.85 for T36D and T36G, respectively), indicating greater instability.

Lower in vitro HBV expression in the presence of the mutation pattern. To investigate the effects of the HBx aa changes on HBV expression, the mutations were tested in vitro following the order of the hierarchical clustering. All mutations were found to reduce viral particle release in cell supernatants at 5 days post-transfection relative to the wt virus (p = 0.00012).
This trend was also observed at the protein level, where the quadruple mutant determined a reduction in HBcrAg release of 1.5 logU/mL relative to wt (Supplementary Fig. S6).

Discussion
Because of the low viral load in chronically infected HBeAg-ve patients, it has been difficult to analyze the HBV genome in this population. Therefore, little is known about the virological basis of their specific clinical characteristics. To investigate the viral QS in HBeAg-ve chronic infection, samples from 16 CI patients were analyzed by NGS, and the results obtained were compared with those of chronic hepatitis patients in different clinical stages.

The QS was more complex in CI than in chronic hepatitis patients at the functional level. In agreement with these findings, high complexity and spontaneous reverse transcriptase mutations (in the A-B interdomain, overlapping with the HBsAg a determinant) have been reported in the viral QS related to the immune-tolerant and immune-active states [18]. Furthermore, an increase in haplotype number in a region including the 5′ end of HBX was detected in an HBeAg-positive woman (previously defined as immune-tolerant) who seroconverted to HBeAg-negative (CI state) [19]. Our results suggest that the accumulation of nt mutations in the QS of CI patients could influence HBV replication, thus providing a possible explanation for the low viral replication rate in this population. Clinical groups such as CHB and HCC patients are generally found to harbor highly replicative and/or carcinogenic variants, but curiously, the QS composition was less complex in these patients, suggesting that certain variants had been selected to guarantee HBV replication and persistence. We found the most complex QS in the LC group, but as very few patients with cirrhosis are attended in our outpatient clinic, only a small number were included in the study, which could make these results less reliable.

The portion of the HBX gene examined here comprises the 5′ end and the upstream non-coding region, where some hyper-conserved regions have been reported by our group. The current analysis of QS conservation in different, well-characterized clinical groups confirms these previous results and supports the idea that these regions could be useful targets for gene therapies, as they would be effective regardless of the clinical and virological conditions. In light of the higher functional QS complexity observed in CI patients, and taking into account that the nt and aa conservation findings were similar but not identical between the different clinical groups, we examined the intergroup conservation variability. Surprisingly, sequences from CI patients were the most conserved at both the nt and aa level, particularly relative to the CHB group. The difference between CI and CHB conservation was greatest in the upstream non-coding region of HBX (nt 1250-1350, where several HBX transcript initiation sites have been reported [20]) and in the protein dimerization site (aa 20-50) [21]. The lower conservation of CHB haplotypes may suggest that nt and aa variability help HBV re-adapt to the external environment and guarantee replication. Conversely, in the CI QS, the higher conservation may indicate selection of more highly conserved and probably less replicative haplotypes, which could presumably promote viral persistence. These results seem to indicate a more complicated HBV QS in CI patients.
The high functional complexity would be due to the presence of a large number of mutated haplotypes at low frequency, which do not, however, affect the nt and aa conservation of the main population. These results should be confirmed in a larger sample that includes study of the intracellular viral quasispecies, mainly in patients with HCC, where compartmentalization of the viral variants between tumor and adjacent non-tumor tissue has been observed [22].

To explain the limited viral replication in CI patients, we investigated the presence of aa changes. Determination of aa mutations in these patients could help to better classify those falling into the "grey zone" (viremia 2,000-20,000 IU/mL and/or marginally elevated ALT), whose management is still difficult due to the lack of factors that distinguish this intermediate state from chronic HBeAg-ve infection or hepatitis [23]. HBX and its encoded protein have a key role in HBV replication and disease progression, and may be determinant for the low replication activity seen in CI. Recently, a higher HBX mutation rate was reported in CI patients than in "active" chronic hepatitis patients [24]. Nonetheless, that study investigated only dominant mutations determined by Sanger sequencing (at a minimum frequency of 15-20%) [25]. Deletions in the HBX 3′-terminal end were detected by NGS in the above-mentioned woman who experienced HBeAg seroconversion [19]. In the present study, the presence of mutations was analyzed by aligning haplotype sequences obtained by NGS with their corresponding genotype consensus. As genotyping was performed on haplotype sequences, we were able to detect subtle mixtures of genotypes usually not identified by Sanger sequencing, in keeping with another NGS study in which mixed genotypes were reported in the HBV X and precore regions [26]. Notably, we observed a genotype-specific viral evolution. The HBV QS may evolve differently depending on the genotype and thereby differentially influence disease progression and therapy outcome [27,28]. Specific mutations in the HBsAg C-terminal domain of genotype D HBV have been associated with viremia < 2,000 IU/mL [29], and certain HBx mutations highly associated with HCC have been reported specifically in genotypes C and D [30,31]. Here, we detected a pattern of aa changes (A12S/P33S/P46S/T36G-D) that was highly represented in the low-replicative groups, mainly in genotype D CI haplotypes. These observations illustrate the importance of using NGS to accurately identify viral genotypes, useful information for disease follow-up.

The frequency of the mutation pattern showed a weak inverse correlation with HBV viremia in CI and CHB patients, suggesting a relationship between viral expression and the mutations. This correlation was lost when HCC and LC patients were included in the analysis, suggesting that other mechanisms (e.g., mutations in other genes) may influence HBV expression in these last two groups. The mutation pattern identified partially involved the HBx Ser/Pro-rich dimerization site [21], and was mainly characterized by replacement of a hydrophobic aa (alanine or proline) by a polar aa (serine). This could be relevant, as some highly conserved polar aa (e.g., Ser25 and Ser41) within the Ser/Pro-rich domain are targets of post-translational changes [32].
The potential addition of new phosphorylation and O-β-glycosylation sites could interfere with the three-dimensional structure of the HBx protein and thereby limit its trans-activating activity. In vitro investigation of HBV expression in the presence of the A12S/P33S/P46S/T36G pattern showed an approximately 1 log reduction in HBV expression. More marked inhibition was detected for A12S/P33S/P46S/T36D. Analysis of protein stability showed that HBx was less stable in the presence of both patterns, more evidently so for the T36D pattern, indicating that the specific threonine-to-aspartate change may further affect HBx stability and, consequently, its trans-activating activity.

The results of this study provide insight into the composition of the HBV QS in CI patients. However, further work is required to characterize the mechanisms responsible for the complex viral population observed and to determine the role of highly mutated haplotypes in viral replication. A genotype-specific pattern of mutations that reduced viral replication was detected in the CI QS. Application of ANOVA plus the Tukey test to the genotype distribution among the haplotypes showed no difference in the prevalence of genotype D between the clinical groups. Nonetheless, genotype D was highly predominant in CI patients compared with the others, and this may have affected the prevalence of the mutation pattern in this group. Studies in larger samples with various genotypes are needed to confirm the association between the mutations and CI status and to detect other genotype-specific aa changes that may enable differentiation between HBV clinical groups. Additional in vitro and in silico studies are required to further understand whether and how these mutations interfere with the activity and three-dimensional structure of HBx.

In summary, the HBV viral population at the HBX 5′ end was investigated by NGS analysis in a group of HBeAg-ve chronically infected patients. The sophisticated (both conserved and complex) QS characterized here may explain the limited viral replication in this patient population. The presence of aa mutations specific to a certain genotype underscores the need to accurately genotype HBV during follow-up. The pattern of mutations observed in this study could help to better classify chronically infected HBeAg-ve patients and the state of low viral replication.

Material and methods
Patients and samples. All experiments and methods were performed in accordance with relevant guidelines and regulations. The study was approved by the Ethics Committee of Vall d'Hebron Research Institute (PR(AG)411/2016 and PR(AG)146/2020). Patients were enrolled from the population attending the outpatient clinics of Vall d'Hebron Hospital (Barcelona, Spain). All patients were informed about the aims of the project and signed an informed consent form. Only samples with a viral load > 100 IU/mL were included. Patients were stratified according to the European guidelines. Briefly, chronically infected (CI) patients presented viremia ≤ 2,000 IU/mL and normal ALT (≤ 40 IU/mL). In cases of viremia < 20,000 IU/mL with normal ALT (grey zone), patients were followed up, and only those with persistently low viremia and no signs of liver damage were classified as CI patients. Chronic hepatitis (CHB) patients presented viremia > 2,000 IU/mL with ALT > 40 IU/mL. A plasma sample was collected from 16 CI, 19 CHB, 16 patients with signs of hepatocellular carcinoma (HCC), and 6 with liver cirrhosis (LC).
All patients were HBV-monoinfected and HBeAg-ve. Demographic, virologic, and serologic characteristics are reported in Table 2. HBsAg and HBeAg were tested using commercial enzyme immunoassays (COBAS 8000 analyzer, Roche Diagnostics). HBV DNA was quantified by real-time PCR (COBAS 6800, Roche Diagnostics) with a detection limit of 10 IU/mL.

HBX amplification and sequencing. The region of interest encompassed nt 1255-1611, which includes the 5′ end of the HBX coding region (nt 1374-1611, corresponding to aa 1-76 of the encoded protein) and the upstream non-coding region. HBV DNA was extracted from 500 µL of plasma with the QIAamp UltraSens Virus Kit (QIAGEN) according to the manufacturer's instructions. The region under study was amplified using a 3-round nested PCR protocol that enabled amplification of samples with viremia > 100 IU/mL. Briefly, the first-round PCR was performed using external primers (forward 5′-TGTATTCCCATCCCATCATC at nt position 599, and reverse 5′-AGWAGCTCCAAATTCTTTATAAGG at nt position 1936) with the following protocol: 95 °C for 5 min, followed by 35 cycles of 95 °C for 20 s, 53 °C for 20 s, and 72 °C for 15 s, and finally 72 °C for 3 min. The volume of extracted DNA added to the amplification mix (5-10 μL) differed depending on the initial viral load of the sample, and the amount of water was adjusted proportionally to reach the same final total volume. The second-round PCR (using primers carrying the M13 universal adaptor) and third-round PCR (using primers including a unique multiplex identifier sequence [MID] for each sample/patient) were performed as described [16]. To ensure that the 3-round PCR did not change the viral populations, a control run with a 2-round PCR (excluding the external PCR) was performed on 2 CHB samples serially diluted to 10^3 IU/mL. All PCR steps were performed using high-fidelity Pfu Ultra II DNA polymerase (Stratagene, Agilent Technologies). PCR products were purified using the QIAquick Gel Extraction Kit (QIAGEN) following the manufacturer's instructions, and DNA quality was evaluated using the Agilent 2200 TapeStation (Agilent Technologies). Purified PCR products were quantified using the Quant-iT PicoGreen dsDNA Assay Kit (Life Technologies) to equilibrate the representation of each sample in the pool, and then sequenced by NGS on the Illumina MiSeq platform (Illumina Inc., San Diego, CA, USA). The sequences obtained were bioinformatically filtered as previously described [33], which resulted in unique sequences covering the full amplicon (haplotypes) that form the viral QS. Haplotypes included in the subsequent analyses had common reverse and forward sequences, and an abundance of ≥ 0.1% for the QS complexity analysis and ≥ 0.25% for the study of QS conservation. The threshold for haplotype filtering was empirically selected based on results obtained by simultaneously sequencing known clones, as previously reported by our group [34,35]. Each haplotype was genotyped by a distance-based method, as previously reported [16,26] (Supplementary Table S2). The method used to analyze the reference sequences is reported in a supplementary file (Supplementary methods; Supplementary Table S3). As observed by UPGMA (unweighted pair group method with arithmetic mean), this amplicon sufficed to differentiate the viral genotypes (Supplementary Figure S7), but it could not distinguish genotype subtypes because of its limited length.
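As a point of reference, the coordinates above translate directly into sequence slices. A minimal Python sketch, with a placeholder string standing in for a GenBank-derived full-length genome:

    # Extract the analyzed region (nt 1255-1611) and the 5' end of the
    # HBX coding region (nt 1374-1611, aa 1-76) from a full-length HBV
    # genome. Coordinates are 1-based and inclusive; `genome` is a
    # placeholder, not a real sequence.

    def region(seq: str, start: int, end: int) -> str:
        return seq[start - 1:end]

    genome = "N" * 3182  # placeholder for a genotype D genome

    amplicon = region(genome, 1255, 1611)   # 357-nt amplicon analyzed by NGS
    hbx_5_end = region(genome, 1374, 1611)  # covers aa 1-76 of HBx

    print(len(amplicon), len(hbx_5_end))    # 357 238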
Quasispecies complexity. QS complexity was determined in patients with ≥ 10,000 reads (15/16 CI, 16/19 CHB, 14/16 HCC, and 4/6 LC) by applying four parameters: Sn, G, Mf, and π [36]. Sn and G are abundance indices that measure haplotype diversity based on the number of haplotypes and their relative frequencies. The functionality indices include Mf, which measures the genetic diversity of the viral population with respect to the most prevalent haplotype, and the π index, which measures genetic diversity as the average number of mutations per site between each pair of haplotypes in the viral population [36] (Supplementary Table S4).
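A minimal Python sketch of the four indices, computed from haplotype sequences and their relative frequencies; the formulas follow the verbal definitions above, and the published implementations [36] may apply additional normalization:

    import itertools
    import math

    # Toy haplotypes (equal length) with relative frequencies.
    haps = {"ACGTACGT": 0.6, "ACGAACGT": 0.3, "ACGAACTT": 0.1}

    def shannon(freqs):                        # Sn, abundance index
        return -sum(p * math.log(p) for p in freqs if p > 0)

    def gini_simpson(freqs):                   # G = 1 - sum(p_i^2)
        return 1.0 - sum(p * p for p in freqs)

    def per_site_diff(a, b):                   # fraction of differing sites
        return sum(x != y for x, y in zip(a, b)) / len(a)

    def mutation_frequency(haps):              # Mf, diversity vs. master
        master = max(haps, key=haps.get)
        return sum(p * per_site_diff(h, master) for h, p in haps.items())

    def nucleotide_diversity(haps):            # pi, mean pairwise diversity
        return 2 * sum(haps[a] * haps[b] * per_site_diff(a, b)
                       for a, b in itertools.combinations(haps, 2))

    f = list(haps.values())
    print(shannon(f), gini_simpson(f))
    print(mutation_frequency(haps), nucleotide_diversity(haps))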
Quasispecies conservation. Sequence conservation was determined by calculating the information content of each position (both nt and aa) in a multiple alignment of all haplotypes obtained by NGS, followed by sliding window analysis, as previously described by our group [16]. To determine the intergroup variability in sequence conservation, the standard deviation from the overall mean information content was calculated for each group of patients.

Mutation analysis. Each haplotype was aligned to a reference consensus sequence of the same genotype to detect aa changes. The genotype consensus was generated by aligning the region of interest (nt 1255-1611) extracted from full-length HBV genome sequences obtained from GenBank (13 sequences for genotype A, 23 for genotype C, 17 for genotype D, 8 for genotype E, 10 for genotype F, and 5 for genotype H; Supplementary Table S3). The frequency of each aa change observed was calculated as the sum of the relative frequencies of the mutated haplotypes in the patient's QS population. The three-dimensional structures of HBx were predicted by I-TASSER [37] using a validated HBx reference model [38] as a custom-added modelling constraint. The fold-stability change ΔΔg between wt and each of the two mutation patterns was calculated by STRUM [39], with ΔΔg (WT-Mut) < 0 indicating reduced stability in the presence of the mutations.

In vitro HBV expression in the presence of mutations. Mutations that were differentially represented between the patient groups were tested in vitro by HBV linear monomer transfection [40,41]. The HBV monomer was obtained from the pCRII.HBV.ayw plasmid (kindly donated by Prof. Massimo Levrero and Dr. Laura Belloni), which contains a full-length HBV genotype D genome (subtype ayw). The HBV monomer start/end nucleotide positions fall within the L-HBsAg gene: the start encompassed nt 1 to nt 837, whereas the end included positions 2850-3182 of the same gene (positions are given relative to the HBV ayw consensus sequence, NC_003977.2). Mutations were introduced by site-directed mutagenesis (QuikChange Lightning Site-Directed Mutagenesis Kit, Agilent Technologies) following the manufacturer's procedure, and the mutated plasmids were isolated using the Plasmid Midi Kit (QIAGEN) according to the manufacturer's instructions. Linear HBV genomes, both wt and mutated, were obtained by digestion with EcoRI and PvuI, extracted from gels using the QIAquick Gel Extraction Kit (QIAGEN), and quantified with a Qubit fluorimeter (ThermoFisher). HepG2-hNTCP cells were cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS), penicillin (100 U/mL), streptomycin (100 μg/mL), GlutaMAX (2 mM), and puromycin (5 μg/mL). To synchronize the cells, they were treated with 2.5% DMSO for at least 14 days before plating [42-44]. The day before transfection, cells were plated in 24-well plates at a density of 60,000 cells/mL in DMEM with 10% FBS (complete medium). The next day, cells were transfected with wt or mutated linear HBV monomer (250 ng/well) using the TransIT-X2 Dynamic Delivery System (Mirus). The pmaxFP-Green plasmid (Amaxa Biosystems) was added at 1:10 to each well as a transfection control. The medium was replaced the next day and changed every 2 days. Supernatants were collected at 5 days post-transfection. To remove residual linear DNA, supernatants were treated with DNase I (Sigma, 1 mg/mL) in the presence of MgCl2 (25 mM), and the reaction was stopped after 1 h with EDTA (25 mM). HBV DNA was quantified by real-time PCR (COBAS 6800, Roche Diagnostics), and hepatitis B core-related antigen (HBcrAg) was tested using a commercial chemiluminescent immunoassay (Lumipulse, Fujirebio) with a limit of detection of 2 logU/mL.

Statistics. Intergroup differences in age, viremia, ALT, complexity indices, and aa changes were evaluated using the Kruskal-Wallis test and the Dunn post hoc test. Differences in QS conservation were evaluated by the Wilcoxon test, whereas differences in genotype distribution were assessed with ANOVA followed by the Tukey test. The Kruskal-Wallis test and the Dunn post hoc test were used when comparing HBV DNA and HBcrAg titers in cell supernatants. p values were adjusted with the Bonferroni correction, and those < 0.05 were considered statistically significant. All tests were done with R language software (3.2.3) [45].
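All tests in the study were run in R; for illustration, the same comparison can be sketched in Python with scipy and the third-party scikit-posthocs package, here on toy values rather than the study data:

    import scipy.stats as st
    import scikit_posthocs as sp  # third-party Dunn post hoc implementation

    # Toy nucleotide-diversity values per clinical group (not study data).
    groups = {
        "CI":  [0.027, 0.031, 0.022, 0.025],
        "CHB": [0.007, 0.009, 0.006, 0.008],
        "HCC": [0.005, 0.004, 0.006, 0.005],
    }

    h, p = st.kruskal(*groups.values())
    print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

    # Dunn post hoc with Bonferroni adjustment, as described above.
    print(sp.posthoc_dunn(list(groups.values()), p_adjust="bonferroni"))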
Prediction of Hopeless Peptides Unlikely to be Selected for Targeted Proteome Analysis

In targeted proteomics using liquid chromatography-tandem triple quadrupole mass spectrometry (LC/MS/MS) in the selected reaction monitoring (SRM) mode, selecting the best observable or visible peptides is a key step in the development of SRM assay methods for target proteins. A direct comparison of signal intensities among all candidate peptides by brute-force LC/MS/MS analysis is a concrete approach to peptide selection. However, the analysis requires an SRM method with hundreds of transitions. This study reports the development of a method for predicting and identifying hopeless peptides in order to reduce the number of candidate peptides needed for brute-force experiments. Hopeless peptides are proteotypic peptides that are unlikely to be selected as targets in SRM analysis owing to their poor ionization characteristics. Targeted proteomics data from Escherichia coli demonstrated that the relative ionization efficiency between two peptides could be predicted from their sequences when a multivariate regression model is used. Validation of the method showed that > 20% of the candidate peptides could be successfully eliminated as hopeless peptides with a false positive rate of less than 2%.

INTRODUCTION
Targeted proteomics is a method used to determine the abundance of target proteins in biological samples [1-3]. A crude protein fraction from a biological sample is digested to produce a mixture of proteotypic peptides (PTPs). The amounts of pre-selected peptides derived from the target proteins are determined by the selected reaction monitoring (SRM) mode of liquid chromatography-tandem triple quadrupole mass spectrometry (LC/MS/MS) [4]. In usual SRM assay methods, 2-4 PTPs are selected for the analysis of each target protein, the amounts of which are determined by 3-4 SRM transitions per peptide [5,6]. Selecting the best-observable or visible peptides is a key step in the development of an SRM assay method for the selective and sensitive analysis of target proteins. This is because numerous peptides with various lengths, sequences, and ionization efficiencies are produced when a protein is digested with trypsin. For example, in the analysis of a phosphoglycerate kinase in yeast (Pgk1p from Saccharomyces cerevisiae) using the SRM method, 4 suitable peptides were selected from more than 30 candidate PTPs (6-25 amino acid residues) produced by trypsin digestion (Supplementary Table S1) [7,8]. After establishing comprehensive SRM assay methods, such as the SRMAtlas of human and yeast proteins [9-13], these methods can be reused owing to their basic compatibility among triple quadrupole mass spectrometers [14]. However, SRM assay methods for the targeted proteome analysis of non-model organisms, such as various industrially important bacteria for biomaterial production, are under continuous development [15,16]. For the efficient development of SRM assay methods, heuristic rules have been proposed for selecting suitable peptides [17,18]. In silico tools such as PeptideSieve, CONSeQuence, and PeptideRank have also been reported to predict the best-observable, visible, or flyer peptides from the sequence of a target protein [19-21]. These algorithms were developed based on training data containing lists of observable peptides in shotgun proteomics datasets. However, a literature-reported SRM assay method showed that these rules do not always explain the selected peptides.
For example, the selection rules recommend using peptides of 8-20 residues and avoiding peptides that contain His residues. However, 4% and 10% of the peptides violated these rules in the yeast SRM assay method for the central metabolism-related enzymes [7,8]. Moreover, the peptides selected in the SRM assay method do not coincide with the results of in silico predictions. A peptide, VLENTEIGDSIFDK, which is employed in the SRM assay method for Pgk1p, is ranked 25th and 15th by CONSeQuence and PeptideRank, respectively (Supplementary Table S1). These results suggest that predicting the best-observable peptides still involves a measure of uncertainty, and a brute-force experiment using LC/MS/MS is the most reliable approach for selecting suitable peptides from large numbers of candidates in the development of an SRM assay method [4,6,22]. For example, an SRM method with more than 200 channels is required for an experimental survey of all y-series product ions produced from the divalent precursor ions [M+2H]2+ derived from the candidate peptides of S. cerevisiae Pgk1p.

In this study, a method for predicting and identifying hopeless PTPs was investigated. Hopeless peptides refer to peptides that are unlikely to be selected as targets of SRM analysis owing to their poor signal intensity in the SRM chromatogram. The prediction of hopeless PTPs will reduce the number of candidate peptides to be investigated in a brute-force experiment. For this purpose, an SRM assay dataset was obtained from 203 lines of E. coli that overexpress central metabolism-related enzymes. Using the total peak area data for 3,856 peptides derived from 203 different proteins, a multivariate regression model was constructed that permits the relative total peak areas between two peptides to be predicted. The prediction method developed in this study was able to reduce the number of candidate peptides by > 20% with a false positive rate of less than 2%.

Sample preparation
Escherichia coli K-12 strains overexpressing the central metabolism-related enzymes were obtained from the E. coli ASKA library, a complete E. coli K-12 ORF archive that includes strains overexpressing each ORF [23]. Each E. coli strain was cultured in 15 mL of Luria-Bertani (LB) medium containing 30 µg/mL chloramphenicol, with shaking at 150 rpm at 37 °C. When the OD600 reached 0.3, isopropyl β-D-1-thiogalactopyranoside (IPTG, final conc. 1 mM) was added to the culture. Crude proteins were extracted from E. coli cells in the exponential growth phase (OD600 = 1.0) by a previously described method, using a cell lysis buffer containing 50 mM Hepes (pH 7.5), 5% glycerol, 15 mM dithiothreitol, 100 mM KCl, 5 mM ethylenediaminetetraacetic acid, and complete protease inhibitor cocktail (Roche, Basel, Switzerland; 1 droplet/50 mL) [7,8]. Protein concentrations were determined by the Bradford method [24]. Trypsin digestion was performed by the method described by Uchida et al. [17]. The peptide solutions were desalted using GL-Tip GC micropipette tips (GL Science, Tokyo, Japan).

Data analyses
Multivariate regression analyses were performed using the lm and step functions in R 3.1.3.

RESULTS AND DISCUSSION
Hopeless proteotypic peptides
Three types of proteotypic peptides (PTPs), namely suitable, promising, and hopeless, are introduced in this study.
For the case of the Pgk protein in Escherichia coli (UniProt ID: P0A799), an in silico analysis using the amino acid sequence (504 aa) indicated that 19 PTPs within 7-30 residues would be produced by trypsin digestion (Table 1). To compare signal intensities among the PTPs, a tryptic peptide sample was prepared from an E. coli strain overexpressing Pgk and analyzed by LC/MS/MS using a brute-force approach (Fig. 1). The signal intensity of each peptide was determined as the total peak area of the multiple SRM series of singly charged y-series product ions produced from a precursor ion [M+2H]2+ (see Materials and Methods). The SRM analysis showed that the signal derived from SLYEADLVDEAK was one of the most intense signals among the 19 candidate peptides (Table 1). The signal intensities of the candidate peptides were not correlated with the ranks predicted by CONSeQuence and PeptideRank 20,21) (Table 1). These results suggest that a brute-force experiment using LC/MS/MS is promising in terms of developing a new SRM assay method. As mentioned above, 2-4 'suitable' peptides for the SRM assay method were selected considering their signal intensity, retention time, and overlap with interfering peaks. Here, peptides whose total peak areas were more than 20% of that of the most intense peptide were considered to be 'promising' candidates for use in SRM assay methods. For example, the literature-reported SRM assay method selected two suitable peptides, VATEFSETAPATLK and LTVLDSLSK, from the list of promising peptides. 16) In this study, PTPs whose total peak areas were less than 20% of that of the most intense peptide were defined as 'hopeless' candidates for SRM assay methods. More strictly, the total peak area of a hopeless peptide was required to be less than that of four or more other peptides, since 3-4 peptides are typically employed for SRM assay methods. 17) This implies that there would be no hopeless candidates in the case of a small protein. For example, the YAALCDVFVMDAFGTAHR peptide from Pgk is a hopeless peptide, since its signal intensity was only 2.4% of that of the most intense peptide (Table 1). The findings also indicate that a sequence-based prediction of hopeless peptides would reduce the number of candidate peptides investigated by the brute-force experiment. It was also suggested that false positives should be avoided in the prediction, since suitable peptides would be overlooked if a promising peptide were erroneously classified as hopeless.

Construction of the multivariate regression model

A multivariate regression analysis was conducted to predict hopeless peptides from amino acid sequences. In this study, the total peak area determined by the SRM series of multiple y-ions produced from [M+2H]2+ was considered. The reason for this is that 82% and 100% of the literature-reported SRM methods for yeast and E. coli, respectively, consist of transitions of y-ions produced from [M+2H]2+. 7,8,16) The total peak area (the sum of the peak areas of all SRM transitions) was employed to represent the overall ionization efficiency of the peptides. A training dataset was obtained from the 203 lines of E. coli 23) overexpressing the central metabolism-related enzymes (Supplementary Table S2). For each enzyme, an E. coli strain overexpressing the target protein was cultured in synthetic medium, from which a crude protein extract was obtained.
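Before turning to the regression model, the promising/hopeless definitions above can be made concrete with a short Python sketch. This is a minimal illustration, not the study's code; the peak area values below are arbitrary placeholders, not the measured Table 1 data.

```python
def classify_ptps(peak_areas):
    """Label each proteotypic peptide 'promising' or 'hopeless'.

    A peptide is hopeless when its total peak area is below 20% of the
    most intense peptide's AND at least four other peptides outrank it
    (since 3-4 peptides are kept for an SRM assay method).
    """
    max_area = max(peak_areas.values())
    labels = {}
    for pep, area in peak_areas.items():
        outranked_by = sum(1 for a in peak_areas.values() if a > area)
        hopeless = area < 0.2 * max_area and outranked_by >= 4
        labels[pep] = "hopeless" if hopeless else "promising"
    return labels

# Illustrative areas only (arbitrary units, not the measured Table 1 data)
areas = {"SLYEADLVDEAK": 100.0, "VATEFSETAPATLK": 60.0, "LTVLDSLSK": 55.0,
         "AAAVGLK": 30.0, "LLDVTK": 25.0, "YAALCDVFVMDAFGTAHR": 2.4}
print(classify_ptps(areas))  # only YAALCDVFVMDAFGTAHR is hopeless
```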
Following the preparation of a tryptic peptide sample by reduction, alkylation, and protease digestion, the total peak areas of all tryptic peptides produced from the overexpressed protein were determined from the SRM series of multiple y-ions produced from [M+2H]2+ (Supplementary Table S2).

[Table notes: 1) Peptides of less than 7 and more than 30 residues were removed from the candidates. 2) A total peak area of the SRM series of all y-series product ions produced from a precursor ion [M+2H]2+. 3) Bartonella henselae was selected as the model organism. 4) Predicted from the result of score mode. 5) Threshold for an S score of more than 4.]

Using the total peak area values as an objective variable, a multivariate regression model was constructed as follows:

log(I_x) = a_0 + a_L · L_x + Σ_AA a_AA · N_AA,x + Σ_p b_p · P_p,x   (1)

where I_x and L_x indicate the total peak area and the length of peptide x, respectively. N_AA,x is the number of amino acid residues AA in peptide x. P_p,x indicates the value of parameter p for peptide x. a_AA and b_p are coefficients, as shown in Table 2. Parameters (P) were selected from the amino acid index (AAindex) considering the Akaike information criterion (AIC) level. 26) The coefficient of determination (R2) of the prediction model was 0.567. A comparison between the predicted and measured total peak areas I_x (Fig. 2a) suggests that a direct prediction of the total peak area was difficult when this regression model was used. This can be attributed to variation in the overexpressed protein levels among the samples of the training dataset. Thus, the relative total peak area between two peptides derived from an identical protein was used as an objective variable to develop the following modified model:

log(I_x / I_y) = a_L (L_x − L_y) + Σ_AA a_AA (N_AA,x − N_AA,y) + Σ_p b_p (P_p,x − P_p,y)   (2)

Here, x and y indicate two peptides derived from an identical protein. The R2 value of this prediction model was 0.658, indicating that the prediction was improved when the relative total peak area between two peptides was employed (Fig. 2b). Factors correlated with ionization efficiency could be estimated from the results of the multivariate regression (Table 2). In addition to the so-called hydrophobic residues (I, L, and V), aromatic amino acids (F, W, and Y) are also preferable for ionization efficiency. However, the positive charge derived from R and K has a strong negative effect, probably due to the formation of multiply charged ions. The negative coefficient for M suggests that peptide abundance could be affected by partial oxidation of the methionine side chain. Furthermore, several factors related to peptide length, steric conformation, and hydrophobicity also contribute to ionization efficiency, as suggested in previous studies. 19,27)

Fig. 2. Multivariate regression analyses for predicting the total peak area of peptides. (a) A comparison between the total peak area predicted by Eq. (1) (I_x′) and measured data (I_x). (b) A comparison between the relative total peak area predicted by Eq. (2) ((I_x/I_y)′) and measured data (I_x/I_y). Coefficients of determination (R2) of the prediction models are also represented.

Prediction of hopeless peptides

Since the regression model, Eq. (2), predicts a relative total peak area between two peptides, a heuristic procedure was employed for selecting hopeless peptides, as follows:
1. All sequences of proteotypic peptides within 7-30 residues were generated from the sequence of the target protein by in silico trypsin digestion.
2. The values of the predicted relative total peak area (log(I_x/I_y)′) were calculated between peptide x and all other peptides y, using Eq. (2).
The values were compared with a threshold value (thres) to determine the score (S) of peptide x, as the number of cases with log(I_x/I_y)′ < thres. A peptide with a larger S would be hopeless, because the total peak area of such a peptide is significantly smaller than that of many other PTPs.
3. A peptide was considered to be hopeless if its S score was larger than 0.2 × N, where N is the total number of proteotypic peptides produced from a target protein. In the case of a small protein (0.2 × N < 4), a peptide with S > 4 was considered to be hopeless, since 3-4 peptides are employed for SRM assay methods. 17)

Since there is one variable (thres) in the procedure, the relationship between thres, the total number of false positives, and the total number of predicted hopeless peptides was investigated. In the case of predicting the hopeless peptides for the 203 E. coli proteins used in the training dataset, 33.1% (1,275/3,856) of the PTPs were predicted to be hopeless when the threshold level was thres = −2.0. A comparison of the predicted hopeless peptides with the measured promising peptides indicated that the false positive rate was 3.0% (50/1,645). When a more rigorous threshold such as thres = −2.5 was employed, 27.1% (1,045/3,856) of the total PTPs were still predicted to be hopeless, and the false positive rate was reduced to 1.1% (18 cases in total) (Supplementary Table S3). The identical 203 E. coli proteins were also analyzed by the CONSeQuence web tool to predict hopeless peptides. The results showed that 20.4% (787/3,856) of the PTPs were predicted to be hopeless (with CONS level = 0). However, a relatively large false positive rate (18.1% = 297/1,645) was found (Supplementary Table S3). These results suggest that the method developed in this study is capable of efficiently removing hopeless peptides from candidate PTPs with a low false positive rate.

Validation by other datasets

The prediction method was also validated using the literature-reported SRM assay methods for 393 proteins of E. coli. 16) The prediction of hopeless peptides for the 393 proteins of E. coli by the developed method showed that 23.3% (1,952/8,371) of tryptic peptides can be classified as hopeless with a threshold level of thres = −2.5 (Table 3). A comparison of the predicted hopeless peptides with the literature-reported SRM assay method revealed that the false positive rate was 0.4% (3/670 suitable peptides), although the training and validation datasets were obtained using different mass spectrometers (Shimadzu LCMS-8040 and Sciex 5500 QTrap, respectively). In the case of the SRM assay methods for 106 proteins in a model cyanobacterium (Synechocystis sp. PCC 6803, constructed for a Thermo Scientific TSQ Vantage), 15)

CONCLUSION

In this study, a method for predicting hopeless peptides was investigated using a dataset including total peak area values for 3,856 peptides derived from 203 E. coli proteins. The method developed in this study successfully predicted hopeless peptides without suitable peptides being overlooked. This indicates that the number of SRM channels required for a brute-force experiment could be decreased by >20% with a false positive rate of less than 2%. The required number of SRM channels could be further reduced by developing more efficient prediction methods, by introducing a more sophisticated regression model using larger amounts of training data, and by considering additional multivalent ions such as [M+3H]3+ and the contribution of other product ions, such as b-series ions.
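The scoring procedure above can be summarized in a short Python sketch. This is a minimal illustration, assuming a hypothetical predict_log_ratio(x, y) callable standing in for Eq. (2); the thresholds mirror the values reported in the text.

```python
def predict_hopeless(peptides, predict_log_ratio, thres=-2.5):
    """Return the peptides predicted to be hopeless.

    peptides: list of candidate PTP sequences (7-30 residues).
    predict_log_ratio: callable (x, y) -> predicted log(I_x / I_y),
        i.e., Eq. (2) of the text (hypothetical stand-in here).
    thres: threshold on the predicted log ratio.
    """
    n = len(peptides)
    hopeless = []
    for x in peptides:
        # S score: number of peptides y that x is predicted to be
        # significantly weaker than.
        s = sum(1 for y in peptides
                if y != x and predict_log_ratio(x, y) < thres)
        # Hopeless if S > 0.2 * N; for small proteins (0.2 * N < 4),
        # require S > 4 instead.
        cutoff = max(0.2 * n, 4)
        if s > cutoff:
            hopeless.append(x)
    return hopeless
```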
2018-04-03T03:46:22.432Z
2017-06-02T00:00:00.000
{ "year": 2017, "sha1": "7741c3cb67bc8485e5b25e2bf03013a469393d5b", "oa_license": "CCBYNC", "oa_url": "https://www.jstage.jst.go.jp/article/massspectrometry/6/1/6_A0056/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7741c3cb67bc8485e5b25e2bf03013a469393d5b", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
233618077
pes2o/s2orc
v3-fos-license
Orchestration-based mechanism for sampling adaptation in sensing-based applications

Currently, the world is witnessing a boom in sensing-based applications, where the number of connected devices is becoming higher than the number of people. Such small sensing devices are now deployed in their billions around the world, collecting data about their surroundings and reporting them to data analysis centres. This allows a better understanding of the world and helps to reduce the effects of potential risks. However, while the benefits of such devices are real and significant, sensing-based applications face two major challenges: big data collection and the restricted power of sensor batteries. In order to overcome these challenges, data reduction and sensor sampling adaptation techniques have been proposed to reduce data collection and to save sensor energy. The authors propose an orchestration-based mechanism (OM) for adapting the sampling rate of the sensors in the network. OM is two-fold: first, it proposes a data transmission model at the sensor level, based on clustering and the Spearman coefficient, in order to reduce the amount of data transmitted to the sink; second, it proposes a sampling rate mechanism at the cluster-head level that searches for similarity between data collected by neighbouring sensors, and then adapts their sensing frequencies accordingly. A set of simulations on real sensor data has been conducted to evaluate the efficiency of OM, in terms of data reduction and energy conservation, compared to other existing techniques.

| Problem statement

Indeed, sensing-based applications pose many challenges for both the community and researchers. On the one hand, sensor hardware is limited in resources, especially in battery supply, which cannot be replaced or recharged in hostile or harsh environments [3,4]. On the other hand, the dense deployment of sensor nodes, along with the need for continuous zone monitoring, leads to a massive amount of data collection in such networks [5,6]. Consequently, transmitting such big data will quickly deplete the available energy of the sensors. Therefore, researchers have focused on data reduction and sensing frequency adaptation techniques to improve the energy consumption of sensors and reduce the complexity of data analysis at the sink node [7][8][9][10].

| Our contribution

Here, an orchestration-based mechanism for energy conservation and transmission minimisation in sensing-based applications is proposed. The objective of this mechanism is to adapt the sampling rate of each sensor according to the variation of the monitored condition, the remaining energy of the sensor, and the similarity to data collected by neighbouring nodes. The contribution of this study is described as follows:

• At the sensor level, a new version of the K-means clustering algorithm, called SK-means (i.e., Spearman-based K-means), which combines the Spearman coefficient with the traditional K-means, is proposed. The new version aims to overcome the two main challenges of traditional K-means: the selection of the optimal number of clusters and the convergence function. Subsequently, SK-means reduces the periodic data transmission from each sensor to the sink, thus avoiding network overload and saving sensor energy.

• At the cluster-head (CH) node, the authors propose a model that first searches for the spatial-temporal correlations between nodes.
Then, based on the node correlation degree, the model adapts the sensing frequency of each sensor according to the similarity with data collected by its neighbours. This reduces the data collection size, eliminates the redundancy existing among nodes, and lowers the power consumption of the sensors.

Through simulations on real sensor data, the effectiveness of the proposed mechanism has been validated in terms of minimising energy consumption and data transmission compared to other existing techniques. The remainder of this article is organised as follows. Section 2 outlines different data reduction and energy-efficient techniques proposed in sensing-based applications. Section 3 depicts the periodic clustering architecture used in the network. Sections 4 and 5 present the data reduction model and the sampling rate model proposed at the sensor and CH levels, respectively. Simulation results are discussed in Section 6. Finally, the conclusion and future work are highlighted in Section 7.

| RELATED WORKS

In recent years, proposing energy-efficient techniques has been the main target of almost all research efforts. Their primary focus is to reduce the data routed over the network, either at the sensor node or at intermediate nodes, mostly the CHs. Indeed, in most of the proposed techniques, the reduction process is performed based on data aggregation [11], clustering [12], compression [13], or sampling rate adaptation [7,8]. The authors in [9,10,[14][15][16][17] dedicated their works to reducing the raw data transmission at the level of the sensors. In [14], the authors propose a priority-based compressed data aggregation (PCDA) technique to reduce the amount of health data transmitted. PCDA uses a compressed sensing approach, based on a sensing matrix and convex optimisation, followed by a cryptographic hash algorithm, which uses a key predistribution scheme, at the biosensor level to preserve information accuracy before sending data for diagnosis. The simulations show that PCDA ensures a low execution time and communication overhead with moderate energy consumption. In [15], the authors propose a sequential lossless entropy compression (S-LEC) scheme which organises the alphabet of integer residues obtained from a differential predictor into groups of increasing size. An S-LEC codeword consists of two parts: the entropy code specifying the group and the binary code representing the index in the group. The performance of S-LEC is evaluated on real-world datasets from SensorScope and volcanic monitoring, and the obtained results show reduced energy consumption for the dynamic volcano dataset compared to other existing techniques, particularly LEC and S-LZW. In [9], the authors propose three mechanisms that allow the sensor to adapt its sampling rate to the variation of the monitored environment. The proposed mechanisms are based, respectively, on similarity functions (Jaccard coefficient), distance functions (Euclidean distance), and analysis of variance with statistical tests (ANOVA and the Bartlett test). The proposed techniques work in rounds, where each round consists of a set of time periods, and the sensor adapts its sampling frequency at the end of each round. By adapting to different scenarios, the proposed techniques achieve minimal energy consumption with accurate data collection. Finally, the authors of [10] propose an adapted version of the dual prediction scheme (DPS) algorithm.
The new version uses a collection of models for data prediction over the past sequences of the DPS algorithm, without the classical updating of the history data table. Indeed, the new prediction model is computed at the sensors and sent to the sink, or vice versa. The performance of DPS is tested using the data collected from the meteorological station located at Tlemcen (Algeria), and the results show that the data transmission ratio is reduced by more than 90% when accurate predictions are achieved. The authors in [18][19][20][21][22][23] dedicated their works to reducing the amount of data circulating in the network along the path to the sink, i.e., at intermediate nodes. In [19], the authors propose a cluster-based data gathering algorithm for WSNs called lifetime-enhancing cooperative data gathering and relaying (LCDGRA). LCDGRA works in three phases: the first phase groups the sensor nodes into clusters based on the K-means clustering and Huffman coding algorithms. The second phase assigns a set of relay nodes to each CH in order to aggregate data before sending them to the sink node. During the last phase, the aggregated data are coded based on random linear coding and then relayed to the base station. The simulations show that LCDGRA can ensure efficient convergence of K-means (an average of 31 iterations), reduced data latency (up to 18%), and lower energy consumption (up to 37%) compared to other techniques. In [20], an online data tracking and estimation (ODTE) scheme is proposed to track poor data collected at the sink. ODTE is mainly based on two systems: a data prediction system (DPS) and a distortion factor (DF). DPS is used at the sensor in order to reduce its transmission using a defined limit, while DF estimates optimal data collected at the sink node. Although ODTE can greatly reduce data transmission and conserve node batteries, it is very complex in terms of computation and processing speed. The authors of [18] propose a routing protocol called gateway clustering energy-efficient centroid (GCEEC) for WSNs. The objective of GCEEC is to balance the load among the sensor nodes as well as to select and rotate the CH near the energy centroid position of the cluster. The results show that GCEEC can greatly extend the network lifetime and reduce the network overload. However, this is limited by the many assumptions made in the tested scenario. Finally, the authors of [21] propose a structure fidelity data collection (SFDC) technique dedicated to cluster-based periodic applications in WSNs. SFDC searches for both spatial and temporal correlation between nodes, using distance functions and similarity metrics, respectively. Then, it exploits the dependencies to reduce the number of nodes required for sampling and data transmission, and shows that such a reduction saves energy. The authors in [3][4][5][6]24,25] dedicated their works to minimising the data transmission at several levels in the network, i.e., at the sensor and CH levels. In [24], the authors propose a data management framework for data collection and decision making in connected healthcare.
The framework relies on three algorithms: first, an emergency detection algorithm that sends critical records directly to the coordinator; second, an adaptive sampling rate algorithm, based on ANOVA and the Fisher test, that allows each sensor to adapt its sampling frequency to the variation of the patient's situation; and third, a data fusion and decision-making model at the coordinator, based on a decision matrix and fuzzy set theory. Although it has great advantages for patient monitoring and assessment, the proposed framework suffers from two main disadvantages: (1) in the case of a low-criticality patient, none of the data would be archived in the hospital; thus, doctors cannot review the patient archive to check the patient's progress; and (2) doctors cannot predict the progress of the patient's situation in subsequent periods of time. The authors of [5] propose a spatial-temporal model to extend the network lifetime based on three similarity metrics: Euclidean distance, cosine similarity, and the Pearson product-moment coefficient (PPMC). They then propose a scheduling algorithm for switching correlated sensor nodes to sleep mode. Through real experiments, the authors show that PPMC gives the best results, in terms of conserving network energy, compared to the other similarity metrics. However, PPMC has several disadvantages: (1) it does not search for similarity at the sensor node level; (2) it does not take into account the residual energy of the sensors when switching them to sleep mode; and (3) it assumes that all the correlated sensors have the same degree of correlation. Finally, the authors of [25] propose a two-level node mechanism dedicated to periodic sensor applications. First, the authors propose an on-node aggregation method to remove redundant data collected by the sensor. Then, an in-network data reduction technique called prefix frequency filtering (PFF) is introduced at the CH level. PFF allows CHs to find similarities between data collected by neighbouring nodes in the same cluster, using the Jaccard similarity function. Although most of the proposed techniques allow efficient energy savings, they fail to satisfy all aspects of sensing-based applications and lack maturity. In addition, they are very complex and require massive processing. In the proposed work, an energy-efficient data reduction mechanism that is less complicated and suitable for resource-limited sensor nodes is presented. Furthermore, the proposed mechanism takes several parameters into account when adapting the sampling rate of the sensor, in order to preserve the integrity of the collected information.

| CLUSTER-BASED ARCHITECTURE NETWORK

In sensing-based applications, the network architecture represents one of the most important challenges after deploying the sensor nodes. Indeed, some metrics (such as congestion, energy consumption, network overload, and data loss) are highly dependent on the network architecture. Here, the proposed mechanism relies on two main concepts of the network: cluster-based architecture and periodic data acquisition. On the one hand, the cluster-based network has been considered an efficient architecture for sensing applications in terms of energy conservation, high network scalability, and data transmission. Typically, a cluster-based architecture divides the sensors in the network into clusters and assigns a cluster-head (CH) to each cluster. The CH is responsible for managing the data collected by the sensor members of that cluster.
Subsequently, the CH can perform any type of data processing (such as aggregation, compression, scheduling, and spatio-temporal correlation) over the sensor data before sending them toward the sink node. Figure 1 illustrates a simple network based on the cluster architecture, in which the communication between the sensors and their CHs, or between the CHs and the sink, is performed according to single-hop transmission. However, dividing a network into clusters is not an easy task and faces many challenges. Hence, one can find many works in the literature interested in issues related to cluster networks, such as the selection of cluster heads [26][27][28], the optimisation of cluster size [29,30], and the communication between sensors/CHs and CHs/sink [31,32]. However, the concern of the authors is to study the variation of data collected by the sensors and not the formation of the clusters themselves. Therefore, a geographical clustering scheme in which nearby sensors are already assigned to the same cluster is considered.

FIGURE 1 Cluster-based architecture network.

On the other hand, sensor nodes are responsible for monitoring the target zone and transmitting the collected data toward the CHs, which, in turn, forward them to the sink. Unfortunately, data transmission is a high-cost operation in terms of energy consumption. Thus, considering its limited energy supply, the lifetime of the sensor will decrease drastically if all the collected data are sent to the CH. Hence, the periodic data acquisition model has been introduced in sensing-based applications with the aim of reducing the amount of data collected and transmitted by the sensors. Basically, in a periodic acquisition model, data are collected periodically, where each period p is partitioned into time slots. At each slot t, each sensor node N_i captures a new reading r_i; at the end of p, it has thus formed a vector of F readings as follows:

R_i = [r_1, r_2, …, r_F]

After that, the sensor sends its vector of data, i.e., R_i, to its appropriate CH.

| SENSOR DATA REDUCTION MODEL

As mentioned before, data transmission consumes much of the available sensor energy. Thus, in order to extend its lifetime, the amount of data transmitted by the sensor should be reduced. However, the data collected by sensor devices are mostly redundant and contain useless information. Thus, one of the most effective ways to reduce data transmission is to eliminate redundancy and filter out non-useful information before sending the data to the CH. This section proposes a data reduction model that allows each sensor to locally search for similarity between the data collected periodically, remove the existing redundancy, and then send the result to the CH. The proposed model is based on the K-means algorithm adapted to the Spearman coefficient metric, which allows similarities among data to be found by grouping them into clusters.

| Recall of the K-means clustering algorithm

Generally, clustering is an exploratory data task that aims to group data into clusters in such a way that the similarity among data in the same cluster is high and that among clusters is low. Researchers have proposed many data clustering techniques for various types of data. One of the most popular algorithms in data clustering is K-means [33]; it is flexible, simple, already adapted to a vast number of applications, and used with various kinds of data [34][35][36]. Typically, K-means is an iterative algorithm in which the process starts by randomly selecting an initial centroid for each cluster.
Then, each data point is assigned to the nearest centroid, and the first round of cluster formation is performed. After that, the cluster centroids are updated and the process is repeated until the convergence of the criterion function (Algorithm 1). One of the most common criterion functions used in K-means is the sum of squared errors.

Algorithm 1 K-means algorithm
Require: Set of readings R_i; number of clusters K.
Ensure: Set of clusters C.
1: for each cluster C_j, j ∈ {1, …, K} do
2:   randomly choose a centroid c_j among the readings of R_i
3: end for
4: repeat
5:   for each reading r_i ∈ R_i do
6:     assign r_i to the cluster C_j* with the nearest centroid (i.e., |r_i, c_j*| ≤ |r_i, c_j|; j ∈ {1, …, K})
7:   end for
8:   for each cluster C_j, j ∈ {1, …, K} do
9:     update the centroid c_j to be the centroid of all readings currently in C_j, so that c_j = (1/|C_j|) Σ_{i ∈ C_j} r_i
10:  end for
11: until all K clusters meet the criterion function convergence
12: return C

| Spearman coefficient metric

The Spearman correlation is a non-parametric test used to measure the degree of association between two data sets. It is determined by ranking each point of the two data sets; in case of ties, an average rank is used. Moreover, the Spearman correlation gives a value between +1 and −1; +1 indicates a perfect association of ranks, −1 indicates a perfect negative association of ranks, while 0 indicates no association between the ranks of the two data sets. Unlike other tests, especially the Pearson coefficient, the Spearman test does not make any assumption about the distribution, or the linear relationship, of the values in the two data sets. Mathematically, the Spearman correlation ρ between two data sets R_i and R_j can be calculated according to the following equation:

ρ = 1 − (6 Σ_k d_k²) / (F (F² − 1))

where d_k is the difference between the ranks of corresponding values in the sets, and F is the number of values in each set. Therefore, R_i and R_j are considered correlated sets with similar values if and only if the Spearman correlation is greater than a threshold ρ_s:

ρ(R_i, R_j) ≥ ρ_s

| K-means adapted to the Spearman coefficient: SK-means

After collecting the data at each period, i.e., R_i, the sensor tries to minimise its size before sending it to the CH, in order to save energy. The authors propose to use the K-means algorithm to group similar data in R_i into clusters; the data redundancy in each cluster is then eliminated before data transmission. However, the use of the traditional K-means faces two main challenges: the selection of the cluster number (K) and the convergence criterion function. On the one hand, selecting the number of clusters is a crucial decision, as it determines the data transmission ratio from the sensor and affects the accuracy of the information sent to the sink. On the other hand, the number of iterations generated by K-means is highly dependent on the selection of the convergence criterion function; thus, an inappropriate criterion function can increase the computational load of K-means and thereby affect the data latency metric. Therefore, in order to overcome these challenges, a new version of K-means, called SK-means, adapting the Spearman correlation to the traditional K-means algorithm, is proposed. The idea behind SK-means is that all readings collected by a sensor during a period are initially considered similar and are assigned to the same cluster, i.e., the initial cluster. Then, it recursively divides a cluster into smaller clusters whenever the readings inside that cluster are found not to be sufficiently similar.
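As an illustration, the similarity test above can be written in a few lines of Python using scipy.stats.spearmanr. This is a minimal sketch, assuming equal-length reading sets and a user-chosen threshold rho_s (the value 0.8 below is an assumption, not a value from the text).

```python
from scipy.stats import spearmanr

def are_similar(readings_a, readings_b, rho_s=0.8):
    """Return True if two equal-length reading sets are Spearman-correlated
    above the threshold rho_s (ties receive average ranks, as in the text)."""
    rho, _ = spearmanr(readings_a, readings_b)
    return rho >= rho_s

# Example: two halves of a period whose readings barely change
left  = [21.0, 21.1, 21.1, 21.2]
right = [21.2, 21.3, 21.3, 21.4]
print(are_similar(left, right))  # True for a slowly varying condition
```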
The criterion function used in SK-means to stop the cluster division and obtain the final clusters is the Spearman correlation. Algorithm 2 describes the process of SK-means applied over the readings collected at each period, R_i. First, all the readings are considered similar and R_i is assigned to a temporary set of clusters, i.e., L (line 1). Then, the Spearman correlation between the readings is calculated by dividing them into two equal subsets using the function Partition (lines 2-4). If the correlation exceeds the Spearman threshold, the readings are considered similar; consequently, the average of the readings is computed (i.e., r̄) and added with its weight (i.e., wgt(r̄)) to the final reading set that will be sent to the CH (lines 5-9). Otherwise, i.e., if the correlation does not exceed the Spearman threshold, the readings are considered dissimilar and the K-means algorithm is applied in order to divide them into two clusters (lines 10-14). The process is repeated on the new clusters until all readings within each cluster become similar. Therefore, each sensor sends a reduced set of readings C_i in the form {(r̄_1, wgt(r̄_1)), (r̄_2, wgt(r̄_2)), …, (r̄_k, wgt(r̄_k))} to the CH at the end of each period. (Algorithm 2, line 7: wgt(r̄) = |R_j|, i.e., the total number of elements in R_j.)

| Analytical illustration of SK-means

This section shows the process of the SK-means algorithm using an analytical example (Figure 2). Assume a set R_i consisting of 8 readings (i.e., F = 8) collected during a period. The first step is to divide R_i into two equal partitions, R_i^l and R_i^r, of size 4, using the Partition function. Then, the Spearman correlation is calculated between the partitions; in this example, it indicates that the partitions are not correlated (i.e., ρ < ρ_s). Thus, the K-means algorithm is applied over the set R_i to divide the readings into two clusters, C_1 and C_2. For each cluster, the process of dividing the readings into equal partitions and calculating the correlation between them is repeated; K-means is applied each time low correlation is detected between the readings, until the final clusters are obtained. Finally, the mean values of all clusters are calculated and assigned their weights (i.e., the number of readings in each cluster). Therefore, the reduced set of readings (i.e., C_i) is sent towards the CH.

| CH SAMPLING RATE MODEL

Mostly, the data collected by the sensors are spatially-temporally correlated. On the one hand, the spatial node correlation results from the dense deployment of the sensors along with the random scattering strategy. On the other hand, the temporal node correlation depends on the variation of the monitored condition, which can speed up or slow down, pushing the neighbouring nodes to collect redundant data. Thus, after receiving the data sets coming from the sensors, OM proposes a sampling rate model that allows the CH to search for the spatial-temporal correlations among the sensors in order to adapt their sampling rates for the next period. The objective of the model is to reduce the amount of data collected at the sensors, thus saving their energy, and to minimise the data correlation among neighbouring nodes before sending the data to the sink. Furthermore, the proposed CH sampling model takes into account two features when calculating the new sensing frequency of each sensor: the spatial-temporal node correlation and the remaining energy of each sensor.
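Before turning to the CH-side model, here is a compact Python sketch of the SK-means recursion described above. It is a minimal illustration, assuming scipy and scikit-learn are available and that a period's readings form a one-dimensional list; the weight of each output reading is the size of its cluster, and the rho_s value is an assumed threshold.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.cluster import KMeans

def sk_means(readings, rho_s=0.8):
    """Recursively split a period's readings until each cluster is
    internally similar (Spearman rho >= rho_s between its two halves);
    returns [(mean_reading, weight), ...] to send to the CH."""
    readings = np.asarray(readings, dtype=float)
    if len(readings) < 4:               # too few points to partition and test
        return [(readings.mean(), len(readings))]
    half = len(readings) // 2
    rho, _ = spearmanr(readings[:half], readings[half:2 * half])
    if np.isnan(rho):                   # constant halves: trivially similar
        rho = 1.0
    if rho >= rho_s:                    # similar: compress to one weighted mean
        return [(readings.mean(), len(readings))]
    # dissimilar: split into two clusters with K-means and recurse
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(readings.reshape(-1, 1))
    if len(set(labels)) < 2:            # degenerate split: stop and compress
        return [(readings.mean(), len(readings))]
    return (sk_means(readings[labels == 0], rho_s)
            + sk_means(readings[labels == 1], rho_s))
```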
| Spatial-based node correlation

In large-zone sensing applications, a massive number of nodes must be deployed to ensure full zone coverage and to maintain a high reliability level for the collected data. In addition, in some harsh and hostile zones, the nodes are scattered over the target zone in a random manner. This leads to a specific spatial correlation degree among the deployed nodes: the smaller the distance between two nodes, the higher their spatial correlation degree, and vice versa. Two sensor nodes are then considered spatially correlated if the geographical distance between them is less than a defined threshold. Let us first define each node N_i by the following 4-tuple: N_i = {x_i, y_i, S_r, T_r}, where x_i and y_i indicate the position of N_i, S_r is its sensing range, and T_r is its transmission range (Figure 3). Thus, two nodes N_i and N_j are considered spatially correlated if the overlap of their sensing ranges is sufficiently large, that is, if the distance between them does not exceed a threshold α:

E_d(N_i, N_j) ≤ α, where E_d(N_i, N_j) = sqrt((x_i − x_j)² + (y_i − y_j)²)   (3)

Here, α is the threshold on the sensing range intersection between two nodes, which lies in [0, 2 × S_r]; 0 indicates that both sensors monitor the same zone area, while 2 × S_r indicates no spatial correlation between the nodes. E_d is the geographical Euclidean distance between N_i and N_j.

FIGURE 3 Spatial correlation between nodes.

| Temporal-based node correlation

The temporal correlation among nodes aims to find the similarities between their collected data, whether or not they are spatially correlated. However, a closer geographical distance between nodes, or a low variation of the monitored zone, can increase the similarity between the nodes' data. One can find several functions that allow searching for similarity among data sets, such as Jaccard, Dice, and Cosine. Here, the authors focus on the Jaccard similarity as one of the most widely used and best-adapted functions across several domains. Then, in order to calculate the temporal correlation between two data sets C_i and C_j sent from two nodes N_i and N_j during a period, a score table (ST) is defined first. The ST is a customisable guide, defined by the end-user or an expert, that aims to determine the criticality of the readings collected about a condition. Thus, the ST allows early detection of a critical situation and alerts the end-user as fast as possible. Typically, the ST defines a normal range of readings, e.g., [r_i, r_j], indicating that the monitored condition is in a normal situation. Readings in the normal range are assigned a score of 0. Then, the further the readings are from the normal range, the higher their criticality (or score). Table 1 shows the ST that determines the criticality of the captured readings. The ST first determines the normal range of readings; it then defines a threshold ɛ in order to quantify the deviation of a reading from the normal range (readings can fall below or above the normal range). ɛ is a user-defined threshold determined according to the application requirements. The score of a reading can take a value between 0 and 3, where 3 indicates a highly critical reading with respect to the normal range.

TABLE 1 Score table
Score 0: reading within the normal range [r_i, r_j]
Score 1: reading deviates from the normal range by at most ɛ (below or above)
Score 2: reading deviates from the normal range by more than ɛ and at most 2ɛ (below or above)
Score 3: reading deviates from the normal range by more than 2ɛ (below or above)
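A small Python sketch of the score table logic follows; the boundary layout is an assumption consistent with the ɛ-based deviations described above, not a verbatim copy of Table 1.

```python
def reading_score(r, lo, hi, eps):
    """Score a reading r against the normal range [lo, hi] using the
    user-defined deviation threshold eps (assumed layout: one eps band
    per criticality level, capped at a score of 3)."""
    if lo <= r <= hi:
        return 0
    deviation = (lo - r) if r < lo else (r - hi)
    if deviation <= eps:
        return 1
    if deviation <= 2 * eps:
        return 2
    return 3

# Example: temperature normal range [18, 25] degC with eps = 1
for t in (20.0, 17.5, 16.5, 30.0):
    print(t, reading_score(t, 18.0, 25.0, 1.0))  # scores 0, 1, 2, 3
```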
Based on the ST, the CH calculates the temporal correlation, according to the Jaccard similarity, between two data sets C_i and C_j collected by nodes N_i and N_j, respectively, as follows:

• For each reading set, the CH calculates its score set. For instance, the score set O_i of C_i is:

O_i = {(o_1, wgt(o_1)), (o_2, wgt(o_2)), …, (o_k, wgt(o_k))}

where o_t is the score of the mean reading r̄_t, wgt(o_t) = wgt(r̄_t), and o_t ∈ [0, 3].

• The Jaccard similarity is calculated based on the reading scores of both sets:

J(O_i, O_j) = Σ_t min(|wgt(o_t^i)|, |wgt(o_t^j)|) / Σ_t max(|wgt(o_t^i)|, |wgt(o_t^j)|)

where |wgt(o_t^i)| is the total weight of readings having score t in O_i.

• N_i and N_j are temporally correlated if the Jaccard similarity between their score sets is greater than a threshold t_J:

J(O_i, O_j) ≥ t_J   (4)

where t_J takes a value in [0, 1].

| Degree of node correlation

In order to adapt the sampling rate of a sensor, the CH searches for its correlation degree with other nodes. The correlation degree, denoted D_i, of a node N_i represents the set of neighbour nodes that are spatially-temporally correlated with N_i. Based on Equations 3 and 4, two nodes N_i and N_j are considered spatial-temporally correlated if they are geographically close and their generated data sets C_i and C_j are similar, that is:

E_d(N_i, N_j) ≤ α and J(O_i, O_j) ≥ t_J

Therefore, the correlation degree of the node N_i can be defined as:

D_i = {N_j | N_i and N_j are spatial-temporally correlated}

It is also assumed that |D_i| is the number of nodes in D_i.

| Sampling rate algorithm

In addition to the correlation degree of each node, the CH takes into account the remaining energy of the node in order to adapt its sampling rate. The intuition is that if a node has a high correlation degree and its battery level is low compared to its correlated nodes, then its sampling rate should be decreased, and vice versa. Consequently, the energy of the sensor is conserved and the redundancy among the data collected by neighbouring nodes is minimised. Algorithm 3 shows the sampling rate model applied at the CH after receiving the data sets from the sensors at each period. The algorithm takes the initial sampling rate of a sensor (i.e., the period size F), its initial energy (i.e., E_i), and its correlation degree as input. The algorithm then calculates, as output, the new sampling rate of the sensor, i.e., S_t, for the next period. Furthermore, OM defines a threshold known as the minimum sampling rate, S_min, which accounts for the criticality of the monitored application; S_min takes a value between 0% and 100% of the period size, where a value of S_min close to 0% indicates a less critical application and a value close to 100% indicates a more critical application. Moreover, an energy sampling threshold β% is defined, indicating the percentage of sampling rate that the sensor must add to, or remove from, its current sampling rate depending on the energy levels of its correlated nodes; β% takes a value in [0%, 100%]. The process of Algorithm 3 starts by initialising the sampling rate of the sensor to its maximum, i.e., the period size (line 1). After that, if the node has a spatial-temporal correlation with other nodes, its sampling rate is reduced according to its correlation degree (lines 2-3). In addition, for each correlated node N_j ∈ D_i, if the remaining energy of N_i is less (respectively, more) than that of N_j, then the sampling rate of N_i is further reduced (respectively, increased) by β% (lines 4-11).
Finally, if the new sampling rate of N_i is less than the minimum sampling rate determined for the application, then the new sampling rate of the sensor is set to S_min (lines 12-14). A Python sketch reconstructing this procedure is given at the end of the article.

Algorithm 3 Sampling rate algorithm
Require: A node N_i; a period size F; initial energy E_i; a set of correlated nodes D_i; a minimum sampling rate S_min; an energy sampling threshold β.
Ensure: New sensor sampling rate S_t.
1: …

| SIMULATION RESULTS

In order to evaluate the performance of OM, multiple series of simulations were performed using real sensor data collected from the Intel Berkeley Research Laboratory [37]. These data contain readings from 46 Mica2Dot sensors with weather boards collecting humidity, temperature, light, and voltage values. For the sake of simplicity, the simulation focuses only on the temperature field. Every 31 s, each Mica2Dot sensor collects a new reading for each feature and sends it towards the sink for archival purposes. In the simulation, a file that includes a log of about 50,000 readings for each sensor is used. It is assumed that each sensor reads the data from its corresponding file for a period of time, and then sends them toward a CH placed at the centre of the lab after applying the proposed mechanism. Figure 4 shows the geographical distribution of the Mica2Dot sensors in the Intel lab, where each sensor takes an Id from 1 to 54 (yellow signs indicate failed sensors). The algorithms used in OM are implemented in a Java-based simulator, and the obtained results are compared to those obtained with PFF [25] and S-LEC [15]. Table 2 summarises the parameters used in the simulation with their tested values. Furthermore, the ɛ threshold used in the score table is set to 1; the customisable score table adapted to the temperature readings is shown in Table 3.

| Reading score study

Figure 5 shows the reading values collected by five randomly selected sensors (Figure 5a) with their calculated scores (Figure 5b) according to Table 3. The obtained results show the following observations: (1) the temperature condition in the Intel lab changes very slowly, as reflected in the high redundancy existing among the data collected by the sensors; (2) the spatial-temporal correlation among neighbouring nodes is high, for instance between sensors 1 and 2, or sensors 3 and 4; (3) spatial correlation between sensors does not always lead to temporal correlation among the collected data: for instance, sensors 1 and 2 are spatially correlated and generate similar data until period number 500, after which the collected data become dissimilar; (4) sensors that are not spatially correlated can sometimes have a temporal correlation: for instance, sensors 2 and 5 are not geographically close but, starting from period number 700, they collect redundant data; and (5) the correlation among the nodes can also be verified through the similar scores calculated for the data collected by correlated nodes; the reading scores of sensors 1 and 2 mostly vary between 0 and 2, while those of sensors 3 and 4 are between 2 and 3. Thus, the criticality of the temperature condition in the Intel lab can change from one place (i.e., sensor) to another.

| Data transmission ratio at the sensor

Figure 6 shows the percentage of data sent by each sensor to its CH, after applying OM, PFF, and S-LEC, with respect to several parameters (Figure 6a-6f). It is observed that OM can reduce the data transmission to the CH more than PFF and S-LEC in all cases. Subsequently, it allows each sensor to send up to 65% less data than PFF and up to 78% less data than S-LEC.
Furthermore, the obtained results show the following:

• By decreasing the period size, OM allows each sensor to reduce its data transmission to the sink (Figure 6a). This is because, when the period size increases, the redundancy among the collected data decreases; thus, the sensor must increase its data transmission in order to preserve the integrity of the information.

• By increasing the spatial correlation threshold or decreasing the Jaccard similarity threshold, the periodic data transmission from each sensor decreases (Figure 6c,d). This is because the spatial-temporal correlation increases when α increases or t_J decreases; thus, the CH reduces the sampling rates of neighbouring nodes in order to reduce the data redundancy among their collected data.

• By decreasing the minimum sampling rate, OM gives better results in terms of data reduction at the sensor (Figure 6e). This is because the criticality of the monitored application increases when S_min increases, and thus the CH must increase the sensor sampling rate in order to increase the reliability of the decision making.

| Sensor sampling rate study

This section shows the performance of the sampling rate model proposed at the CH level in OM, in terms of adapting the sensing frequency of a sensor based on the spatial-temporal correlation with its neighbouring nodes. The performance is studied based on the variation of the energy sampling threshold (Figure 7a) and the minimum sampling frequency (Figure 7b) over a set of 15 periods. The obtained results show that the sampling rate of the sensor is dynamically adapted after each period, depending on the spatio-temporal correlation between nodes. Although the spatial correlation is fixed by the node deployment, the temporal correlation can differ from one period to another. Furthermore, the following observations are evident:

• By varying the energy sampling threshold (Figure 7a), it is shown that the sampling rate of the sensor is reduced more as the value of β increases. Therefore, with β = 10%, the sensor reduces its sampling rate to the minimum sampling threshold more often than with β = 5%. This confirms the behaviour of OM, which reduces the sampling rate in order to further save energy.

• By varying the minimum sampling threshold (Figure 7b), it is shown that the sensor's sampling rate increases with the value of S_min. For instance, the sampling rate of the sensor varies mostly between 20 and 70 when S_min = 20%, between 30 and 80 when S_min = 30%, and between 40 and 90 when S_min = 40%. This also confirms the behaviour of OM, which increases (respectively, decreases) the sampling rate when the criticality of the application is high (respectively, low).

| Data loss study

Figure 8 shows the percentage of data loss after applying OM and the PFF technique, with respect to several parameters. Indeed, data loss is an essential metric in sensing applications, as it can affect the decisions made by the end-user. In the simulation, a reading is considered lost if a sensor collects it but it is sent to the sink neither by the sensor itself nor by its correlated neighbours. Thus, the percentage of data loss is calculated by dividing the readings lost by all the sensors over the entire raw data. The obtained results show that OM outperforms PFF in terms of maintaining data accuracy in all cases. Subsequently, the percentage of data loss using OM does not reach 4% in the worst case, while it exceeds 6% using PFF.
This is because PFF reduces the data transmission from the sensors based on the temporal correlation only, while OM uses both spatial and temporal correlation, which increases the accuracy of the collected data. Furthermore, it can be observed that the percentage of data loss usually decreases as the amount of transmitted data increases (see Figure 6). Therefore, the data accuracy of OM increases with increasing values of ρ_s and t_J, or decreasing values of F and α.

Figure 9 shows the number of clusters obtained after applying the SK-means algorithm over the periodic data collected by a sensor, for a set of 10 periods. In addition, the figure shows the number of readings assigned to each cluster.
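As a closing illustration, here is a hedged Python sketch of the CH-side sampling rate adaptation. The body of the Algorithm 3 listing did not survive extraction, so this is a reconstruction from the prose description only; in particular, the exact formula for reducing the rate "according to the correlation degree" is an assumption (here, proportional scaling by 1/(|D_i| + 1)).

```python
def adapt_sampling_rate(period_size, energy, correlated_energies,
                        s_min_pct=20.0, beta_pct=5.0):
    """Compute a sensor's sampling rate for the next period (Algorithm 3,
    reconstructed from the prose; the degree-based reduction is assumed).

    period_size: initial sampling rate F (readings per period).
    energy: remaining energy E_i of the sensor.
    correlated_energies: remaining energies of the nodes in D_i.
    s_min_pct: minimum sampling rate S_min, as a % of the period size.
    beta_pct: energy sampling threshold beta, as a % of the period size.
    """
    s_t = float(period_size)                 # line 1: start at the maximum
    d = len(correlated_energies)
    if d > 0:                                # lines 2-3: degree-based cut
        s_t = s_t / (d + 1)                  # assumed proportional rule
    beta = beta_pct / 100.0 * period_size
    for e_j in correlated_energies:          # lines 4-11: energy balancing
        if energy < e_j:
            s_t -= beta
        elif energy > e_j:
            s_t += beta
    s_min = s_min_pct / 100.0 * period_size  # lines 12-14: floor at S_min
    return max(s_t, s_min)

# Example: F = 100 readings/period, two correlated neighbours
print(adapt_sampling_rate(100, energy=0.8, correlated_energies=[0.9, 0.7]))
```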
2021-05-05T00:09:51.628Z
2021-03-10T00:00:00.000
{ "year": 2021, "sha1": "9c7c1754a02d2db3be2f14e8ee55c4934004d4cc", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1049/smc2.12002", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "c22154062d1c0c32cd993011f00cb2af7cce664c", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
267371028
pes2o/s2orc
v3-fos-license
The ultrasound-based radiomics-clinical machine learning model to predict papillary thyroid microcarcinoma in TI-RADS 3 nodules

Background: Conventional ultrasound (CUS) technology has proven to be successful in the identification of thyroid nodules. Moreover, the American College of Radiology Thyroid Imaging Reporting and Data System (ACR TI-RADS) was developed for the purpose of evaluating the risk of thyroid nodules based on ultrasound imaging. Nevertheless, identifying papillary thyroid microcarcinoma (PTMC) among TI-RADS 3 nodules using this system can be difficult due to overlapping morphological features. The main objective of this study was to investigate the efficacy of a machine learning model that utilizes ultrasound-based radiomics features and clinical information in accurately predicting the presence of PTMC in TI-RADS 3 nodules. Methods: A total of 221 patients with TI-RADS 3 nodules were included, consisting of 91 cases of PTMC and 130 benign thyroid nodules. They were randomly divided into training and test cohorts in an 8:2 ratio. Radiomics features were extracted from CUS images by manually outlining the targets, while clinical parameters were obtained from electronic medical records. The radiomics model, clinical model, and combined model were constructed and validated to distinguish between PTMC and benign thyroid nodules. Radiomics variables were extracted via the Pyradiomics package (V1.3.0). Moreover, least absolute shrinkage and selection operator (LASSO) regression was used for feature selection. A Light Gradient Boosting Machine (LightGBM) was employed to build both the radiomics and clinical models. Ultimately, a radiomics-clinical model, which fused radiomics features with clinical information, was developed. Results: Among a total of 1,477 radiomics features, fifteen features that were found to be associated with PTMC through univariate analysis and LASSO regression were selected for the development of the radiomics model. The combined "radiomics-clinical" model demonstrated superior diagnostic accuracy compared to the clinical model for distinguishing PTMC in both the training dataset [area under the receiver operating curve (AUC): 0.975 vs. 0.845] and the validation dataset (AUC: 0.898 vs. 0.811). We constructed a radiomics-clinical nomogram, and its clinical applicability was confirmed through decision curve analysis. Conclusions: Utilizing an ultrasound-based radiomics approach has proven to be effective in predicting PTMC in patients with TI-RADS 3 nodules.

Introduction

Thyroid carcinoma is the most common malignant tumor of the endocrine system. Over the years, there has been a gradual rise in its occurrence, making it a subject of growing concern within the medical and scientific communities. Presently, the incidence of thyroid carcinoma has surpassed that of all other malignant tumors in terms of its rate of increase (1). This escalation can largely be attributed to papillary carcinoma, particularly in its early stages (2)(3)(4). Another significant factor contributing to the upsurge in thyroid carcinoma cases is the higher occurrence of papillary thyroid microcarcinoma (PTMC), which comprises carcinomas measuring 1.0 cm or less (3). Thus, the study of PTMC has gained increasing importance.
The small size of the cancer, its subtle onset, slow progression, lack of apparent clinical symptoms, and frequent co-occurrence with other thyroid disorders make accurate preoperative diagnosis difficult, resulting in a degree of both underdiagnosis and misdiagnosis. Despite concerns regarding ultrasound's ability to differentiate between benign and malignant thyroid nodules, it remains the preferred imaging technique for evaluating the morphological features of thyroid nodules. This is due to its numerous advantages, including high resolution, lack of ionizing radiation, portability, and ease of use (5)(6)(7). Recently, several ultrasound centers have embraced the Thyroid Imaging Reporting and Data System (TI-RADS) criteria to evaluate benign and malignant thyroid nodules. However, due to significant overlap in the sonographic morphological features of both types of nodules, some thyroid nodules categorized as TI-RADS 3 are later confirmed to be PTMC through pathological examination (8). Furthermore, the interpretation of conventional ultrasound (CUS) characteristics is subjective and operator-dependent, which leads to differences in readings among individuals (9). A prior study has shown that the risk of malignancy for TI-RADS 3 thyroid nodules is 2.1% (10).

Ultrasound-guided fine-needle aspiration biopsy (FNAB) is widely utilized as the primary diagnostic approach for thyroid nodules. It is recognized as a highly efficient, uncomplicated, and secure technique for detecting head and neck anomalies, notably thyroid nodules (11)(12)(13). Research has revealed that diagnosing PTMC via FNAB of thyroid nodules can be difficult. In nodules smaller than 1 cm, there is a relatively high incidence of false-negative outcomes due to inadequate cytology samples (14,15). Moreover, the rate of metastasis of PTMC to the cervical lymph nodes has been reported to be high (16)(17)(18). Importantly, lymph node metastasis is closely associated with PTMC recurrence. Additionally, certain patients may experience distant metastases to the lungs or bones (19). Therefore, the early and accurate diagnosis of PTMC holds immense importance, and there is an urgent need for a reliable and non-invasive method to classify and identify PTMC.

Radiomics is a rapidly developing research field that incorporates computational methods (9). Radiomics harnesses the extensive and intricate digital data obtained from imaging modalities to uncover a plethora of quantitative disease features that may not be visible to the human eye (20)(21)(22). This powerful technique allows us to extract and analyze a wide range of intricate details, providing valuable insights into the disease process. CUS-based radiomics has demonstrated good diagnostic performance for various diseases such as ovarian epithelial cancer, intrahepatic cholangiocarcinoma, and breast cancer (23)(24)(25). The integration of clinical information and radiomics may further improve diagnostic performance. Hence, the principal aim of this study was to assess the efficacy of CUS-based radiomics for distinguishing between benign thyroid nodules and PTMC. We present this article in accordance with the TRIPOD reporting checklist.

Key findings
• Ultrasound-based radiomics is effective in predicting papillary thyroid microcarcinoma (PTMC) in patients with Thyroid Imaging Reporting and Data System (TI-RADS) 3 nodules.

What is known and what is new?
• Ultrasound-based radiomics has demonstrated good diagnostic performance for various diseases such as ovarian epithelial cancer, intrahepatic cholangiocarcinoma, and breast cancer.
• This is the first study to investigate the potential of ultrasound-based radiomics for predicting PTMC in patients with TI-RADS 3 nodules.

What is the implication, and what should change now?
• The application of a radiomics approach to ultrasound images can effectively predict PTMC in patients with thyroid nodules with a TI-RADS score of 3. Incorporating radiomics features into clinical variables can improve the accuracy of PTMC prediction compared to using clinical variables alone. Further investigation is needed to test the value of our findings in a broader multicenter patient sample.

Methods

Patient enrollment and data acquisition

From January 2019 to October 2022, a retrospective study was conducted on 221 patients who had TI-RADS 3 nodules and were pathologically diagnosed with PTMC or benign thyroid nodules. The exclusion criteria included: … (III) patients with a history of other malignancies or coexisting malignancies (Figure 1).

CUS examination and interpretation of CUS features

All ultrasound examinations were carried out by board-certified radiologists with a minimum of five years of experience in ultrasound imaging of superficial tissue, using ultrasound machines including the Resona 7 (Mindray, Shenzhen, China), Aplio 500 (Toshiba Medical Systems, Tokyo, Japan), and MyLab 90 X-Vision (ESAOTE, Genoa, Italy) with corresponding high-frequency probes. Images of the largest long-axis cross-section of the target nodules were obtained for subsequent analysis. Two experienced radiologists (each with over five years of experience in thyroid sonography) independently reviewed all images without knowledge of the clinical information or final diagnoses. The reinterpreted CUS features included tumor dimension, echotexture (homogeneous or heterogeneous), echogenicity (hypoechoic, iso-/hyperechoic, or mixed), margin (well-defined or ill-defined), presence of calcification (absent, macrocalcification, or microcalcification), and aspect ratio (>1 or ≤1).

Radiomics analysis

The radiomics analysis process consisted of segmenting the lesions, extracting features, selecting relevant features, and constructing a model. Two blinded radiologists manually segmented the regions of interest to ensure accuracy. Prior to feature extraction, intensity normalization was performed. The patients were then randomly divided into training and test cohorts in an 8:2 ratio. This division provided a suitable dataset for training the model and evaluating its performance.

Segmenting the lesions and extracting radiomics features

The radiomics analysis workflow is shown in Figure 2.

Feature selection and radiomics model establishment

To analyze our data statistically, we utilized different tests based on the distribution of the features. For features that followed a normal distribution, we conducted a Student's t-test; for features that did not exhibit a normal distribution, we employed the Mann-Whitney U test. We set the significance level at 0.05 and retained only those features with a P value below this threshold. This rigorous approach allowed us to identify statistically significant features that could potentially contribute to the study findings. To ensure the reliability of our analysis, we examined the repeatability of the features. For features that exhibited high repeatability, we calculated the correlation between them using Spearman's rank correlation coefficient.
If the correlation coefficient between any two features was greater than 0.9, we retained only one of them to avoid redundancy. To further enhance the comprehensiveness of our feature set, we implemented a greedy recursive deletion strategy for feature filtering, iteratively removing the feature with the highest redundancy in the current set. After this process, we retained fifteen features. To construct the signature, we applied the least absolute shrinkage and selection operator (LASSO) regression model to the discovery dataset. The LASSO method shrinks regression coefficients towards zero according to the regularization weight λ and forces the coefficients of irrelevant features to be exactly zero. To determine the optimal λ, we conducted ten-fold cross-validation using the minimum criteria and selected the value of λ that resulted in the lowest cross-validation error. The features with nonzero coefficients were used to fit the regression model and were combined to form a radiomics signature. To calculate the radiomics score for each patient, we computed a linear combination of the retained features weighted by their respective model coefficients. The LASSO regression modeling was conducted using the Python scikit-learn package.

Following the LASSO feature screening, we selected the final set of features to be used for constructing the risk model. To achieve this, we employed the Light Gradient Boosting Machine (LightGBM) model. To ensure the reliability and generalizability of our model, we implemented five-fold cross-validation. By averaging the results across the five folds, we obtained the final radiomics signature.

The building of the clinical model and radiomics-clinical model

All of the thyroid nodules classified as TI-RADS 3 had well-defined margins, no calcifications, and an aspect ratio of ≤1. We therefore selected only the ultrasound features of tumor dimension, echotexture (homogeneous or heterogeneous), and echogenicity (hypoechoic, iso/hyperechoic, or mixed) to differentiate between benign thyroid nodules and PTMC. The term "tumor dimension" refers to the measurement of the largest long-axis cross-section of the target thyroid nodule on the image, indicating its diameter. The term "echotexture" refers to the overall appearance and texture of an ultrasound image. It describes the patterns and characteristics observed within tissues or structures visualized through ultrasound scans, such as the level of echogenicity, homogeneity, and the presence of any abnormalities or variations. In this study, we focused on whether the echotexture of thyroid nodules was homogeneous or heterogeneous. In brief, the term "echogenicity" refers to the ability of a tissue or structure to reflect ultrasound waves. In this study, echogenicity was defined relative to the surrounding parenchyma and categorized as hypoechoic, isoechoic, hyperechoic, or mixed. The process of constructing the clinical model closely resembled that of the radiomics model. The selection of features for the clinical model was based on baseline statistics, considering features with a P value <0.05. Additionally, the same machine learning model was employed in both the radiomics and clinical model building procedures. To ensure fairness in comparison, we maintained a fixed test cohort and implemented five-fold cross-validation during the process.
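As a rough illustration of the feature-selection and model-building pipeline described above, the following Python sketch combines Spearman-based redundancy filtering, LASSO selection with ten-fold cross-validation, and a five-fold cross-validated LightGBM classifier. It is a minimal reconstruction rather than the authors' code: the synthetic data, variable names, and the exact calls (scikit-learn's LassoCV, lightgbm's LGBMClassifier) are assumptions, although the paper does state that the LASSO modeling used scikit-learn and that the risk model was LightGBM.

import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(221, 100)))   # placeholder for the 1,477 radiomics features
# Synthetic labels (0 = benign, 1 = PTMC) that depend on the first two features.
y = (X[0] + 0.5 * X[1] + rng.normal(scale=0.5, size=221) > 0).astype(int)

# 1) Redundancy filtering: drop one feature of any pair with |Spearman rho| > 0.9.
rho, _ = spearmanr(X)
keep = []
for j in range(X.shape[1]):
    if all(abs(rho[j, k]) <= 0.9 for k in keep):
        keep.append(j)
X = X.iloc[:, keep]

# 2) LASSO with ten-fold CV; features with nonzero coefficients form the signature.
lasso = LassoCV(cv=10, random_state=0).fit(X, y)
selected = X.columns[lasso.coef_ != 0]
rad_score = X[selected] @ lasso.coef_[lasso.coef_ != 0]   # linear combination = Rad-score

# 3) Five-fold cross-validated LightGBM on the selected features.
clf = LGBMClassifier(random_state=0)
auc = cross_val_score(clf, X[selected], y, cv=5, scoring="roc_auc")
print("mean CV AUC:", auc.mean())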
To efficiently evaluate the prognostic significance of the radiomics signature alongside clinical risk factors, we introduced a radiomics nomogram on the validation dataset. Utilizing logistic regression analysis, the nomogram was developed by combining the radiomics signature with clinical risk factors.

Statistical analysis

To evaluate the comparability of patient characteristics among the cohorts, we conducted independent t-tests for normally distributed data and used Mann-Whitney U tests for non-normally distributed data, expressed as medians (interquartile range). Chi-squared tests were utilized to analyze the categorical variables. To assess the predictive performance of the models, we employed several evaluation metrics. First, we constructed receiver operating characteristic (ROC) curves, which illustrate the trade-off between sensitivity and specificity at various classification thresholds. We calculated the area under the receiver operating characteristic curve (AUC), which serves as an indicator of the model's discriminatory ability. Additionally, we determined the balanced specificity and sensitivity at the cut-off point that maximized the Youden index. To ensure the generalizability of the models, we evaluated their performance in both the training and test cohorts. To compare the AUCs of the three models, we used the Delong test. To assess the clinical utility of the three models, decision curve analysis (DCA) was employed. We conducted all statistical analyses using SPSS (version 21.0; IBM Corp., Armonk, NY, USA), with statistical significance defined as a two-sided P value ≤0.05.

Baseline characteristics of patients

A total of 130 patients with benign thyroid nodules and 91 patients with PTMC were enrolled in the study. We conducted independent sample t-tests, Mann-Whitney U tests, or Chi-squared tests, as appropriate, to compare the clinical characteristics of the patients. Tables 1 and 2 display the baseline characteristics of patients.

Establishment and evaluation of the radiomics model

A total of 1,477 handcrafted features were extracted across six categories, comprising 309 first-order features, 13 shape features, and the remaining texture features (Figure 3A); a detailed list of the handcrafted features is available at https://cdn.amegroups.cn/static/public/tcr-23-1375-1.xlsx. To extract all handcrafted features, we utilized an in-house feature analysis program implemented in Pyradiomics. For more information on Pyradiomics, please refer to its documentation at http://pyradiomics.readthedocs.io. Figure 3B displays all features and their corresponding P values.

We used a LASSO logistic regression model to select the nonzero coefficients for establishing the Rad-score. Figure 3C,3D illustrates the coefficients and mean standard error (MSE) resulting from ten-fold validation. Following the selection process, a total of fifteen features retained a nonzero coefficient value. The details of these features are shown in Figure 4. After selecting features with nonzero coefficients, we utilized the LightGBM model to analyze and construct a radiomics signature, also referred to as the radiomics score (Rad-score). Within the training cohort, the model exhibited an AUC of 0.974 [95% confidence interval (CI): 0.954-0.994], accompanied by a sensitivity of 0.901 and a specificity of 0.962. In the test cohort, the AUC was 0.867 (95% CI: 0.757-0.977), and the sensitivity and specificity were 0.900 and 0.760, respectively (Figure 5, Table 3).
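The ROC/AUC evaluation and the Youden-index cut-off used throughout the results above can be reproduced with a few lines of scikit-learn. This is a generic sketch under assumed inputs (the label and score arrays are placeholders), not the authors' SPSS workflow.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)                     # placeholder labels
y_score = y_true * 0.8 + rng.normal(scale=0.5, size=200)  # placeholder model scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

# Youden index J = sensitivity + specificity - 1 = tpr - fpr;
# the reported operating point is the threshold that maximizes J.
j = tpr - fpr
best = np.argmax(j)
print(f"AUC = {auc:.3f}")
print(f"cut-off = {thresholds[best]:.3f}, "
      f"sensitivity = {tpr[best]:.3f}, specificity = {1 - fpr[best]:.3f}")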
Establishment and performance of the clinical model and radiomics-clinical model

We employed a significance level (P value ≤0.05) to identify the characteristics in the training cohort that would be used for constructing the clinical model. Only tumor dimension, echogenicity, and echotexture met this condition, and therefore these features were used to build the clinical model (Table 2). In the training cohort, the clinical model demonstrated a balanced sensitivity of 0.704 and specificity of 0.848, resulting in an AUC of 0.845 (95% CI: 0.785-0.905). In the test cohort, the AUC was 0.811 (95% CI: 0.667-0.955), with a sensitivity of 0.600 and specificity of 1.000 (Figure 5, Table 3).

The radiomics-clinical model had an AUC of 0.975 (95% CI: 0.954-0.996) in the training cohort. The model exhibited a balanced sensitivity of 0.887 and a specificity of 0.981. In the test cohort, the AUC was 0.898 (95% CI: 0.791-1.000), with a sensitivity of 0.900 and a specificity of 0.840 (Figure 5, Table 3).

We constructed a nomogram based on the radiomics-clinical model (Figure 6). Figure 7 displays the decision curve analysis for the clinical model, radiomics model, and radiomics-clinical nomogram. In this study, in comparison to scenarios where no prediction model was utilized (i.e., the treat-all or treat-none scheme), both the radiomics-clinical model and the radiomics model consistently exhibited significant advantages in the majority of cases (Figure 7). The AUCs of the models were compared using the Delong test. In the test cohort, there was a statistically significant difference in AUC between the radiomics-clinical model and the clinical model (P=0.01). Nevertheless, the comparison of AUC between the radiomics model and the radiomics-clinical model in the test cohort did not yield a statistically significant difference (P=0.49).

Discussion

There has been a notable surge in the incidence of PTMC in recent years, resulting in a sharp increase in its morbidity rates. Although the surgical treatment of PTMC remains controversial, lymph node metastasis in PTMC is indisputable and the rate of metastasis is high. For instance, lymph node metastases occur in as many as 24-64% of cases and are strongly linked to recurrence (26). The gravity of the situation demands utmost diagnostic accuracy. However, PTMC's low diagnostic precision makes it vulnerable to frequent misdiagnosis or even to being overlooked (27,28). Many thyroid nodules with a TI-RADS score of 3 are pathologically confirmed as PTMC.

The rapid advancement of radiomics has created new opportunities for radiologists to evaluate tumor characteristics in a high-throughput manner. By detecting features that may not be visible to the human eye, radiomics offers the potential to provide non-invasive assessment and achieve more accurate tumor characterization. By quantifying images and analyzing the extracted information in depth, radiomics could potentially address the diagnostic challenge of PTMC by identifying previously undetected features.
To the best of our knowledge, this is the first study to investigate the potential of CUS-based radiomics for predicting PTMC in patients with TI-RADS 3 nodules. According to univariate and multivariate analysis, this study observed associations of the clinical variables tumor dimension, echotexture, and echogenicity with PTMC. Subsequently, we developed three predictive models utilizing clinical variables, radiomics features, and a fusion of both. In the validation datasets, we achieved an AUC of 0.867 with radiomics features alone in the differentiation of PTMC from TI-RADS 3 nodules. The amalgamation of radiomics features with clinical variables led to enhanced performance of the radiomics-clinical model, resulting in an AUC of 0.898. In contrast, the clinical model had a lower AUC value of 0.811, which was significantly different from that of the radiomics-clinical model. The predominant factors contributing to this phenomenon are the subjective, operator-dependent nature of interpreting CUS features, as well as the variation in ultrasound image quality across different machines. Clinical variables alone are insufficient to distinguish PTMC from TI-RADS 3 nodules.

The study conducted by Zhang et al. revealed that ultrasound alone achieved an AUC of 0.728 in diagnosing PTMC, while Gao et al. examined the diagnostic capabilities of ultrasound-guided FNAB for PTMC, which exhibited remarkable performance with an AUC of 0.947 (29,30). Nevertheless, the latter entails an invasive procedure. Our current study demonstrated that the radiomics-clinical model achieved an AUC of 0.898. These results suggest that the radiomics-clinical model could be a preferable alternative to ultrasonography alone or to invasive FNAB examinations.

The aforementioned findings indicate that the radiomics-clinical model has the potential to serve as a valuable and quantitative tool for PTMC prediction. In a previous study, diagnostic performance was categorized into three levels based on the AUC value: low performance (AUC = 0.5-0.7), moderate performance (AUC = 0.7-0.9), and high performance (AUC >0.9) (31). Although both the radiomics-clinical model and the radiomics model produced excellent predictive results, the combined model did not perform better than the radiomics model. This suggests that radiomics may be the best predictive factor for PTMC. Furthermore, we constructed a nomogram derived from the radiomics-clinical model to facilitate clinical decision-making.

This research possesses several limitations that should be acknowledged. Firstly, being a retrospective study, there is a possibility of selection bias. Secondly, the study was conducted in a single center, warranting multicenter studies with larger patient populations to validate the findings. Thirdly, in this retrospective study, we focused solely on the development of a CUS-based radiomics model, because the data pertaining to advanced CUS techniques such as contrast-enhanced ultrasound (CEUS) and ultrasound elastography were incomplete. We are confident that additional studies incorporating both radiomics and multimodal ultrasound techniques may demonstrate enhanced diagnostic performance.
Conclusions

Building an effective machine learning model based on radiomics and clinical information for distinguishing between benign thyroid nodules and PTMC in cases with a TI-RADS score of 3 is crucial for clinical practice. Based on our findings, it can be concluded that the utilization of a radiomics approach applied to ultrasound images provides an effective means of predicting the presence of PTMC in patients presenting with thyroid nodules having a TI-RADS score of 3. Incorporating radiomics features into clinical variables can improve the accuracy of PTMC prediction compared to using clinical variables alone. To validate the significance of our findings, it is imperative to pursue further investigations on a larger and more diverse patient sample, encompassing multiple centers.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). This study was approved by the institutional ethics committee of the Second Affiliated Hospital of Wenzhou Medical University (No. 2023-K-43-01) and individual consent for this retrospective analysis was waived. However, written consent was obtained from each patient before surgery or biopsy.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.

Figure 3 Radiomics features selection. (A) Handcrafted features were extracted from ROIs, including first-order features, shape features, and texture (GLRLM, GLSZM, GLDM, GLCM) features. The ratio of handcrafted features is presented. (B) All radiomics features and their corresponding P value results. (C,D) To obtain the optimal penalty coefficient lambda in the LASSO model, a ten-fold cross-validation and minimum criteria procedure were employed. GLSZM, gray-level size zone matrix; GLCM, gray-level co-occurrence matrix; GLRLM, gray-level run length matrix; GLDM, gray-level dependence matrix; MSE, mean standard error; ROI, region of interest; LASSO, least absolute shrinkage and selection operator.

Figure 5 Diagnostic performance of different models. The AUC of the radiomics model, clinical model, and radiomics-clinical model (nomogram) in the training cohort (A) and test cohort (B). AUC, area under receiver operating curve; CI, confidence interval.

Figure 7 Decision curves of different models. The DCA of three models in the training cohort (A) and test cohort (B). The vertical axis represents the net benefit, and the horizontal axis represents different risk thresholds. DCA, decision curve analysis.

Table 1 Baseline clinical information of all patients. Data presentation: the number of patients is presented as n (%), while the age and tumor diameter in ultrasound images are displayed as mean ± standard deviation.
Table 2 Baseline clinical information of patients in the training cohort.

Table 3 Predictive performance of three models in the training and test cohorts. Sensitivity and specificity were assessed at the cut-off value that yielded the maximum Youden index. AUC, area under receiver operating curve; CI, confidence interval; Sen, sensitivity; Spe, specificity; Rad-clinic model, radiomics-clinical model.
2024-02-02T16:12:50.612Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "2166188ad72d41ff64ce24a76bd6bf7146b355b4", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.21037/tcr-23-1375", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aa322a2febce1ed3cde094db3788766546897d0b", "s2fieldsofstudy": [ "Medicine", "Engineering", "Computer Science" ], "extfieldsofstudy": [] }
119334909
pes2o/s2orc
v3-fos-license
Closed-form formulas for calculating the extremal ranks and inertias of a quadratic matrix-valued function and their applications

This paper presents a group of analytical formulas for calculating the global maximal and minimal ranks and inertias of the quadratic matrix-valued function $\phi(X) = (\,AXB + C\,)M(\,AXB + C\,)^{*} + D$ and uses them to derive necessary and sufficient conditions for the two types of multiple quadratic matrix-valued function
\begin{align*}
\Big(\,\sum_{i=1}^{k}A_iX_iB_i + C\,\Big)M\Big(\,\sum_{i=1}^{k}A_iX_iB_i + C\,\Big)^{*} + D, \qquad \sum_{i=1}^{k}(\,A_iX_iB_i + C_i\,)M_i(\,A_iX_iB_i + C_i\,)^{*} + D
\end{align*}
to be semi-definite, respectively, where $A_i$, $B_i$, $C_i$, $C$, $D$, $M_i$ and $M$ are given matrices with $M_i$, $M$ and $D$ Hermitian, $i = 1, \ldots, k$. Löwner partial ordering optimizations of the two matrix-valued functions are studied and their solutions are characterized.

Introduction

This is the third part of the present author's work on quadratic matrix-valued functions and their algebraic properties. A matrix-valued function for complex matrices is a map between two matrix spaces C^{m×n} and C^{p×q}, which can generally be written as Y = f(X) for Y ∈ C^{m×n} and X ∈ C^{p×q}, or briefly, f : C^{m×n} → C^{p×q}. As usual, linear and quadratic matrix-valued functions, as common representatives of various matrix-valued functions, are extensively studied from theoretical and applied points of view. In this paper, we consider the two types of multiple quadratic matrix-valued function displayed in the abstract, focusing on the quadratic Hermitian matrix-valued function

φ(X) = (AXB + C)M(AXB + C)^{*} + D, (1.3)

where A ∈ C^{n×p}, B ∈ C^{m×q}, C ∈ C^{n×q}, D ∈ C^{n}_{H} and M ∈ C^{q}_{H} are given, and X ∈ C^{p×m} is a variable matrix. We treat (1.3) as a combination φ = τ ∘ ρ of the following two simple linear and quadratic Hermitian matrix-valued functions:

ρ : C^{p×m} → C^{n×q}, ρ(X) = AXB + C; τ : C^{n×q} → C^{n}_{H}, τ(Y) = YMY^{*} + D. (1.4)

For different choices of the given matrices, this quadratic function between matrix spaces includes many ordinary quadratic forms and quadratic matrix-valued functions as its special cases, such as x^{*}Ax, XAX^{*}, DXX^{*}D^{*}, (X − C)M(X − C)^{*}, etc. It is well known that quadratic functions in elementary mathematics and ordinary quadratic forms in linear algebra have a fairly complete theory with a long history and numerous applications. Much of the beauty of these quadratic objects was highly appreciated by mathematicians in every era, and many of the fundamental ideas of quadratic functions and quadratic forms were developed in all branches of mathematics. While the mathematics of classic quadratic forms has been established for about a century and a half, various extensions of classic quadratic forms to more general settings have been pursued from theoretical and applied points of view; in particular, quadratic matrix-valued functions and the corresponding quadratic matrix equations and quadratic matrix inequalities often appear when solving a variety of problems in mathematics and applications. These quadratic objects have many attractive features from both a manipulative and a computational point of view, and there is intensive interest in studying the behavior of quadratic matrix-valued functions, quadratic matrix equations and quadratic matrix inequalities. In fact, any essential development in the research on quadratic objects will lead to progress in both mathematics and applications. Compared with the theory of ordinary quadratic functions and forms, two distinctive features of quadratic matrix-valued functions are the freedom of the entries in the variable matrices and the non-commutativity of matrix algebra.
As a result, there is no general theory for describing the behavior of a given quadratic matrix-valued function with multiple terms. In particular, solving an optimization problem on a quadratic matrix-valued function is believed to be NP-hard in general, and thus there is a long way to go to establish a complete theory of quadratic matrix-valued functions. In recent years, Tian conducted a seminal study of quadratic matrix-valued functions in [8, 10], which gave an initial quantitative understanding of the nature of matrix rank and inertia optimization problems; in particular, a simple and precise linearization method was introduced for studying quadratic or nonlinear matrix-valued functions, and many explicit formulas were established for calculating the extremal ranks and inertias of some simple quadratic matrix-valued functions. For applications of quadratic matrix-valued functions, quadratic matrix equations and quadratic matrix inequalities in optimization theory and in system and control theory, see the references given in [8, 10].

Throughout this paper, C^{m×n} stands for the set of all m × n complex matrices; C^{m}_{H} stands for the set of all m × m complex Hermitian matrices; A^{*}, r(A) and R(A) stand for the conjugate transpose, rank and range (column space) of a matrix A ∈ C^{m×n}, respectively; I_m denotes the identity matrix of order m; and [A, B] denotes a row block matrix consisting of A and B. The Moore-Penrose inverse of A ∈ C^{m×n}, denoted by A^{†}, is defined to be the unique solution X satisfying the four matrix equations AXA = A, XAX = X, (AX)^{*} = AX and (XA)^{*} = XA; the symbols E_A and F_A stand for E_A = I_m − AA^{†} and F_A = I_n − A^{†}A. An X ∈ C^{n×m} is called a g-inverse of A ∈ C^{m×n}, denoted by A^{−}, if it satisfies AXA = A; an X ∈ C^{m}_{H} is called a Hermitian g-inverse of A ∈ C^{m}_{H}, denoted by A^{∼}, if it satisfies AXA = A, and a reflexive Hermitian g-inverse of A ∈ C^{m}_{H}, denoted by A^{∼}_{r}, if it satisfies AXA = A and XAX = X. The quantities i_{+}(A) and i_{−}(A), called the partial inertia of A ∈ C^{m}_{H}, are defined to be the numbers of positive and negative eigenvalues of A counted with multiplicities, respectively. A ≻ 0 (A ⪰ 0, A ≺ 0, A ⪯ 0) means that A is Hermitian positive definite (positive semi-definite, negative definite, negative semi-definite); two matrices A, B ∈ C^{m}_{H} are said to satisfy the inequality A ≻ B (A ⪰ B) in the Löwner partial ordering if A − B is positive definite (positive semi-definite).

Problem formulation

Matrix rank and inertia optimization problems are a class of discontinuous optimization problems in which the decision variables are matrices running over certain matrix sets, while the rank and inertia of the variable matrices are taken as integer-valued objective functions. Because the rank and inertia of a matrix are always integers, no approximation methods can be used when finding the maximal and minimal possible ranks and inertias of a matrix-valued function. Consequently, matrix rank and inertia optimization problems do not coincide with any of the ordinary continuous or discrete problems in optimization theory. Few people have paid attention to this kind of optimization problem, and no complete theory has been established. However, the present author has worked on this topic with great effort over the past 30 years and has contributed a large number of results on matrix rank and inertia optimization problems.
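Since the rank and the partial inertia i±(A) are the central objective functions throughout the paper, a small numerical sketch may help fix ideas. The following NumPy fragment (an illustration, not part of the paper) computes r(A), i+(A) and i−(A) of a Hermitian matrix from its eigenvalues and evaluates φ(X) = (AXB + C)M(AXB + C)* + D at a random X; the matrix sizes and the tolerance are arbitrary choices.

import numpy as np

def inertia(H, tol=1e-10):
    # Counts of positive, negative, and zero eigenvalues of a Hermitian H;
    # r(H) = i_plus + i_minus.
    w = np.linalg.eigvalsh(H)
    return int(np.sum(w > tol)), int(np.sum(w < -tol)), int(np.sum(np.abs(w) <= tol))

rng = np.random.default_rng(0)
n, p, m, q = 4, 3, 3, 4
A = rng.normal(size=(n, p)); B = rng.normal(size=(m, q))
C = rng.normal(size=(n, q))
M = rng.normal(size=(q, q)); M = (M + M.conj().T) / 2   # Hermitian M
D = rng.normal(size=(n, n)); D = (D + D.conj().T) / 2   # Hermitian D

def phi(X):
    R = A @ X @ B + C
    return R @ M @ R.conj().T + D      # phi(X) is Hermitian by construction

ip, im, i0 = inertia(phi(rng.normal(size=(p, m))))
print("i+ =", ip, " i- =", im, " rank =", ip + im)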
A major purpose of this paper is to develop a unified optimization theory for the ranks, inertias and partial orderings of quadratic matrix-valued functions by using purely algebraic operations of matrices, which enables us to handle many mathematical and applied problems on the behavior of quadratic matrix-valued functions, quadratic matrix equations and quadratic matrix inequalities. The rank and inertia of a (Hermitian) matrix are two of the oldest basic concepts in linear algebra, describing the dimension of the row/column vector space and the sign distribution of the eigenvalues of the square matrix; they are well understood and easy to compute by the well-known elementary or congruence matrix operations. These two quantities play an essential role in characterizing algebraic properties of (Hermitian) matrices. Because the concepts of rank and inertia are so generic in linear algebra, there is no doubt that a primary task in linear algebra is to establish (expansion) formulas for calculating the ranks and inertias of matrices, as many as possible. However, this valuable work was neglected in the development of linear algebra, and a great chance of discovering thousands of rank and inertia formulas, some of which are given in Lemmas 3.2, 3.3, 3.5 and 3.6 below, was lost in the earlier period of linear algebra. This paper tries to make some essential contributions to establishing formulas for the ranks and inertias of some quadratic matrix-valued functions. Taking the rank and inertia of (1.3) as integer-valued objective functions, we solve the following problems.

Problem 2.1 For the function in (1.3), establish explicit formulas for calculating the global extremal ranks and inertias

max r(φ(X)), min r(φ(X)), max i_{±}(φ(X)), min i_{±}(φ(X)), where X runs over C^{p×m}. (2.1)

Problem 2.2 For the function in (1.3),
(i) establish necessary and sufficient conditions for the existence of an X ∈ C^{p×m} such that
φ(X) = 0; (2.2)
(ii) establish necessary and sufficient conditions for the inequalities
φ(X) ≻ 0, φ(X) ⪰ 0, φ(X) ≺ 0, φ(X) ⪯ 0 (2.3)
to hold for an X ∈ C^{p×m}, respectively;
(iii) establish necessary and sufficient conditions for
φ(X) ≻ 0, φ(X) ⪰ 0, φ(X) ≺ 0, φ(X) ⪯ 0 for all X ∈ C^{p×m} (2.4)
to hold, respectively, namely, to give identifying conditions for φ(X) to be a positive definite, positive semi-definite, negative definite or negative semi-definite function on complex matrices, respectively.

Problem 2.3 For the function in (1.3), establish necessary and sufficient conditions for the existence of X̂, X̃ ∈ C^{p×m} such that
φ(X̂) ⪯ φ(X) and φ(X̃) ⪰ φ(X) (2.5)
hold for all X ∈ C^{p×m}, respectively, and derive analytical expressions for the two matrices X̂ and X̃.

Preliminary results

The rank and inertia are two generic indices of finite-dimensional algebras. The results related to these indices cannot be replaced by any other quantitative tools in mathematics. A simple but striking fact about these indices is stated in the following lemma.

Lemma 3.1 Let H be a set of matrices in C^{m}_{H}. Then, in particular, the following hold.
(e) H has a matrix X ≻ 0 (X ≺ 0) if and only if max_{X∈H} i_{+}(X) = m (max_{X∈H} i_{−}(X) = m).
(f) All X ∈ H satisfy X ≻ 0 (X ≺ 0), namely, H is a subset of the cone of positive definite matrices (negative definite matrices), if and only if min_{X∈H} i_{+}(X) = m (min_{X∈H} i_{−}(X) = m).
(g) H has a matrix X ⪰ 0 (X ⪯ 0) if and only if min_{X∈H} i_{−}(X) = 0 (min_{X∈H} i_{+}(X) = 0).
(h) All X ∈ H satisfy X ⪰ 0 (X ⪯ 0), namely, H is a subset of the cone of positive semi-definite matrices (negative semi-definite matrices), if and only if max_{X∈H} i_{−}(X) = 0 (max_{X∈H} i_{+}(X) = 0).

The question of whether a given matrix-valued function is semi-definite everywhere is ubiquitous in matrix theory and applications.
Lemma 3.1(e)-(h) asserts that once explicit formulas for calculating the global maximal and minimal inertias of a Hermitian matrix-valued function are established, we can use them as a quantitative tool, as demonstrated in Sections 2-7 below, to derive necessary and sufficient conditions for the matrix-valued function to be definite or semi-definite. In addition, we are able to use these inertia formulas to establish various matrix inequalities in the Löwner partial ordering, and to solve many matrix optimization problems in the Löwner partial ordering. The following results are obvious or well known (see [8]) and will be used in the latter part of this paper; in particular, Lemmas 3.2 and 3.3 provide expansion formulas for the ranks and inertias of block matrices.

Lemma 3.4 ([5]) Let A ∈ C^{m×p}, B ∈ C^{q×n} and C ∈ C^{m×n} be given. Then the matrix equation AXB = C is consistent if and only if R(C) ⊆ R(A) and R(C^{*}) ⊆ R(B^{*}), or equivalently, AA^{†}CB^{†}B = C. In this case, the general solution can be written as
X = A^{†}CB^{†} + F_A V_1 + V_2 E_B,
where V_1, V_2 ∈ C^{p×q} are arbitrary.

Lemma 3.5 ([3, 6, 11]) Let A ∈ C^{m×n}, B ∈ C^{m×p} and C ∈ C^{q×n} be given, and let X ∈ C^{p×q} be a variable matrix. Then, the global maximal and minimal ranks of A + BXC are given by
max_X r(A + BXC) = min{ r[A, B], r[A; C] },
min_X r(A + BXC) = r[A, B] + r[A; C] − r([A, B; C, 0]),
where [A; C] denotes the column block matrix consisting of A and C.

Lemma 3.6 ([4]) Let A ∈ C^{m}_{H}, B ∈ C^{m×n} and C ∈ C^{p×m} be given, and let X ∈ C^{n×p} be a variable matrix. Then, the global extremal ranks and inertias of the associated Hermitian expression are given in closed form.

Main results

We first solve Problem 2.1 through a linearization method and Lemma 3.6.

Theorem 4.1 Let φ(X) be as given in (1.3), and define the corresponding block matrices N_1, N_2, N_3. Then, the global maximal and minimal ranks and inertias of φ(X) are given in closed form.

Proof. It is easy to verify from (3.6) that the rank and inertia of φ(X) in (1.3) can be calculated from those of the following linear matrix-valued function. Noting (4.7) and (4.8) and applying Lemma 3.6 to (4.9), we first obtain the extremal values; it is then easy to derive the stated formulas from Lemmas 3.2 and 3.3 by elementary matrix operations and congruence matrix operations. Hence, the following hold.

(b) φ(X) is nonsingular for all X ∈ C^{p×m} if and only if r(D + CMC^{*}) = n and one of four rank conditions holds.
(c) There exists an X ∈ C^{p×m} such that φ(X) = 0, namely, the matrix equation in (2.2) is consistent, if and only if the corresponding rank and inertia conditions hold.
(d) There exists an X ∈ C^{p×m} such that φ(X) ≻ 0, namely, the matrix inequality φ(X) ≻ 0 is feasible, if and only if i_{+}(N_1) = n and i_{+}(N_3) ⩾ n, or i_{+}(N_1) ⩾ n and i_{+}(N_3) = n.
(e) There exists an X ∈ C^{p×m} such that φ(X) ≺ 0, namely, the matrix inequality φ(X) ≺ 0 is feasible, if and only if the analogous conditions on i_{−} hold.
(h) There exists an X ∈ C^{p×m} such that φ(X) ⪰ 0, namely, the matrix inequality φ(X) ⪰ 0 is feasible, if and only if the corresponding inertia conditions hold.
(i) There exists an X ∈ C^{p×m} such that φ(X) ⪯ 0, namely, the matrix inequality φ(X) ⪯ 0 is feasible, if and only if the corresponding inertia conditions hold.

Setting (4.24) equal to n, we see that φ(X) is nonsingular for all X ∈ C^{p×m} if and only if r(D + CMC^{*}) = n and one of four rank equalities holds, which is further equivalent to the result in (b) by comparing both sides of the four rank equalities. ✷

Consider the special case
φ(X) = (AXB + C)(AXB + C)^{*} − I_n, (4.25)
where φ(X) = 0 means that the rows of AXB + C are orthogonal. Further, if AXB + C is square, φ(X) = 0 means that AXB + C is unitary. Applying Theorem 4.1 and Corollary 4.2 to (4.25) yields a group of consequences.

Theorem 4.3 Let φ(X) be as given in (1.3), and define the corresponding block matrices. Then, the global maximal and minimal ranks and inertias of φ(X) are given in closed form.

Whether a given function is positive or nonnegative everywhere is a fundamental research subject in both elementary and advanced mathematics. It was realized in matrix theory that the complexity status of the definite and semi-definite feasibility problems for a general matrix-valued function is NP-hard.
Corollary 4.2(d)-(k), however, shows that we are really able to characterize the definiteness and semi-definiteness of (1.3) by ordinary and elementary methods. These results set up a criterion for characterizing the definiteness and semi-definiteness of nonlinear matrix-valued functions, and will prompt more investigations of this challenging topic. In particular, the definiteness and semi-definiteness of some nonlinear matrix-valued functions generated from (1.3) can be identified; we shall present them in another paper.

Recall that a Hermitian matrix A can uniquely be decomposed as the difference of two disjoint Hermitian positive semi-definite matrices (4.32). Applying this assertion to (1.3), we obtain the following result.

Corollary 4.4 Let φ(X) be as given in (1.3). Then, φ(X) can always be decomposed as the difference φ_1(X) − φ_2(X) in (4.33) for all X ∈ C^{p×m}.

Proof. Note from (4.32) that the two Hermitian matrices D and M in (1.3) can uniquely be so decomposed, so that φ_1(X) and φ_2(X) in (4.33) are positive semi-definite matrix-valued functions. ✷

Suppose further that R(C^{*}) ⊆ R(B^{*}), and let N denote the corresponding block matrix. Two specializations of (1.3) are
φ(X) = (AX + C)M(AX + C)^{*} + D,
where A ∈ C^{n×p}, C ∈ C^{n×m}, D ∈ C^{n}_{H} and M ∈ C^{m}_{H} are given and X ∈ C^{p×m} is a variable matrix, and
φ(X) = (XB + C)M(XB + C)^{*} + D,
where B ∈ C^{p×m}, C ∈ C^{n×m}, D ∈ C^{n}_{H} and M ∈ C^{m}_{H} are given and X ∈ C^{n×p} is a variable matrix, with the corresponding block matrices defined accordingly.

We next solve the two quadratic optimization problems in (2.5), where the two matrices φ(X̃) and φ(X̂), when they exist, are called the global maximal and minimal matrices of φ(X) in (1.3) in the Löwner partial ordering, respectively. In this case, the following hold; correspondingly, the optimal matrices are given with V_1 and V_2 arbitrary matrices.
(b) The inertias and ranks of φ(X̂) and φ(X) − φ(X̂) are given by the corresponding closed-form formulas.

Proof. Let ψ(X) = φ(X) − φ(X̂). Then, φ(X) ⪰ φ(X̂) is equivalent to ψ(X) ⪰ 0. Under A = 0, we see from Corollary 4.2(j) that ψ(X) ⪰ 0 holds for all X ∈ C^{p×m} if and only if the stated conditions hold. In this case, the following hold; correspondingly, the optimal matrices are given with V_1 and V_2 arbitrary matrices.
(b) The inertias and ranks of φ(X̃) and φ(X) − φ(X̃) are given by the corresponding closed-form formulas.

A matrix-valued function φ(X) is said to be convex if and only if
φ((X_1 + X_2)/2) ⪯ (φ(X_1) + φ(X_2))/2
holds for all X_1, X_2 ∈ C^{p×m}, and is said to be concave if and only if the reverse inequality holds for all X_1, X_2 ∈ C^{p×m}. It is easy to verify that the associated difference is a special case of (1.3) as well. Applying Theorem 4.1 to (5.3), we obtain the following result. In consequence, the following hold: φ(X) is strictly convex if and only if both BMB^{*} ≻ 0 and r(A) = n.
(e) There exist X_1, X_2 ∈ C^{p×m} with X_1 ≠ X_2 such that equality is attained for φ_1 if and only if either BMB^{*} ⊀ 0 or r(A) < p. Correspondingly, if the associated difference is a negative semi-definite matrix-valued function, then φ(X) is concave.

6 Semi-definiteness of general Hermitian quadratic matrix-valued functions and solutions of the corresponding partial ordering optimization problems

As an extension of (1.3), we consider the general quadratic matrix-valued function
φ(X_1, . . . , X_k) = (A_1X_1B_1 + · · · + A_kX_kB_k + C)M(A_1X_1B_1 + · · · + A_kX_kB_k + C)^{*} + D, (6.1)
where 0 ≠ A_i ∈ C^{n×p_i}, B_i ∈ C^{m_i×q}, C ∈ C^{n×q}, D ∈ C^{n}_{H} and M ∈ C^{q}_{H} are given, and X_i ∈ C^{p_i×m_i} is a variable matrix, i = 1, . . . , k. We treat it as a combined non-homogeneous linear and quadratic Hermitian matrix-valued function φ = τ ∘ ψ with
ψ : C^{p_1×m_1} ⊕ · · · ⊕ C^{p_k×m_k} → C^{n×q}, τ : C^{n×q} → C^{n}_{H}.
This general quadratic function between matrix spaces includes many ordinary Hermitian quadratic matrix-valued functions as its special cases. Because more than one variable matrix occurs in (6.1), we do not know at the current time how to establish analytical formulas for the extremal ranks and inertias of (6.1). In this section, we only consider the following problems:
(i) establish necessary and sufficient conditions for φ(X_1, . . . , X_k) ⪰ 0 (φ(X_1, . . .
. . , X_k) ⪯ 0) to hold for all X_1, . . . , X_k;
(ii) establish necessary and sufficient conditions for the existence of X̂_1, . . . , X̂_k and X̃_1, . . . , X̃_k such that
φ(X̂_1, . . . , X̂_k) ⪯ φ(X_1, . . . , X_k), φ(X̃_1, . . . , X̃_k) ⪰ φ(X_1, . . . , X_k) (6.2)
hold for all X_1, . . . , X_k in the Löwner partial ordering, respectively, and give analytical expressions for X̂_1, . . . , X̂_k and X̃_1, . . . , X̃_k.

Then, the following hold. The stated condition, by (4.66)-(4.68), is further equivalent to the one given; in this case, the corresponding equality holds, and therefore (6.22) is equivalent to a general two-sided linear matrix equation involving k unknown matrices. The existence of solutions of this equation and its general solution can be derived from the Kronecker product of matrices; the details are omitted here. Result (d) can be shown similarly. ✷

Two consequences of Theorem 6.1 are given below. The first concerns the function
ψ(X_1, . . . , X_k) = Σ_{i=1}^{k} (A_iX_iB_i + C_i)M_i(A_iX_iB_i + C_i)^{*} + D, (6.24)
where 0 ≠ A_i ∈ C^{n×p_i}, B_i ∈ C^{m_i×q_i}, C_i ∈ C^{n×q_i}, D ∈ C^{n}_{H} and M_i ∈ C^{q_i}_{H} are given, and X_i ∈ C^{p_i×m_i} is a variable matrix, i = 1, . . . , k. Also define the corresponding block matrices. Then, the following hold.
(c) There exist X̂_1, . . . , X̂_k such that
ψ(X̂_1, . . . , X̂_k) ⪯ ψ(X_1, . . . , X_k) (6.25)
holds for all X_1 ∈ C^{p_1×m_1}, . . . , X_k ∈ C^{p_k×m_k} if and only if the stated conditions hold. In this case, the matrices X̂_1, . . . , X̂_k satisfying (6.25) are the solutions of the k linear matrix equations.
(d) The reverse inequality (6.30) holds for all X_1 ∈ C^{p_1×m_1}, . . . , X_k ∈ C^{p_k×m_k} if and only if the analogous conditions hold; in this case, the matrices X̃_1, . . . , X̃_k satisfying (6.30) are the solutions of the k linear matrix equations. The corresponding extremal matrices follow.

Proof. Rewrite (6.24) as a special case of (6.1). Applying Theorem 6.1 to it, we obtain the desired result. ✷

The second consequence concerns the function
ψ(X_1, . . . , X_k) = [A_1X_1B_1 + C_1, . . . , A_kX_kB_k + C_k] M [A_1X_1B_1 + C_1, . . . , A_kX_kB_k + C_k]^{*} + D, (6.36)
where 0 ≠ A_i ∈ C^{n×p_i}, B_i ∈ C^{m_i×q_i}, C_i ∈ C^{n×q_i}, D ∈ C^{n}_{H} and M ∈ C^{q_1+···+q_k}_{H} are given, and X_i ∈ C^{p_i×m_i} is a variable matrix, i = 1, . . . , k. Also define the corresponding block matrices. Then, the following hold.
(c) There exist X̂_1, . . . , X̂_k such that
ψ(X̂_1, . . . , X̂_k) ⪯ ψ(X_1, . . . , X_k) (6.37)
holds for all X_1 ∈ C^{p_1×m_1}, . . . , X_k ∈ C^{p_k×m_k} if and only if
BMB^{*} ⪰ 0, R(BMC^{*}) ⊆ R(BMB^{*}). (6.38)
In this case, the matrices X̂_1, . . . , X̂_k satisfying (6.37) are the solutions of the corresponding linear matrix equation.
(d) The reverse inequality (6.42) holds for all X_1 ∈ C^{p_1×m_1}, . . . , X_k ∈ C^{p_k×m_k} if and only if the analogous conditions hold; in this case, the matrices X̃_1, . . . , X̃_k satisfying (6.42) are the solutions of the corresponding linear matrix equation, and the difference ψ(X_1, . . . , X_k) − ψ(X̂_1, . . . , X̂_k) can be expressed accordingly.

Proof. Rewrite (6.36) as a special case of (6.1). Applying Theorem 6.1 to it, we obtain the desired result. ✷

Many consequences can be derived from the results in this section. For instance, (i) the semi-definiteness and the global extremal matrices in the Löwner partial ordering of constrained QHMFs can be derived, and (ii) the semi-definiteness and the global extremal matrices in the Löwner partial ordering of matrix expressions that involve partially specified matrices can be derived.

7 Some optimization problems on the matrix equation AXB = C

Consider the linear matrix equation
AXB = C, (7.1)
where A ∈ C^{m×n}, B ∈ C^{p×q} and C ∈ C^{m×q} are given, and X ∈ C^{n×p} is an unknown matrix. Eq. (7.1) is one of the best-known matrix equations in matrix theory, and many papers on this equation and its applications can be found in the literature. In Penrose's seminal paper [5], the consistency conditions and the general solution of (7.1) were completely derived by using generalized inverses of matrices.
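The Penrose consistency condition AA†CB†B = C and the particular solution X = A†CB† just mentioned are easy to check numerically. The following NumPy fragment is an illustrative sketch (the sizes and data are arbitrary), not part of the paper.

import numpy as np

rng = np.random.default_rng(2)
m, n, p, q = 5, 4, 3, 5
A = rng.normal(size=(m, n))
B = rng.normal(size=(p, q))
X0 = rng.normal(size=(n, p))
C = A @ X0 @ B                      # construct a consistent right-hand side

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)

# Consistency test: AXB = C is solvable iff A A^{+} C B^{+} B = C.
consistent = np.allclose(A @ Ap @ C @ Bp @ B, C)

# A particular solution when consistent:
X = Ap @ C @ Bp
print(consistent, np.allclose(A @ X @ B, C))   # expect: True True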
If (7.1) is not consistent, one often needs to find approximate solutions under various optimality criteria; in particular, the least-squares criterion is ubiquitously used in optimization problems and almost always admits an explicit global solution. For (7.1), a least-squares solution is defined to be a matrix X ∈ C^{n×p} that minimizes the quadratic objective function
f(X) = tr[(C − AXB)(C − AXB)^{*}]. (7.2)
The normal equation corresponding to (7.2) is given by
A^{*}AXBB^{*} = A^{*}CB^{*}, (7.3)
which is always consistent, and the following result is well known.

Lemma 7.1 The general least-squares solution of (7.1) can be written as
X = A^{†}CB^{†} + F_A V_1 + V_2 E_B,
where V_1, V_2 ∈ C^{n×p} are arbitrary. (A numerical sketch of this solution is given at the end of the paper.)

Define the two QHMFs associated with (7.2) as
φ_1(X) = (C − AXB)(C − AXB)^{*}, φ_2(X) = (C − AXB)^{*}(C − AXB).
Hence, we first obtain the following result from Lemma 3.5.

(a) There exists an X̂ ∈ C^{n×p} such that φ_1(X) ⪰ φ_1(X̂) holds for all X ∈ C^{n×p} if and only if the corresponding range condition holds. In this case, the minimizers are given with V_1, V_2 ∈ C^{n×p} arbitrary.
(b) There exists an X̂ ∈ C^{n×p} such that φ_2(X) ⪰ φ_2(X̂) holds for all X ∈ C^{n×p} if and only if
R(C^{*}A) ⊆ R(B^{*}). (7.11)
In this case, the minimizers are given with V_1, V_2 ∈ C^{n×p} arbitrary.

Theorem 7.3 also motivates the following consequence.

Theorem 7.4 Let A ∈ C^{m×n}, B ∈ C^{p×q} and C ∈ C^{m×q} be given. Then, there always exists an X ∈ C^{n×p} that attains
min{ A^{*}(C − AXB)(C − AXB)^{*}A : X ∈ C^{n×p} }, (7.13)
min{ B(C − AXB)^{*}(C − AXB)B^{*} : X ∈ C^{n×p} }, (7.14)
and the general solution is given with V_1 and V_2 arbitrary matrices; namely, the solutions of the three minimization problems in (7.15) are the same. The corresponding results can also be derived from Theorem 6.1.

Concluding remarks

We established in this paper a group of explicit formulas for calculating the global maximal and minimal ranks and inertias of (1.3) when X runs over the whole matrix space. Taking these rank and inertia formulas as quantitative tools, we characterized many algebraic properties of (1.3), including solvability conditions for some nonlinear matrix equations and inequalities generated from (1.3), and analytical solutions to the two well-known classic optimization problems on φ(X) in the Löwner partial ordering. The results obtained and the techniques adopted for solving these matrix rank and inertia optimization problems enable us to make new extensions of some classic results on quadratic forms, quadratic matrix equations and quadratic matrix inequalities, and to derive many new algebraic properties of nonlinear matrix functions that could hardly be handled before. As a continuation of this work, we mention some research problems on QHMFs for further consideration.

(i) Characterize algebraic and topological properties of the generalized Stiefel manifolds composed of the collections of all matrices satisfying (4.3)-(4.6). In these cases, it would be of interest to establish possible formulas for calculating the extremal ranks and inertias of the associated nonlinear matrix-valued functions (biquadratic matrix-valued functions), in particular, to find criteria for identifying the semi-definiteness of these nonlinear matrix-valued functions, and to solve the corresponding Löwner partial ordering optimization problems.

(x) Consider the two special forms of (6.1) and (6.24) obtained by setting X_1 = · · · = X_k = X. In this case, find criteria for the QHMF to be semi-definite, and solve for its global extremal matrices in the Löwner partial ordering.
(xi) Many expressions that involve matrices and their generalized inverses can be represented as quadratic matrix-valued functions. In these cases, it would be of interest to establish formulas for calculating the maximal and minimal ranks and inertias of these matrix expressions with respect to the reflexive Hermitian g-inverse A^{∼}_{r} of a Hermitian matrix A and the g-inverse B^{−} of B. Some recent work on the ranks and inertias of the Hermitian Schur complement D − B^{*}A^{∼}B and their applications was given in [4, 9]. Another type of subsequent work is to extend the results in the previous sections to the corresponding operator-valued functions, for which fewer quantitative methods are available.
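As the numerical sketch promised after Lemma 7.1, the following fragment checks that X = A†CB† minimizes ‖C − AXB‖_F for an inconsistent right-hand side. The random-perturbation test is only a heuristic check under arbitrary data, not a proof, and the code is an illustration rather than part of the paper.

import numpy as np

rng = np.random.default_rng(3)
m, n, p, q = 6, 4, 3, 5
A = rng.normal(size=(m, n))
B = rng.normal(size=(p, q))
C = rng.normal(size=(m, q))          # generic C: AXB = C is inconsistent

X_ls = np.linalg.pinv(A) @ C @ np.linalg.pinv(B)   # least-squares solution A^{+} C B^{+}
f = lambda X: np.linalg.norm(C - A @ X @ B)

# Heuristic check: no random perturbation of X_ls should reduce the residual.
worse = all(f(X_ls + 0.1 * rng.normal(size=X_ls.shape)) >= f(X_ls) - 1e-12
            for _ in range(1000))
print(f(X_ls), worse)    # expect: the minimal residual, True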
2013-01-11T18:28:54.000Z
2013-01-11T00:00:00.000
{ "year": 2013, "sha1": "01edc9db0b2851283edcd38a50f1ebb9f330f4ca", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "01edc9db0b2851283edcd38a50f1ebb9f330f4ca", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
15163245
pes2o/s2orc
v3-fos-license
PARALLEL EVOLUTION OF LOCAL ADAPTATION AND REPRODUCTIVE ISOLATION IN THE FACE OF GENE FLOW

Parallel evolution of similar phenotypes provides strong evidence for the operation of natural selection. Where these phenotypes contribute to reproductive isolation, they further support a role for divergent, habitat-associated selection in speciation. However, the observation of pairs of divergent ecotypes currently occupying contrasting habitats in distinct geographical regions is not sufficient to infer parallel origins. Here we show striking parallel phenotypic divergence between populations of the rocky-shore gastropod, Littorina saxatilis, occupying contrasting habitats exposed to either wave action or crab predation. This divergence is associated with barriers to gene exchange but, nevertheless, genetic variation is more strongly structured by geography than by ecotype. Using approximate Bayesian analysis of sequence data and amplified fragment length polymorphism markers, we show that the ecotypes are likely to have arisen in the face of continuous gene flow and that the demographic separation of ecotypes has occurred in parallel at both regional and local scales. Parameter estimates suggest a long delay between colonization of a locality and ecotype formation, perhaps because the postglacial spread of crab populations was slower than the spread of snails. Adaptive differentiation may not be fully genetically independent despite being demographically parallel. These results provide new insight into a major model of ecologically driven speciation.

Speciation is a central process in evolutionary biology. A growing consensus suggests that divergent natural selection in contrasting habitats, generating local adaptation, may be a common impetus for the evolution of reproductive isolation and thus speciation (Schluter 2009; Nosil 2012). The response to selection is straightforward in allopatry, but both local adaptation and the subsequent enhancement of reproductive isolation may be opposed by gene flow and recombination where habitats are connected by dispersal (Felsenstein 1981; Smadja and Butlin 2011). Thus, although the traditional categorization of speciation processes into allopatric, parapatric, and sympatric classes may be an oversimplification, the spatial context for speciation and the extent of gene flow at different stages during speciation are still important in most scenarios, determining whether and how rapidly reproductive isolation will evolve. Understanding local adaptation and speciation therefore requires inferences about the biogeography and past demography of populations, factors that may have changed substantially over the course of speciation (Hewitt 2011; Abbott et al. 2013). For example, speciation might be promoted by alternating cycles of separation by geographical barriers and secondary contact (Bierne et al. 2011), or local adaptation might be achieved more readily with some spatial arrangements of habitats than with others (Gavrilets et al. 2007). In principle, inferences about the sequence of events can be made using genetic data and coalescent-modeling approaches. For example, in the case of cave salamanders (Gyrinophilus), a model of continuous gene flow during divergence was supported over an alternative that allowed for a period of allopatry (Niemiller et al. 2008; Pinho and Hey 2010).
However, it was not possible to exclude short allopatric intervals and, in general, the reconstruction of complex gene-flow histories is expected to be challenging (Strasburg and Rieseberg 2013). Cases of parallel local adaptation are of particular interest because they provide strong evidence for a role of natural selection. Where reproductive isolation repeatedly results from adaptation to similarly divergent pairs of environments, that is, "parallel speciation," this further shows that natural selection can drive speciation (Schluter and Nagel 1995). The natural replication provides the opportunity for powerful tests of underlying processes (Jones et al. 2012). However, Johannesson et al. (2010) have emphasized that the pattern of parallel local adaptation in the presence of current gene flow can result from very different historical sequences of events. They distinguished four scenarios. Either the initial adaptive divergence occurred once, perhaps in allopatry, with subsequent colonization by differentially adapted forms of similar pairs of environments (scenario A) or, alternatively, evolutionary divergence occurred repeatedly in multiple localities, again with or without spatial separation (scenario B). Repeated evolution may depend on an independent origin of adaptive genetic variation in each population (B1), a common origin of locally adaptive alleles from standing genetic variation (B2), or concerted adaptation where each advantageous allele arose once and was then shared by gene flow between geographically separated populations in the same habitat (B3). Empirical separation of these alternatives requires, first, the use of putatively neutral genetic markers to establish the demographic history of the populations and, second, the analysis of loci underlying adaptation, whose history may be substantially different from that for neutral markers (as for the Eda locus, Colosimo et al. 2005, and other loci, Jones et al. 2012, in sticklebacks). Key loci underlying local adaptation may be identified by genetic analysis (as for the Eda locus) or by "outlier" analysis (Stinchcombe and Hoekstra 2008). For outlier analysis, the first step of establishing the demographic history is essential because the reliable identification of loci under divergent selection requires a robust model of the demographic history of the populations analyzed (Crisci et al. 2012). Parallel origins for locally adapted ecotypes have often been invoked but have rarely been tested against explicit alternative hypotheses. Even in classic examples of colonization of lakes (Hohenlohe et al. 2012; Kautt et al. 2012) or caves (Strecker et al. 2012) by fish, alternate histories are conceivable. Phylogenetic (Kautt et al. 2012) or clustering (Strecker et al. 2012) approaches do not contrast different models in the context of historical demographic change and gene flow. Here we test explicit alternative scenarios for the origin of parallel local adaptation in the rough periwinkle, Littorina saxatilis, a common rocky-shore gastropod from the North Atlantic that bears live young and has low lifetime dispersal (Reid 1996). In many locations, one finds two ecotypes in close proximity: a small, thin-shelled one with a large aperture, and a larger, thick-shelled form with a small aperture (Fig. 1). These ecotypes are adapted to withstand wave exposure and crab predation, respectively (reviewed in Johannesson et al. 2010).
There is evidence for assortative mating, so that each morph mates preferentially with similar individuals (Conde-Padín et al. 2008), and for a genome-wide partial barrier to gene exchange (Grahame et al. 2006), with evidence for divergent selection on some loci (Wilding et al. 2001). The ecotypes (here referred to as "wave" and "crab" ecotypes) have been studied extensively, but to date largely independently, in three European regions: Galicia in northwest Spain (where the ecotypes are called "smooth unbanded" and "ridged banded," respectively), the west coast of Sweden ("exposed" and "sheltered"), and the northeast coast of England ("high-shore" and "mid-shore"). It has been suggested that ecotype differentiation occurred in parallel on different shores within Sweden and Spain (Johannesson et al. 1993; Johannesson 2001; Rolán-Alvarez et al. 2004; Panova et al. 2006; Quesada et al. 2007), and this has been widely accepted (Ostevik et al. 2012), although the evidence has been questioned. Furthermore, parallel differentiation at the level of regions (i.e., Britain, Sweden, Spain) has not previously been tested: it is possible that the ecotypes originated independently in different parts of Europe but that there was a single origin within each region. Recent phylogeographic analyses (Doellman et al. 2011; Panova et al. 2011) suggest that Iberian populations have been genetically independent from northern European populations for a longer period than the separation of Swedish from British populations, which are likely to have shared a postglacial colonization history. Here we analyze samples from all three regions together, for the first time. We test for parallel adaptation by asking to what extent ecotypes have diverged in the same phenotypic direction between regions and between sites within regions. We then combine mitochondrial and nuclear DNA sequence and amplified fragment length polymorphism (AFLP) data, using an Approximate Bayesian Computation (ABC) framework (Beaumont 2010; Wegmann et al. 2010), to compare models for the demographic history of the populations. Our question is this: was the origin of the ecotypes a single event or a series of parallel events, occurring either within each locality or within each European region? Our results provide strong support for parallel demographic separation at both spatial scales.

MORPHOMETRIC ANALYSIS

Each snail was photographed with a Leica MZ12 stereoscopic microscope and a Leica digital ICA video camera. The presence of shell scars was noted, indicating that the individual had survived a crab attack (Vermeij et al. 1981; Johannesson 1986). We predicted that scars would be more frequent in the "crab" habitat because of a greater probability of both attack and survival. Adult shell images (n = 26-30 per ecotype per location) were analyzed using 11 landmarks positioned on the digitized shell image following Conde-Padín et al. (2009). For each individual, we measured centroid size (CS) and shape, using relative warps (RW). The relative warps were computed using the software packages TpsDig and TpsRelw (Rohlf 2005, 2006), excluding the uniform component, following Carvajal-Rodríguez et al. (2005). We used the scaling option α = 0, which weights all landmarks equally. We performed a three-way analysis of variance (ANOVA) on size and shape variables, with fixed factors region (Spain, Britain, and Sweden) and ecotype (Wave, Crab), and locality as a random factor nested within the interaction between fixed factors.
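A simplified version of the size analysis just described can be run in Python with statsmodels. The sketch below treats locality as a random intercept rather than a factor nested within the fixed-factor interaction (a deliberate simplification of the authors' design), and all data, effect sizes, and sample counts are synthetic placeholders.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
rows = []
for region in ["Spain", "Britain", "Sweden"]:
    for site in ["1", "2"]:
        loc_effect = rng.normal(scale=0.5)            # random locality effect
        for ecotype in ["Crab", "Wave"]:
            for _ in range(28):                       # ~26-30 shells per ecotype per site
                size = 10 + 3 * (ecotype == "Crab") + loc_effect + rng.normal()
                rows.append((region, region[:2] + site, ecotype, size))
df = pd.DataFrame(rows, columns=["region", "locality", "ecotype", "size"])

# Mixed model: region, ecotype, and their interaction as fixed effects,
# locality as a random intercept.
model = smf.mixedlm("size ~ C(region) * C(ecotype)", df, groups=df["locality"]).fit()
print(model.summary())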
We used a G-test to compare scar frequencies between ecotypes and regions.

DNA EXTRACTION AND SEQUENCING

Head-foot tissue was used for DNA extraction, using a CTAB protocol (Wilding et al. 2001). DNA concentration and purity were assessed using a NanoDrop spectrophotometer. DNA samples were purified with NucleoSpin columns following the manufacturer's instructions (Macherey-Nagel). All DNA samples were standardized to 50 ng·μL⁻¹. Primers designed from the annotated L. saxatilis partial mtDNA sequence (AJ132137; Wilding et al. 1999) and from sequences in Small and Gosling (2000) were used to amplify a 2004-bp region (in two overlapping fragments of 1028 and 1137 bp) encompassing the ND6 and tRNA-pro mitochondrial genes, as well as the 3′ end of the ND1 gene and the 5′ end of the Cyt-b gene (Table S1). Sixteen individuals were sequenced for each ecotype in each locality. The candidate nuclear genes were chosen from the Littorina Sequence Database (Canbäck et al. 2012). Three exon-primed intron-crossing (EPIC) markers were successfully designed, targeting a complete intron of the calreticulin (Cal), elongation factor 1 α (ElFac), and thioredoxin peroxidase 2 (ThioPer) genes (Table S1). Each gene was amplified in 16 individuals per ecotype from each locality; polymerase chain reaction (PCR) products were cloned and inserts sequenced (see Table S1 for details). Sequence data for nuclear loci were processed using the software Geneious Pro version 5.1.7 (Biomatters Ltd., Auckland, New Zealand). Primers and vector were trimmed and sequences were aligned with the clustalw2 algorithm (Larkin et al. 2007). All sequences were inspected at every polymorphic position to detect possible sequencing errors. The distinct alleles present in each alignment were identified with Arlequin. For each gene, the occurrence of no more than two alleles per individual was checked to identify PCR artifacts. Doubtful individuals were first amplified, cloned, and sequenced a second time, and ultimately discarded if ambiguities could not be resolved. Finally, only one allele was kept per individual and per gene. For all further analyses, the exonic regions were trimmed and all indels were deleted. The final alignments were composed of 192, 187, and 189 sequences of length 376, 314, and 341 bp, respectively, for Cal, ElFac, and ThioPer. The haplotype and nucleotide diversities were estimated in Arlequin. To obtain summary statistics for the ABC analysis, we used the Kimura 2-parameter model for consistency with our simulated data. The neutrality of the nuclear loci was verified with the Tajima's D and Fu's Fs tests, implemented in Arlequin, and significance was assessed with 10,000 simulations. A sequential Bonferroni correction for multiple tests was applied to the neutrality tests. In addition, a recombination test was performed for each gene with IMgc Online (Woerner et al. 2007). Recombination was detected only for ElFac, and the first 44 bp at the 5′ extremity of the intron were removed from the alignment to exclude possible recombining sites in further analyses. Haplotype networks were built with TCS version 1.21 (Clement et al. 2000) and edited with Inkscape 0.48.1 (www.inkscape.org). Nuclear sequence data have been submitted to GenBank with accession numbers: Cal HG792757-HG792783, ElFac HG792716-HG792756, and ThioPer HG792699-HG792715. The mtDNA fragment corresponds to GenBank Accession AJ132137, starting at position 710. A haplotype file for individuals studied here is available at Dryad: doi:10.5061/dryad.m186r.
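Summary statistics such as the nucleotide diversity π mentioned above (computed by the authors in Arlequin) are simple to define; the following stand-alone Python function illustrates the calculation on a toy alignment, which is invented purely for demonstration.

from itertools import combinations

def nucleotide_diversity(seqs):
    # Average pairwise proportion of differing sites (pi) for aligned sequences.
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * len(seqs[0]))

alignment = ["ACGTACGTAC",   # toy 10-bp haplotypes
             "ACGTACGTTC",
             "ACGAACGTAC",
             "ACGTACGTAC"]
print(nucleotide_diversity(alignment))   # pi averaged over 6 pairs and 10 sites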
AMPLIFIED FRAGMENT LENGTH POLYMORPHISMS

All 32 individuals per locality and ecotype were used in the AFLP analysis. Profiles were generated with two rare-cutter enzymes (EcoRI and PstI) to minimize homoplasy (Caballero et al. 2008). The method was based on Vos et al. (1995). A double digestion with 2 U of EcoRI and PstI (New England Biolabs, Ipswich, MA) was carried out in a final volume of 11.6 μL of 1× Eco buffer (New England Biolabs) containing 100 ng of genomic DNA and 3 μg of bovine serum albumin. Samples were digested for 3 h at 37 °C. Then, 5.5 μL of 1× ligation buffer (Invitrogen, Carlsbad, CA) containing 0.5 U of T4 DNA ligase (Invitrogen), 0.9 μM Eco-adaptor, and 0.9 μM Pst-adaptor were added to the digestion reaction. Samples were ligated for 16 h at 16 °C. Ligation reactions were diluted 1:4 and used as template for preselective PCRs. Six different selective PCRs were performed and 12 primer combinations were obtained (Table S1). Preselective and selective PCR conditions, electrophoresis, and scoring are described in Galindo et al. (2009) (and see Supporting Information). AFLPscore version 1.4 (Whitlock et al. 2008) was used to perform error-rate analysis using replicates (15% of the samples, chosen randomly and replicated from the digestion step), remove loci with low repeatability, and create the binary matrix (0, 1) containing the AFLP phenotypes. The mismatch error rate obtained with AFLPscore was 4.63% and the number of AFLP loci scored was 614. The AFLP dataset has been submitted to Dryad: doi:10.5061/dryad.m186r.

Outlier analysis was performed using the program Dfdist (Beaumont and Nichols 1996; http://www.maths.bris.ac.uk/~mamab/stuff/). Dfdist input files were created using the AFLP convert program (http://webs.uvigo.es/acraaj/tools.htm) and analyses were carried out following Galindo et al. (2009). Between-ecotype pairwise comparisons were performed independently for each locality, and those loci above the 95th percentile were considered outliers. Outliers within localities were combined for each region to determine the degree of sharing of outliers, because sharing is one possible indication of the repeated involvement of the same loci in the response to divergent selection. All the outliers detected in any of the six localities were removed from the dataset in further analyses, because demographic parameters are best estimated using exclusively loci that are not influenced by selection (Beaumont and Nichols 1996). After removing outliers, AFLP-SURV version 1.0 (Vekemans et al. 2002) was used to calculate F_ST values and individual pairwise relatedness, which was used to create multidimensional scaling plots, following the methodology of Jackson et al. (2012). Summary statistics for the ABC analysis were calculated using the same custom scripts as we used for simulated data. The analysis of molecular variance (AMOVA) was conducted in Arlequin.

Demographic models and parameters

Models are specified in Figure 3. There were 8 (9) parameters of interest for the parallel (old divergence) models, either within or between regions, plus two mutation parameters: MU, the mutation rate for nuclear sequence data, and AFLPMU, the mutation rate for AFLP sequences. Observed summary statistics are given in Table S2. We conducted exploratory simulations to ensure that the prior distributions for our demographic parameters encompassed the posterior distributions, while remaining biologically reasonable. Where parameters were shared between models, we used the same prior distribution.
Demographic models and parameters
Models are specified in Figure 3. There were 8 (9) parameters of interest for the parallel (old divergence) models, either within or between regions, plus two mutation parameters: MU, the mutation rate for nuclear sequence data, and AFLPMU, the mutation rate for AFLP sequences. Observed summary statistics are given in Table S2. We conducted exploratory simulations to ensure that the prior distributions for our demographic parameters encompassed the posterior distributions, while remaining biologically reasonable. Where parameters were shared between models, we used the same prior distribution. In the case of equivalent parameters between models (e.g., in the parallel divergence model the first time split, between ecotypes, is equivalent to the first time split between localities in the ancestral divergence model), we also used equal prior distributions. Prior distributions were log-uniform except for mutation rates and time proportions (PROPT, etc.), which were uniform, and ranges are given in Table S3. We set the mutation rate for mitochondrial loci to 1.5 × 10⁻⁸ per base per generation, based on a substitution rate of 3% per million years from the fossil record for Littorina species (after Reid et al. 1996; Wares and Cunningham 2001), as used by others (Wares et al. 2002; Blakeslee et al. 2008; Chapman et al. 2008; Cunningham 2008). For the three sequenced nDNA loci, we allowed the mutation rate to vary over one order of magnitude below the mitochondrial mutation rate: 1.5 × 10⁻⁸ to 1.5 × 10⁻⁹ per base per generation. For AFLP loci, we allowed the mutation rate to vary independently but over the same range as the nuclear loci. For the mtDNA sequence, we used a transition/transversion ratio of 0.91, based on third-position cytochrome b data from Reid et al. (1996). For the nDNA sequences, we used an unbiased transition/transversion ratio of 0.33. Because we conducted 10⁶ simulations per model, and used the same simulation set to test multiple individual observed datasets in some cases, we set the simulated sample sizes for the coalescent simulations as the geometric mean of the real sample sizes across the ecotypes/localities used as observed datasets.

ABC sampling
For each model, we performed 10⁶ standard ABC simulations using the software package ABCtoolbox for all markers combined. For the within-region models, we also performed 10⁶ simulations separately for the sequence data and for the AFLP data. We used fastsimcoal (Excoffier and Foll 2011) to simulate sequence data for four loci with lengths equal to the observed sequences (after pruning of ElFac to remove putative recombinants), and arlsumstat to calculate summary statistics. For the AFLP loci, we used fastsimcoal to simulate 462 separate loci, each one with a 20-base sequence, and an in-house program that converted the resulting sequence data into a binary matrix of AFLP alleles and calculated summary statistics from this binary matrix. For each simulated AFLP locus, one 20-bp haplotype was chosen at random and designated the "1" allele. All other haplotypes were designated as "0" alleles. AFLP phenotypes were then called assuming that genotypes 11 and 10 correspond to "band present" and genotype 00 corresponds to "band absent" (a sketch of this conversion follows). This process allowed us to simulate the asymmetrical mutation expected for presence and absence alleles at AFLP loci and the fact that loci fixed for the absence allele are not observed. Calculating summary statistics from the simulated phenotype matrix made them directly comparable to the summary statistics obtained from the real data.
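The in-house conversion program is not published; the sketch below re-implements the logic exactly as described above, with random stand-in haplotype identities in place of fastsimcoal output.

```python
# Sketch of the described haplotype-to-AFLP-phenotype conversion: one 20-bp
# haplotype per locus becomes the "1" (band-present) allele, genotypes 11 and
# 10 score as band present, 00 as absent, and loci fixed for absence are
# dropped. Haplotype identities here are random stand-ins, not simulator output.
import numpy as np

rng = np.random.default_rng(1)
n_individuals, n_loci = 32, 462

def locus_phenotypes(genotypes: np.ndarray) -> np.ndarray:
    """genotypes: (n_individuals, 2) array of haplotype IDs at one locus."""
    band_allele = rng.choice(np.unique(genotypes))      # random "1" allele
    present = (genotypes == band_allele).any(axis=1)    # 11 or 10 -> band present
    return present.astype(int)

phenotypes = np.column_stack([
    locus_phenotypes(rng.integers(0, 4, size=(n_individuals, 2)))
    for _ in range(n_loci)
])
# Loci fixed for the absence allele would never be scored in real data:
phenotypes = phenotypes[:, phenotypes.any(axis=0)]
print(phenotypes.shape)   # (individuals, observable loci)
```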
Summary statistics and estimation step
Summary statistics used for the sequence data (for each sample or sample pair) were Tajima's D, π, and Φ_ST based on Kimura 2-parameter distances between sequences, implemented in Arlequin/arlsumstat 3.5. For the AFLPs, we used heterozygosity, mean F_ST, the standard deviation of F_ST across loci, and the Jaccard distance. Because our summary statistics were numerous (56 for the sequence data; 22 for the AFLP data; 78 when combined), we used the partial least squares (PLS) method described in Wegmann et al. (2009) to reduce their dimensionality in the rejection step of the ABC procedure (see Supporting Information). Separate sets of PLS components were defined for simulations under each different model because of the variation in parameters. These PLS components were used to transform the summary statistics of the entire dataset of simulations, as well as the observed summary statistics, prior to the estimation stage. After retaining the closest 0.5-2% of simulations to the observed data based on PLS components, we took two approaches for the "regression adjustment" step, depending on whether we were interested in model comparison or parameter estimation. For model comparison, we used all summary statistics to perform postsampling adjustment, using the GLM method of Leuenberger and Wegmann (2010), to produce marginal densities, which were comparable between models. For parameter estimation, we used PLS components for both the distance step and the postsampling adjustment step. For combined datasets, we retained the closest 10,000 simulations (1%) to the observed data based on the Euclidean distance between PLS-transformed observed summary statistics and PLS-transformed simulated summary statistics, and used these retained summary statistics to estimate the parameter values that best reproduce the real-world data using the ABC-GLM procedure of Leuenberger and Wegmann (2010), implemented in ABCtoolbox. For the AFLP data, we took the same approach, but retained the closest 20,000 simulations to the observed data. For sequence data alone, we retained the closest 5000 simulations to the observed dataset. These numbers of retained simulations were chosen on the basis of the P-values (the fraction of retained simulations with likelihood less than or equal to the likelihood of the observed data under the GLM).

Model comparison and validation
Models were compared using Bayes factors, which are equal to the ratio of the marginal densities between models, and posterior probabilities, which are approximately equal to the marginal density of the model of interest divided by the sum of the marginal densities of all models. The P-value was used as a measure of goodness-of-fit. In addition to these tests, we also compared the distribution of summary statistics of retained simulated datasets to the summary statistics of the observed dataset, to check that the observed dataset lay well within the distribution of simulations. We did this for distributions of both PLS components (all pairs of variables) and raw summary statistics (one variable at a time). The condition was satisfied for all reported models. To validate our model choice, we simulated 1000 (new) datasets from the original priors for each competing model and used these pseudo-observed datasets to test the robustness of discrimination between models (Fig. S2). To validate our parameter estimates, we used the 1000 pseudo-observed datasets that were generated under each model to check for uniformity of the posterior quantiles (Fig. S3).
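Two pieces of this machinery are simple enough to show schematically: the rejection step (Euclidean distance on PLS-transformed statistics, keeping the closest simulations) and the conversion of per-model marginal densities into a Bayes factor and posterior model probabilities. All numbers below are placeholders, and the ABC-GLM adjustment of ABCtoolbox is not reproduced.

```python
# Schematic ABC rejection and model-probability arithmetic; all values are
# random or arbitrary placeholders (the study used 10^6 simulations per model
# and the ABC-GLM adjustment, which is not reproduced here).
import numpy as np

rng = np.random.default_rng(2)
n_sims, n_pls = 100_000, 10
sim_pls = rng.normal(size=(n_sims, n_pls))   # stand-in PLS components per simulation
obs_pls = rng.normal(size=n_pls)             # stand-in observed PLS components

# Rejection: retain the closest 1% of simulations by Euclidean distance.
distances = np.linalg.norm(sim_pls - obs_pls, axis=1)
retained = np.argsort(distances)[: n_sims // 100]

# Model comparison: arbitrary marginal densities standing in for GLM output.
marginal = {"parallel": 3.2e-4, "old": 1.1e-6}
posterior = {m: d / sum(marginal.values()) for m, d in marginal.items()}
bayes_factor = marginal["parallel"] / marginal["old"]
print(f"retained {retained.size} sims; posterior {posterior}; BF = {bayes_factor:.0f}")
```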
Results
Morphometric analysis of L. saxatilis shell size and shape, from two sites in each of three regions, showed remarkable concordance in the direction of phenotypic differentiation between samples from crab- and wave-dominated habitats (Fig. 1). Despite some differences between regions, crab ecotype snails were consistently larger and had higher scores on the first shape axis (RW1; Table 1), representing a smaller aperture and higher spire than wave ecotype snails. Differentiation between ecotypes was most marked in Sweden and least marked in Britain. Previous studies, in both Sweden and Spain, indicate that the majority of the morphological difference between ecotypes is genetically determined, although there is a small contribution from developmental plasticity (Janson 1982; Johannesson and Johannesson 1996; Conde-Padín et al. 2009; Saura et al. 2012). We predicted that snails in the crab-exposed habitat would be attacked more often by crabs and also be more likely to survive attacks, resulting in a higher frequency of scarred shells. As expected, the crab ecotype had a higher proportion of snails with scars than the wave ecotype (42.2% vs. 20.0%, G = 21.6, df = 1, P < 0.001). This difference was also greatest in Sweden (Table 2). Mitochondrial DNA sequence data showed extensive sharing of haplotypes between British and Swedish localities, but not between these regions and Spain (Fig. S1; AMOVA by region: Φ_CT = 0.14, P = 0.037). The northern locality in Spain (Burela) showed much higher diversity than the southern locality (Silleiro), whereas all other localities showed diversity similar to Burela. In no case was there strong differentiation between ecotypes (Table S2), and ecotypes were not differentiated overall (AMOVA by ecotype: Φ_CT = 0). These patterns are consistent with previous observations (Quesada et al. 2007; Doellman et al. 2011; Panova et al. 2011) and suggest genetic isolation between northern European and Spanish populations as well as either a recent origin of ecotypes, or substantial gene flow between them within localities. Sequence data from introns in three single-copy nuclear genes (ElFac, Cal, ThioPer; 314, 376, and 341 bp, respectively) revealed lower overall diversity than mtDNA (Fig. S1). Neutrality tests did not reveal departures from expectations for any of the three introns in any locality. As for mtDNA, there was evidence for differentiation between regions but not between ecotypes (AMOVA by region: Φ_CT = 0.64, 0.16, and 0.19 for ElFac, Cal, and ThioPer, respectively, P ≤ 0.009; by ecotype: Φ_CT = 0 for all loci).

Table 1. Three-way ANOVA for the morphometric variables of centroid size (CS) and shape (the two leading relative warp axes, RW1 and RW2); the percentage of variance explained by each relative warp is given in parentheses. We checked for heteroscedasticity in the dependent variables; CS departed from expectation, so results for this variable should be treated with some extra caution.

After quality checking and removal of loci with poor repeatability, the AFLP dataset included 614 loci. We excluded 152 of these loci that showed evidence for an influence of divergent selection between ecotypes within any locality. Fewer of these outliers were observed in Britain, where morphological differentiation was also less marked than in the other regions. There was slightly greater sharing of outliers between localities and between regions than expected by chance (Table 4). As in previous analyses of British (Wilding et al. 2001) and Spanish (Galindo et al. 2009) populations, differentiation between ecotypes within localities was low (F_ST = 0-0.027) relative to differentiation among localities (F_ST = 0.021-0.134), and the highest genetic distances were between Spanish and northern European localities (F_ST = 0.107-0.134).
As for the other marker types, overall differentiation was strong among regions but not between ecotypes (AMOVA by region: F_CT = 0.132, P < 0.001; by ecotype: F_CT ≈ 0). There were some common patterns among the marker types, particularly the low differentiation between ecotypes compared with the separation among regions, but also many differences of detail (Fig. 2), as expected from the stochasticity of the underlying processes of mutation, drift, and gene flow. A key question is whether the low differentiation between ecotypes within localities is because of recent parallel origin of the ecotypes in situ in each region or locality, or to a single, older common origin whose genetic signal has been obscured by subsequent gene exchange. Ideally, data from all markers should be combined to answer this question. Therefore, we formalized the two alternative models (Fig. 3) and compared them using ABC. In the "parallel divergence" model, an ancestral population colonized multiple localities, which were connected by gene flow, and the ecotypes then diverged within localities, creating a partial barrier to gene flow whose effects may be detectable in neutral loci. We considered periods of allopatry (zero gene flow) between ecotypes within localities to be biologically implausible and so did not include them in this model. In the "old divergence" model, ecotypes diverged within the ancestral population before colonization of the sampled localities, potentially with an initial period of allopatry. First, we applied these models to the two sampled localities within each region (Fig. 3A). In each case, "ghost" populations (Beerli 2004) were included in the model to represent the many unsampled localities that make up the regional metapopulation. The parallel divergence model was strongly supported relative to the old divergence model for British and Spanish populations (posterior probabilities 0.9974 and 0.9965, respectively), but the opposite was true for the Swedish populations (posterior probability of 1.0000 in favor of the old divergence model). Nevertheless, a model where the period of allopatry was constrained to be zero had a higher posterior probability for the Swedish populations. Thus, there was evidence for continuous gene flow even where a single origin of the ecotypes was supported. We also considered sequence (nuclear + mtDNA) and AFLP data separately. Sequence data gave the same pattern as the combined data, but AFLPs supported the parallel divergence model for all regions, including Sweden. The preferred models (parallel for Britain and Spain, old divergence for Sweden) fitted the data well: observed summary statistics fitted the ABC regression estimation (P > 0.17; Table S3) and fell within the range of postrejection simulated values (both untransformed and PLS), and the distributions of posterior quantiles did not show strong departures from uniformity, indicating a lack of bias in parameter estimation (Wegmann et al. 2009). The alternative models both allow the possibility of gene flow connecting all populations for most of their history and, therefore, they are likely to be difficult to distinguish. Nevertheless, pseudosamples generated under the alternative models, using prior distributions of parameters, showed that discrimination between models was possible using the combined dataset, since for 54% of pseudosamples the correct model was supported with confidence (for 21% the wrong model was supported, and in the remainder neither model had posterior probability >0.95; Fig. S2; see Materials and Methods for details of model validation).

Parameter estimates for the within-region models using data for all markers had wide 95% highest posterior density intervals (Table S3). However, the median estimates from the parallel divergence model were consistent across regions in suggesting large effective population sizes (∼10⁴ locally and ∼10⁶ for the ghost populations), low migration rates between localities (m ∼ 10⁻⁵), and long times since the first population separation, ∼10⁵ generations (there are typically 2 generations per year). The parallel divergence models estimated the time since ecotype formation to be ∼10⁴ generations, that is, with long times between colonization and ecotype formation, and low migration between ecotypes (∼10⁻⁶ per generation). These estimates are referenced to a fixed mtDNA mutation rate (see Materials and Methods). To investigate ecotype origin at the regional level, we applied the same two models to the combined marker data for each of the possible between-region pairs of localities in turn (Fig. 3B). We did not include ghost populations because, in this case, we took the locality sample to be representative of its regional metapopulation. In 11 of 12 comparisons, the parallel divergence model was strongly supported (posterior probability >0.9996; Table S3). As for the within-region models, observed summary statistics fitted the ABC regression well (P = 0.097-0.282, the lowest value being for the Lysekil-Thornwick pair, for which the old divergence model was preferred). The robustness of discrimination between models was slightly greater in this case (probability of supporting the correct model 59% and the wrong model 17%; Fig. S2). In the parallel divergence model, local population sizes were again estimated to be ∼10⁴, except where the southern Spanish Silleiro site was involved, where estimates were greater (∼10⁵). These models allowed the possibility of population expansion associated with ecotype formation, but the posterior distributions did not support this (Table S3). Migration estimates between ecotypes were similar to the within-region estimates (∼10⁻⁶ per generation), but migration between sites in different regions was, as expected, estimated to be lower (10⁻⁶·⁵-10⁻⁸), with the lowest values involving the Silleiro site, which appears to be more distinct from the northern regions than the other Spanish site (Table S3). Averaging parameter estimates across comparisons (excluding Lysekil-Thornwick), the between-region models suggest separation of British and Swedish populations ∼6 × 10⁵ generations ago and separation of northern from Spanish populations ∼1.4 × 10⁶ generations ago, whereas ecotype separation is estimated to be much more recent (∼5 × 10³ generations from Britain-Sweden comparisons and ∼2.6 × 10⁴ generations from northern Europe-Spain comparisons), as for the within-region models (∼10⁴ generations).

Discussion
Our results show strikingly parallel phenotypic differentiation in L. saxatilis in response to contrasting crab and wave environments, both between localities within regions and among European regions. Our analyses of genetic data support the hypothesis that the ecotypes arose in parallel, without allopatric separation and after colonization of the different regions and localities, rather than divergence being old and predating colonization.
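The validation percentages quoted above come from simple bookkeeping over the pseudo-observed datasets: for each one, record whether the better-supported model exceeds posterior probability 0.95 and whether it is the generating model. A sketch with random stand-in posteriors:

```python
# Bookkeeping for the model-choice validation described above. The posterior
# probabilities here are random stand-ins, not real ABC output.
import numpy as np

rng = np.random.default_rng(3)
true_model = np.repeat([0, 1], 1000)                 # 1000 pseudo-datasets per model
p_model0 = rng.beta(2.0, 2.0, size=true_model.size)  # stand-in P(model 0 | data)

confident = np.maximum(p_model0, 1.0 - p_model0) > 0.95
chosen = (p_model0 < 0.5).astype(int)                # model with the higher posterior
correct = confident & (chosen == true_model)
wrong = confident & (chosen != true_model)
print(f"correct: {correct.mean():.0%}, wrong: {wrong.mean():.0%}, "
      f"undecided: {1.0 - confident.mean():.0%}")
```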
This support depends on the effect on neutral loci of the barriers to gene flow between populations in the contrasting environments that are associated with phenotypic differentiation. It remains to be seen whether the alleles underlying adaptive traits evolved in parallel. Our data confirmed previous observations, based on mtDNA, of greater differentiation between Spain and northern Europe than between Britain and Sweden, and of a large genetic distance between the two Spanish sites (Quesada et al. 2007; Doellman et al. 2011; Panova et al. 2011). Reid et al. (1996) used fossil and biogeographic information to provide estimates of evolutionary rates. Our parameter estimates depend on their mtDNA mutation rate, but we adjusted the mutation rate to fit the simple substitution model implemented in the ABC analyses. For this reason, and because the long-term rate may not be appropriate for the recent events described here (Charlesworth 2010), the inferred relative timings of events and population sizes may be more robust than absolute values. Relative values are consistent with the interpretation of postglacial colonization of British and Swedish shores from a common refuge or refugia (Doellman et al. 2011; Panova et al. 2011), distinct from the Spanish sites, because the estimated time of separation between the northern European regions and Spain was 2.2 or 7.4 times older than between Britain and Sweden (based on the between-region and all-region models, respectively). Two separate refugia in Spain can be inferred from the estimated time of separation between sites, which was about 10 times greater than in the northern regions. Our models gave little support to past population expansion, even suggesting a reduction in population size in Spain. Doellman et al. (2011) and Panova et al. (2011) found some evidence that L. saxatilis population sizes had expanded recently (∼10×) using their mtDNA markers. They inferred large population sizes and, as in our analyses, their IMa model fits implied gene flow between geographical regions. As in Panova et al. (2011), our between-region models implied about 10 times more gene flow between Britain and Sweden than between these regions and Spain. Overall, the evidence suggests that the distribution of L. saxatilis changed during the Pleistocene glacial cycles but that the effective population size remained consistently large, partly as a result of gene flow over large distances. Littorina saxatilis is cold-tolerant and now has a distribution extending into the Arctic. Therefore, its populations may not have been impacted as severely as some species by the glaciations, resulting in early colonization of northern European shores with limited population expansion and allowing long-term separation from southern European populations. Our analyses support the parallel divergence of ecotypes over the alternative old divergence model (Fig. 3); that is, they suggest that the crab and wave ecotypes arose separately in each region and locality, after colonization, rather than arising once before the geographical separation of populations. This is true both within and between regions. The confidence that can be placed on model comparisons in ABC has been questioned (Robert et al. 2011) because of the uncertainty introduced by the choice of summary statistics. Therefore, we used an information-rich set of summary statistics, derived from different marker types, and used all summary statistics for model comparison.
All preferred models fitted the observed data well, and in all cases the distributions of simulated summary statistics, after the rejection step, contained the observed summary statistics. Extensive recent gene flow may eradicate signals of past separation, making our alternative models intrinsically difficult to distinguish, at least for part of the parameter space. However, our model choice validation indicated that discrimination between the parallel and old divergence models was possible, with reasonable support. At least for between-region comparisons, the consistent evidence in favor of the parallel model (11 of 12 comparisons) is very unlikely to be a chance outcome. Under the parallel model, the separation of ecotypes was in every case estimated to be recent relative to the separation of populations in different localities (itself presumably reflecting patterns of colonization). In the within-region models, the time to ecotype separation was about 10% of the age of the local populations, whereas for between-region models it was an even smaller fraction (1-2%), as expected. If the actual time of colonization of British and Swedish coasts was after the most recent glacial retreat (∼10,000 years ago; Charbit et al. 2007), the relative age of ecotype separation implies a waiting time to ecotype formation of around 18,000 generations (9000 years). Absolute time estimates from the models imply even longer waiting times. This contrasts with simulation results for ecotype formation in Littorina (Sadedin et al. 2009), based on the characteristics of the Swedish populations, in which distinct morphs form rapidly (typically in <1000 generations). Models of ecological speciation generally have shorter waiting times to speciation than models that do not involve direct divergent selection, but waiting times are dependent on the supply of relevant mutations (Gavrilets 2004). In our demographic models, ecotype formation occurs instantaneously rather than gradually. This may be considered unrealistic, but intermediate levels of differentiation occur only briefly in the Sadedin et al. (2009) models. Barriers to gene flow at neutral loci are never strong in these models (F_ST reaches about 0.05), but those barriers that do arise also appear rapidly. Two possible explanations for ecotype formation occurring long after colonization of each locality deserve further investigation. The major predators considered important in selecting for the "crab" ecotype may have arrived in warming regions after the snails because they require higher minimum temperatures. Carcinus maenas (the predator in Britain and Sweden) has a current northern distribution limit well south of the northern limit of L. saxatilis, whereas Pachygrapsus marmoratus (the predator in Spain) is a relatively warm-water species. Alternatively, following local extinctions (e.g., due to toxic algal blooms; Johannesson and Johannesson 1995), populations may be reestablished by individuals that bring (neutral) alleles from source populations of both morphs. Our model fits for the time of separation of ecotypes may then reflect these recent events rather than the original ecotype formation. The parallel model involves an ancestral population that became divided spatially into a series of local populations, which exchanged migrants. Later, distinct habitat-associated populations were established within each of these local populations, still with gene exchange.
This scenario was clearly favored over the old divergence model for both British and Spanish populations, but the support was equivocal for Swedish populations. Biologically, what does support for the parallel model mean? First, it provides evidence against the origin of the ecotypes during a period of past allopatric separation. An allopatric period was also excluded in the one case of a within-region analysis that favored the old divergence model (Sweden). Thus, the available evidence strongly suggests that the crab and wave ecotypes of L. saxatilis were formed by divergent selection in the face of continuous gene flow. The contrasting result for Sweden may reflect more recent common ancestry of the spatially separated sites in that region as a result of postglacial colonization. The two Spanish sites appear to have a long separate history (as observed previously; Quesada et al. 2007), and the separation of the British sites may also be older than in Sweden (Panova et al. 2011). Parallel origin, as inferred here, relates to the demographic history of the populations. The inference that ecological barriers between ecotypes developed after geographic barriers between localities does not require that the alleles implicated in the formation of ecological barriers originated independently in each locality. Locally adaptive alleles may have risen in frequency from standing variation present in all founding populations, or may have spread among populations at a later date. Thus, of the options presented by Johannesson et al. (2010) for the origin of parallel local adaptation, the single-origin alternative (scenario A) can be excluded, but the different genetic pathways to parallel local adaptation in the presence of gene exchange (B1-B3) cannot easily be separated. The inference of a long lag after colonization before formation of ecotypes argues against an origin from standing variation, which is likely to be rapid (Barrett and Schluter 2008). However, ongoing gene exchange, even among regions, suggests that independent origins of either the same or different alleles are less likely than the sharing of variation ancestrally or via concerted adaptation, which is similar to the process described by Morjan and Rieseberg (2004) and the "transporter hypothesis" of Schluter and Conte (2009). The observed sharing of a few outlier AFLP loci hints at a contribution from concerted adaptation. Data for the arginine kinase locus in a related species, Littorina fabalis, also point in this direction (Kemppainen et al. 2011). Further study of loci influenced by divergent selection (Wilding et al. 2001; Wood et al. 2008; Galindo et al. 2009, 2010) should provide tests of these predictions, although distinguishing among the alternatives may be impractical if divergence depends on many loci of small effect. Sambatti et al. (2012) have recently compared direct and indirect estimates of gene flow between sunflower (Helianthus) species. Following Strasburg and Rieseberg (2008), they emphasize that low levels of genetic differentiation, implying high Nm, may reflect large population sizes rather than high gene exchange (m). This is important because divergence under selection requires s > m, whereas divergence under drift requires low Nm. Wood et al. (2008) considered this issue in relation to estimates of F_ST for loci putatively under divergent selection between crab- and wave-exposed habitats in Britain.
They found that the estimated strength of selection was too high to be compatible with the apparently very small genomic regions of elevated differentiation. Using estimates obtained here (m ∼ 10⁻⁶ between morphs) implies much weaker selection on the outlier loci (s ∼ 10⁻³), which is more consistent with the observed genomic pattern of differentiation. Speciation is typically a protracted process during which many changes in geographic distribution, population size, and opportunity for gene flow are likely to occur (Abbott et al. 2013). Gene exchange at later stages may easily obscure the signatures of events occurring earlier in the process, particularly in neutral loci (Via 2009; Bierne et al. 2013). Current methods for inferring past patterns of gene exchange have serious limitations (Strasburg and Rieseberg 2013), including the uncertainty inherent in interpreting the results of fitting models that are not accurate reflections of the true history, because of the inevitable need for simplification (Becquet and Przeworski 2009). ABC approaches have greater flexibility than many other methods (Beaumont 2010), allowing us, in this case, to combine information from multiple marker types, to tailor demographic models to our knowledge of the study species, and to focus on the specific issue of parallel origin. Nevertheless, they are not free from these very general reservations about historical reconstructions. We conclude that the L. saxatilis ecotypes most likely diverged in the presence of gene flow and are certainly now maintained despite gene flow. The ABC analyses, combining information from multiple markers of different types, suggest that the ecotypes have originated repeatedly in different localities. This provides a firm foundation for understanding the genetic basis of divergent adaptation and the nature of other barriers that impact on patterns of gene flow across the genome.

DATA ARCHIVING
The doi for our data is 10.5061/dryad.m186r.

Supporting Information
Additional Supporting Information may be found in the online version of this article at the publisher's website:
Figure S1. Haplotype networks derived from mtDNA and nuclear sequence data.
Figure S2. Logistic regressions for model validation.
Figure S3. Posterior quantile analyses used in model checking.
Table S1. Primers and annealing temperatures.
Table S2. Summary statistics for all datasets, as used in the ABC analyses.
Table S3. Parameter estimates from the ABC models.
A disposable and cost-effective electrochemical DNA sensor using a nanocomposite-modified screen-printed gold electrode

This research involved the preparation of an electrochemical biosensor using a disposable screen-printed gold electrode (SPGE) for the detection of DNA hybridization. The electrochemical DNA biosensor was successfully fabricated based on a DNA probe tagged with methylene blue (MB) as a redox hybridization indicator, immobilized on the nanocomposite-modified electrode. The modified SPGE was characterized using cyclic voltammetry (CV) and scanning electron microscopy (SEM) with energy-dispersive X-ray spectroscopy (EDS). The current signal of target DNA hybridization was monitored using differential pulse voltammetry (DPV). The DNA biosensor showed a good current response over a complementary target DNA concentration range from 1.0 × 10⁻¹¹ to 1.0 × 10⁻⁷ M. The fabricated genosensor could also be regenerated easily and reused 36 times in hybridization studies.

Introduction
An affinity DNA biosensor couples a biological recognition element, a DNA or PNA probe, to a transducer. Electrochemical DNA biosensors have been widely reported in many fields owing to their high sensitivity, specificity, ease of use, low cost, compatibility with microfabrication, and direct conversion of hybridization events into an electrical signal. In this technique, the stability of the immobilized single-stranded probe on the electrode surface, as well as its accessibility toward the target DNA, plays an important role in the performance of the DNA sensor [1,2]. Moreover, nanomaterials have accelerated the performance of electrochemical applications by improving biocompatibility and enhancing electron transfer, so that an enhanced signal can be achieved [3]. Therefore, a nanomaterial-modified electrode surface can enhance the signal owing to its high surface area and strong adsorption ability. In recent years, the screen-printed gold electrode (SPGE) has been widely used and has challenged the conventional three-electrode system, which consists of a reference electrode, a counter electrode, and a working electrode. The advantages of the SPGE include ease of operation, simple fabrication, low cost, small size, disposability, reusability, and easy mass production, leading to its adoption in electrochemical DNA biosensors. The main advantage associated with the miniaturization of electrochemical DNA sensors is the reduction of the sample volume required, to as low as a few microliters [4]. The surface of the SPGE can be easily modified for many analytes. This versatility, its miniaturized size, and the possibility of connecting it to portable instrumentation make highly specific on-site determination of target DNA possible. Furthermore, SPGEs avoid common problems of conventional electrodes [5] and have been applied, for example, to the detection of swine flu (H1N1) infection in humans [6] and of heavy metals [7]. In this paper, we describe the development of a sensitive, cost-effective, fast-response, and accurate electrochemical DNA biosensor using an SPGE modified with a polyaniline-graphene-silver (PANI-Grap-Ag) nanocomposite for the detection of DNA(MB)-DNA hybridization. A DNA probe tagged with methylene blue (MB) as an electrochemical indicator was immobilized on the modified SPGE for hybridization with the complementary target DNA. The signal generated is measured using differential pulse voltammetry (DPV).
The fabricated DNA sensor was interrogated using an electrochemical transducer, with scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDS) as characterization techniques.

Theoretical Background
The DNA biosensor is one method that has attracted much attention for DNA hybridization detection [8]. The sensor is generally composed of ssDNA probes immobilized on a transducer surface that are able to form a duplex with the target DNAs. The hybridization event is then converted into a measurable signal by a transducer, as shown in Fig 1. The detection of DNA hybridization in a biosensor can be either direct or indirect [9]. In the indirect approach, after the hybridization event the signal is determined from indicator molecules such as enzymes, electroactive compounds, or nanoparticles. The indicator is tagged either directly on the target DNAs prior to hybridization (competitive method) or on a secondary target after hybridization on the sensing surface (sandwich method). The tagged DNA probe or target DNA can then generate a hybridization signal that can be used to obtain the amount of DNA. In this approach, the signal generation is extremely sensitive, but it requires several steps, is expensive and time-consuming, and real-time measurement is not possible. In contrast, in a direct detection approach the signal comes from the change of the physical properties at the electrode-solution interface. Therefore, the technique is more attractive since it can provide a fast response and low cost, and can be monitored in real time. In DNA hybridization sensors, DNA probes are typically short oligonucleotides that are able to hybridize with the specific (complementary) target DNA sequence to form a double-stranded hybrid [10]. A longer probe often exhibits unfavorable hybridization specificity due to intramolecular hydrogen bonding and the consequent formation of a non-reactive hairpin structure. End-labels, such as thiols, disulfides, amines, or biotin, are incorporated to immobilize the DNA probe on the electrode surface. A long, flexible hydrocarbon spacer is usually added to provide sufficient accessibility for surface attachment. Although DNA probes are usually used in DNA biosensors, there are still some issues with specificity, sensitivity, and stability under various conditions [11]. Electrochemical detection of DNA hybridization is one strategy being explored because an electrochemical device provides high sensitivity and rapid response, is easy to use, is low cost, and can be miniaturized. The transduction relies on the conversion of a base-pair recognition event into a useful electrical signal. Electrochemical methods for direct DNA detection include voltammetry, impedance spectroscopy, and capacitance measurement. The common principle of voltammetric measurement involves the application of a potential to an electrode and the monitoring of the current flowing through the cell. It is considered an active technique because the applied potential forces a change in the concentration of an electroactive species at the electrode surface by electrochemically reducing or oxidizing it. In this work, the transducer converting the chemical recognition event into an electrical signal is pulse voltammetry. This technique measures the current while applying pulsed changes to the working-electrode potential. The most extensively used method of pulse voltammetry is differential pulse voltammetry (DPV), which is based on the application of successive double potential pulses [12], as shown in Fig. 2. In this technique, two potential pulses of amplitude E₁ and E₂ and length t₁ and t₂, respectively, are first applied, with t₁ >> t₂ and ΔE = E₂ − E₁ (Fig. 2(a)-(b)). The potential is scanned in the negative (ΔE < 0) or positive (ΔE > 0) direction in such a way that a delay between each pair of pulses is introduced in order for the equilibrium to be re-established. In this potentiostatic technique, the difference current response I_DPV, or ΔI = I₂(t₁ + t₂) − I₁(t₁), is plotted versus E and is referred to as the differential pulse voltammogram [13] (Fig. 2(c)-(d)). That is, the difference between the current measurements at these two points for each pulse is determined and plotted against the base potential. The resulting voltammogram consists of a current peak, and the height of the peak is directly proportional to the concentration of the analyte.
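To make the pulse scheme concrete, the sketch below builds the staircase of base potentials and evaluates ΔI = I₂(t₁ + t₂) − I₁(t₁) for a toy sigmoidal current model. It is a cartoon of the waveform arithmetic only, not a physical simulation of the MB redox couple; the half-wave potential and pulse values are illustrative.

```python
# Cartoon of the DPV difference current dI = I2(t1 + t2) - I1(t1) over a
# staircase of base potentials. The current model is a toy sigmoid, not a
# physical simulation; E_half and the pulse values are illustrative only.
import numpy as np

E_start, E_end = -1.2, -0.3      # V (the window used later in this work)
step_potential = 0.005           # V
pulse_amplitude = 0.060          # V
base_E = np.arange(E_start, E_end, step_potential)

def toy_current(E, E_half=-0.75, width=0.03):
    """Toy faradaic response centred on an assumed half-wave potential."""
    return 1.0 / (1.0 + np.exp(-(E - E_half) / width))

I1 = toy_current(base_E)                      # sampled just before the pulse (t1)
I2 = toy_current(base_E + pulse_amplitude)    # sampled at the pulse end (t1 + t2)
dI = I2 - I1                                  # the differential pulse voltammogram

print(f"toy peak at E = {base_E[np.argmax(dI)]:.3f} V")
```

The peak of ΔI sits near the half-wave potential of the toy response, which is the sense in which the peak height tracks the analyte concentration.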
Materials and Methods
Materials
The sequence of the 12-base DNA probe tagged with methylene blue is 5′-MB-TTT TTT TTT TTT-NH₂-3′; it was purified by reverse-phase HPLC and its identity was verified by MALDI-TOF mass spectrometry. The synthetic complementary target DNA with 12 bases (5′-AAA AAA AAA AAA-3′) was purchased from the Bioservice Unit, National Science and Technology Development Agency, and BioDesign Co., Ltd., Thailand. The blocking thiol of 11-carbon chain length, 11-mercapto-1-undecanol (11-MUL), was purchased from Aldrich (Steinheim, Germany). Silver nitrate was purchased from Aldrich (Steinheim, Germany). Graphene nanosheets (4-5 layers, thickness of 10 nm, surface area 500-800 m² g⁻¹, particle diameter 3 µm) were obtained from Cheap Tubes Inc (Brattleboro, USA). Aniline solution was purchased from Merck (Germany), purified using the normal distillation method, and kept at 4°C in the refrigerator. All aqueous solutions were prepared with analytical-reagent-grade chemicals and de-ionized water (Milli-Q, Merck). The SPGE consists of a gold working electrode (4 mm diameter), a Pt counter electrode, and an Ag/AgCl reference electrode.

SPGE pre-treatment
The electrochemical pre-treatment of the SPGE was carried out by applying a potential of 1.2 V in saturated Na₂CO₃ at a scan rate of 5 mV s⁻¹ for 600 s. After the activation, the SPGE strip was rinsed with de-ionized water and placed in an electrochemical cell for voltammetric measurement.

PANI-Grap-Ag nanocomposite-modified SPGE
For a PANI-Grap-Ag nanocomposite-modified gold surface, 2.0 mg mL⁻¹ graphene and 0.20 M AgNO₃ with 0.10 M aniline aqueous solution were added to the electrodeposition solution (0.50 M H₂SO₄) and mixed with 0.25 M polyacrylic acid (PAA) to obtain better stability with improved polymer properties [14]. The electrodeposition was performed by cyclic voltammetry for 10 scans over the potential range from -0.4 to 1.0 V vs. Ag/AgCl at a scan rate of 50 mV s⁻¹.

Immobilization of the DNA-MB probe
The PANI-Grap-Ag-coated SPGE was cleaned by rinsing with distilled water three times and treated with 5.0% (v/v) glutaraldehyde in 10 mM phosphate buffer, pH 7.00, at room temperature for 20 min to activate the aldehyde groups. Then 20 µL of 5.0 µM DNA-MB probe was placed on the modified electrode for 24 h in the refrigerator (4°C).
Finally, the immobilized SPGE was immersed in 1.0 mM 11-mercapto-1-undecanol (11-MUL) solution for 1 h to block any remaining pinholes, hence preventing any non-specific binding on the electrode surface.

Surface morphology characterization
The surface morphology of the PANI-Grap-Ag nanomaterial-modified SPGE was characterized using SEM and EDS. The SEM images and EDS spectra were acquired with a JSM 5800 Quanta from JEOL, Japan.

Electrochemical measurement
The hybridization behavior was studied using the three-electrode system of the SPGE, connected to a 910 PSTAT Mini (Metrohm Applikon, Utrecht, The Netherlands) controlled by PSTAT software version 1.1. The hybridization response was the decrease of the oxidation peak of the electrochemical indicator MB (tagged to the DNA probe), detected using DPV. The DPV was operated from -1.2 to -0.3 V, with a scan rate of 50 mV s⁻¹, a step width of 100 ms, a step potential of 5.0 mV, and a pulse width and pulse amplitude of 60 ms and 60 mV, respectively. The DPV was performed in a batch vessel containing 100 mM sodium phosphate buffer, pH 7.00, with 100 mM potassium chloride.
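These settings fix the number of potential steps and the nominal scan time; the arithmetic below is only a back-of-envelope check. Instrument scheduling on the 910 PSTAT Mini may differ, and the 60 ms reading of the pulse width is an assumption carried over from the text above.

```python
# Back-of-envelope arithmetic implied by the DPV settings above. Instrument
# scheduling may differ; the 60 ms pulse-width reading is an assumption.
E_start, E_end = -1.2, -0.3   # V
step_potential = 0.005        # V
step_width = 0.100            # s
pulse_width = 0.060           # s (assumed)

n_steps = round((E_end - E_start) / step_potential)
scan_time = n_steps * (step_width + pulse_width)
print(f"{n_steps} potential steps, ~{scan_time:.0f} s per scan")
```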
Results and Discussion
Pre-treatment of the SPGE
SPGEs are often preconditioned by applying an anodic potential in an electrolyte solution (saturated Na₂CO₃) to enhance their electrochemical activity. Under the appropriate electrochemical pre-treatment condition for the SPGE, we studied the cyclic voltammetric behavior using a redox system. As shown in Fig 3, after pre-treatment the SPGE exhibited discernible redox peaks for the potassium hexacyanoferrate(III)/(II) system. These conditions therefore improved the electrochemical activity of the SPGE, and the activation procedure in Na₂CO₃ solution resulted in good electrochemical characteristics. The regenerated gold surface showed a voltammogram with oxidation and reduction peaks (Fig 5(a)). Both peaks increased when the PANI-Grap-Ag nanocomposite was deposited onto the gold surface (Fig 5(b)), indicating that the PANI-Grap-Ag helped to increase the electrical conductivity. When 5.0% (v/v) glutaraldehyde in 10 mM sodium phosphate buffer, pH 7.00, was used to activate the covalent bonding between the amine group of the DNA-MB probe and the free amine group of PANI at room temperature for 20 min, the redox peaks of the electrode decreased (Fig 5(c)). The response was further reduced when the DNA-MB probes were immobilized (Fig 5(d)). The modified electrode surface was then reacted with ethanolamine, pH 8.50, to occupy all the remaining aldehyde groups of glutaraldehyde that were not bound to the probes. Finally, the PANI-Grap-Ag nanocomposite-modified SPGE was rinsed with 100 mM phosphate buffer, pH 7.00, and then immersed in 1.0 mM 11-MUL solution for 60 min to cover any pinholes on the electrode surface. The cyclic voltammogram showed complete blockage of the redox species (Fig 5(e)).

Surface morphology with SEM and EDX
The PANI-Grap-Ag nanocomposite modified on the SPGE was characterized using SEM and EDX. Figure 6 shows the morphology of the PANI-Grap-Ag nanocomposite: graphene sheets were seen embedded within the PANI nanofibers, with silver nanoparticles decorating the PANI nanofibers. The PANI film has a fibrous network structure with fiber diameters of 40-70 nm, as measured from an SEM image using an electronic digital caliper. Silver nanoparticles were decorated on the surface of the PANI nanofibers with particle sizes of 50-90 nm. The EDX spectrum revealed the peak of silver (Ag), confirming the presence of elemental silver decorating the PANI nanofibers.

Reusability
The reusability of the PANI-Grap-Ag-modified SPGE was tested by analyzing the same concentration of target DNA (1 × 10⁻⁹ M). After hybridization, a regeneration step (20 µL of 0.05 M sodium hydroxide with a 30 min incubation time) was included in the analysis cycle. The residual activity (%) of the immobilized DNA-MB probe toward its target after regeneration was calculated and plotted against the number of hybridizations. As shown in Figure 7, over 36 hybridizations between the DNA-MB probe and the target DNA, the average residual activity was 95 ± 3% (RSD = 4%). After 37 regeneration cycles, the average residual activity reduced to 90%. The gold electrode surface was then tested by cyclic voltammetry. A flat voltammogram, similar to the one obtained after electrode preparation, was observed. This confirmed that the film on the electrode surface was not destroyed by the regeneration solution. The results indicate a gradual decrease of residual activity after repeated use.

Hybridization study between the DNA-MB probe and synthetic complementary DNA
Under hybridization, the current response from the electron transfer of MB was studied for DNA detection. The oxidation peak current from electron transfer of MB to the electrode surface was measured using DPV. The batch-system conditions were a complementary target DNA volume of 20 µL in PBS, pH 7.00, with 30 min for the hybridization event. Regeneration (dissociation of the DNA duplex) was carried out using 50 mM NaOH for 10 min. The DPV response between potential (V) and current (µA) obtained for five concentrations of the complementary target DNA from 1.0 × 10⁻¹¹ to 1.0 × 10⁻⁷ M is shown in Fig 8. In the absence of the synthetic target DNA, the single-stranded DNA-MB probe can come close to the electrode surface, so electron transfer between the MB and the electrode occurs easily, providing a high current. After hybridization, the hybrids between the probes and the target DNAs made the probe structure more rigid. Therefore, the MB at the end of the DNA probe moved further away from the electrode, resulting in a decrease of the response. Thus, a higher concentration of target reduces the signal further, showing that the changes in DNA concentration and the voltammograms are related.

Conclusions
This work shows a successfully developed PANI-Grap-Ag nanocomposite-modified disposable SPGE for hybridization detection. The SPGE was initially pre-treated with saturated sodium carbonate to enhance its electrochemical activity. From the cyclic voltammetry study, the nanocomposite-modified electrode provides a larger current than the bare SPGE. The DNA-MB probe immobilization and hybridization on the SPGE were investigated using the DPV method. This sensor is a simple and cost-effective electrochemical biosensor for the detection of DNA. Our ongoing research will focus on using other conducting materials to enhance the electrochemical signal, for example, using electroactive natural products to modify the SPE.
Optimization of pancreatic islet isolation from rat and evaluation of the islet-protective potential of a saponin isolated from fruits of Momordica dioica

Pancreatic islet β-cell destruction is prominent in type I diabetes mellitus, and there may be no better drug than one that can stimulate the regeneration or protection of islets. The objective of this study was to isolate islets and evaluate the islet-protective potential of a saponin. The extraction and isolation of the saponin of Momordica dioica (SMD) were carried out, and purification was achieved through chromatographic fractionation, which yielded a pure saponin that was characterized by high-performance liquid chromatography, liquid chromatography-mass spectrometry, Fourier-transform infrared spectroscopy, and nuclear magnetic resonance. The method for the isolation of rat pancreatic islets was optimized, and islet viability, functionality, insulin secretion, and intra-islet contents were determined; the islet-protective properties were also assessed by insulin assay. The optimum method was found to be pancreas mincing and collagenase type XI digestion followed by cell straining (500 µm), Ficoll gradient centrifugation, and cell straining (70 µm). Glucose-stimulated insulin secretion showed that the islets secreted insulin in a dose-dependent manner with respect to the different concentrations of glucose compared with their respective groups, indicating their functionality. The MDA and NO results under STZ and high-glucose conditions help establish the β-cell-protective activity of the saponin of Momordica dioica. All of these results are promising signs for diabetes patients.

INTRODUCTION
Diabetes is a group of metabolic diseases in which a person has high blood sugar, either because the pancreas does not produce enough insulin or because cells do not respond to the insulin that is produced (David & Dolores, 2011). The WHO has predicted that the major burden will occur in developing countries. Studies conducted in India in the past decade have highlighted that not only is there a high prevalence of diabetes, but it is also increasing rapidly in the urban population. It is estimated that there are approximately 33 million adults with diabetes in India. This number is likely to increase to 57.2 million by the year 2025 (Manisha et al., 2007). Beta-cell destruction in type I diabetes mellitus (DM) is prominent and leads to insulin deficiency. Glucose metabolism in the body is affected; glucose accumulates and gives rise to multiple complications. In patients with DM, years of poorly controlled hyperglycemia lead to multiple, primarily vascular complications that affect small vessels (microvascular), large vessels (macrovascular), or both. Pancreatic islets are thought to play a key role in the pathophysiology of diabetes through the failure of islet beta cells to secrete sufficient quantities of insulin to regulate blood glucose and are, therefore, a key focus of diabetes research (Donath & Halban, 2004). The use of natural products in modern medicine, even though widespread in curing or preventing diseases, lacks scientific evidence in most cases as to whether the whole plant or its active constituents should be used (Bhonde et al., 1999; Dittrich & Dorsche, 1978; Singh et al., 2000). Several drugs have been discovered and are in use that either increase insulin secretion or increase the utilization of glucose by peripheral tissues. However, to date, no drug has been discovered that can regenerate or protect the islets.
Alternative regenerative options, such as the use of stem cells, exist, but their clinical application is not validated. Momordica dioica is as potent as other natural drugs, as already reported in various scientific papers; the antidiabetic activity of a saponin fraction isolated from the fruits of M. dioica showed a reduced glucose level in alloxan-induced diabetic rats and enhanced insulin sensitivity (Firdous et al., 2009). A steroidal saponin showed an improvement in lipid profile, lowered HbA1c levels, increased serum insulin, and reversed beta-cell degeneration in vivo. The antidiabetic and insulin secretagogue activity of M. dioica has been reported earlier, but no study has been done so far on islet protection in vitro. There may be no better drug than one that can stimulate the regeneration of islets containing insulin-producing cells, as this could take diabetics off antidiabetic drugs. The isolated phytoconstituents of the plants, if found promising in ameliorating the severity of diabetes, can be further exploited for the betterment of mankind. This study focused on the optimization of the most effective method for the isolation of islets from rat pancreas and explored the influence of the saponin of M. dioica (SMD) on the isolated islets in various simulated diabetic conditions (Fig. 1).

Saponin isolation and characterization
The extraction and isolation of saponin from the fruits of M. dioica were already done and reported in the previous paper. The pure saponin was further studied and characterized by spectroscopic techniques such as high-performance liquid chromatography (HPLC), liquid chromatography-mass spectrometry (LCMS), Fourier-transform infrared (FT-IR), and nuclear magnetic resonance (¹H-NMR) spectroscopy.

HPLC of SMD
The sample was diluted 100 times with 5% acetonitrile (ACN) in water. A Dionex UltiMate 3000 was used as the LC system. Buffer A was 0.1% formic acid in water and buffer B was 0.1% formic acid in ACN. The flow rate was 500 µl/minute. The gradient was as follows: 0-10 minutes, increase from 5% B to 98% B; 10-20 minutes, hold at 98% B; and finally return to 5% B (a small sketch of this gradient program follows this section). A Hypersil GOLD C18 (Thermo Fisher Scientific, Waltham, MA) column was used for the separation (150 mm × 4.6 mm, particle size of 3 µm). The column was kept at 35°C. UV detection was carried out at 280 nm.

LCMS of SMD
A Bruker Impact HD QTOF mass spectrometer equipped with an electrospray ionization source was used in this LCMS experiment. Bruker otofControl software (version 3.3, build 18) was used to operate the mass spectrometer and for data analysis.

FTIR of SMD
A Bruker Alpha series instrument was used for the FTIR measurements; the instrument is available in the Analytical Laboratory, Karnataka College of Pharmacy, Bangalore.

¹H-NMR of SMD
NMR spectra were recorded on a Bruker-AV-400 NMR spectrometer at room temperature in MeOD and dimethyl sulfoxide (DMSO), respectively, with tetramethylsilane (TMS) acting as an internal standard. Chemical shifts (δ) were expressed in parts per million (ppm) with coupling constants (J) in Hertz (Hz).
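The gradient program above can be written as a (time, %B) table with linear interpolation between breakpoints; the sketch below does this. The re-equilibration step after 20 minutes is not timed in the text, so the 20-21 minute return to 5% B is an assumption.

```python
# The HPLC gradient as a (time, %B) table with linear interpolation. The
# 20->21 min return to 5% B is an assumed duration; the text gives none.
import numpy as np

gradient = [(0.0, 5.0), (10.0, 98.0), (20.0, 98.0), (21.0, 5.0)]  # (min, %B)
times, percent_b = zip(*gradient)

def percent_b_at(t_min: float) -> float:
    """%B (0.1% formic acid in ACN) at time t_min, by linear interpolation."""
    return float(np.interp(t_min, times, percent_b))

for t in (0, 5, 10, 15, 20):
    print(f"t = {t:4.1f} min -> {percent_b_at(t):5.1f}% B (flow 500 uL/min)")
```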
Animals
Wistar male rats weighing 120-150 g were used for the experiment. The entire study was conducted in accordance with the guidelines of the Committee for the Purpose of Control and Supervision of Experiments on Animals (CPCSEA). All the experiments conducted on the animals were in accordance with the standards set for laboratory animal use, and the experimental protocols were duly approved by the Institutional Animal Ethics Committee (IAEC) of Karnataka College of Pharmacy, Bangalore (Ref. no: KCP-IAEC2/17-18/15-04-07).

Optimization of islet isolation and culture
The following methods were employed for the optimization of the isolation of rat pancreatic islets:
(A) Pancreas mincing and collagenase type IV digestion
(B) Pancreatic perfusion of collagenase type XI via the common bile duct (CBD) post-duodenal occlusion
(C) Pancreas mincing and collagenase type XI digestion followed by cell straining and Ficoll gradient centrifugation
(D) Pancreas mincing and collagenase type XI digestion followed by cell straining (500 µm), Ficoll gradient centrifugation, and cell straining (70 µm) (Graham et al., 2016; Samaddar et al., 2019).

Pancreas mincing and collagenase type IV digestion
The rat was sacrificed with a high dose of pentobarbital sodium i.p. The pancreas was isolated and then transferred to a sterile polypropylene Petri plate containing sterile Hank's balanced salt solution (HBSS) with 20 mM HEPES (N-2-hydroxyethylpiperazine-N-ethanesulfonic acid), 2 mM CaCl₂·2H₂O, and penicillin-streptomycin-amphotericin B (100 IU/ml, 100 µg/ml, and 2.5 µg/ml, respectively) solution. The superficial fatty tissues and blood clots were removed by an HBSS wash, and mincing of the pancreas was followed by three HBSS washes. The minced tissue mass was transferred to a 50 ml sterile conical-bottom centrifuge tube containing 5 ml of collagenase type IV (1 mg/ml; HiMedia, Mumbai) in Roswell Park Memorial Institute-1640 (RPMI 1640) medium. The tube was incubated at 37°C in a water bath for 20 minutes with occasional shaking. After 20 minutes, the digestion was stopped by plunging the tube into ice and adding 10% fetal bovine serum (FBS). The tube was centrifuged at 1,000 rpm for 10 minutes. The cell pellet was resuspended in warm RPMI with 10% FBS and seeded in a T25 culture flask (Nunc, Denmark). The islets were incubated at 37°C in 5% CO₂ in a CO₂ incubator (Forma, Thermo Scientific, Waltham, MA). Healthy islets (with smooth borders and no dark center) were handpicked using a 200-µl micropipette under the microscope and placed into a Petri plate containing fresh RPMI 1640 medium after 48 hours. These islets were counted and further evaluated for their functionality and viability.

Pancreatic perfusion of collagenase type XI via the CBD post-duodenal occlusion
The rat was sacrificed with a high dose of pentobarbital sodium i.p., and an incision was made around the upper abdomen to expose the peritoneum. The liver was flipped over onto a tissue paper, which was folded over to cover it. Using curved forceps, the duodenum was located. The confluence where the CBD enters the duodenum was located and clamped with hemostatic forceps to prevent the emptying of collagenase solution into the duodenum. Gently, 3 ml of ice-cold collagenase type XI (Sigma, St. Louis, MO) solution (1 mg/ml) in HBSS was injected through the CBD into the pancreas. The inflated pancreas was gently dissected and placed in a 50-ml conical centrifuge tube containing 2 ml of collagenase type XI solution. The tube was placed in a water bath at 37°C for 15 minutes and briefly shaken 2-3 times by hand during the incubation.
After digestion, the tube was removed from the water bath and plunged immediately into ice to stop collagenase digestion, and 15 ml of fresh ice-cold HBSS was added to stop the digestion process completely. The HBSS was removed by centrifuging the tube at 1,000 rpm for 10 minutes, and the cell pellet was then resuspended in warm RPMI with 10% FBS and seeded in a T25 culture flask (Nunc, Denmark). The islets were incubated at 37°C in 5% CO2 in a CO2 incubator (Forma, Thermo Scientific, Waltham, MA). After 48 hours, healthy islets (with smooth borders and no dark center) were handpicked using a 200-µl micropipette under the microscope and placed into a Petri plate containing fresh RPMI 1640 medium. These islets were counted and further evaluated for their functionality and viability.

Pancreas mincing and collagenase type XI digestion followed by cell straining and Ficoll gradient centrifugation
Pancreatic digestion was carried out similarly to the previous method in a water bath, with a slight modification of the digestion medium: bovine serum albumin fraction V (Sigma, St. Louis, MO) was added to a final concentration of 2%. A 100-μl aliquot of the digestion mixture was withdrawn at regular intervals (5 minutes) and observed under the microscope to monitor the digestion process (Jacqueline et al., 2009). The complete detachment of exocrine tissues from the islets indicated the completion of the digestion process. At this point, the digestion was stopped, and the mixture was centrifuged at 1,000 rpm for 10 minutes. The supernatant was removed, and the cell mass was resuspended in 10 ml of warm RPMI. The digested pancreas was filtered through a sterile stainless steel mesh (500-μm pore size). The mesh was washed with 5-10 ml of additional cold RPMI to wash down any residual digested tissue, leaving behind the undigested tissue. The tube was centrifuged, the supernatant removed, and the digested pancreas resuspended gently in 10 ml of Histopaque® 1077 (Sigma, St. Louis, MO) at room temperature. This was then overlaid extremely gently with 5 ml of RPMI 1640 at room temperature, forming a clear and sharp interface between the two liquids. The tube was centrifuged at 850 × g for 20 minutes with brakes off. The islets gather just beneath the interface after centrifugation. The islets were gently removed from the gradient and transferred to a tube containing RPMI. The medium was removed by centrifugation to wash off the Histopaque, and fresh, warm complete RPMI 1640 containing 10% FBS was added to the islets. The islets were resuspended, seeded into a Petri plate, and incubated at 37°C in 5% CO2 for 48 hours, followed by handpicking.

Pancreas mincing and collagenase type XI digestion followed by cell straining (500 μm), Ficoll gradient centrifugation, and cell straining (70 μm)
After density gradient centrifugation (in the previous step), the islets obtained were passed through a prewetted, inverted polypropylene 70-μm cell strainer (Cat. No. CLS431751; Corning, Corning, NY). The strainer was rewashed with fresh medium and turned upside down over a new Petri dish containing 15 ml of fresh RPMI to rinse off the captured islets. This method eliminates the exocrine cells, leaving behind islets only. The islets were incubated at 37°C in 5% CO2 for 48 hours and handpicked postincubation.
Assessment of islet viability and specificity
Islet viability was assessed by the trypan blue dye exclusion test, and the specificity of islets was determined by dithizone (DTZ) staining (Sigma, St. Louis, MO). In the trypan blue dye exclusion assay, islets were exposed to the membrane-impermeant dye trypan blue (0.1% w/v) for 15 minutes at 37°C. Dead and membrane-compromised cells took up the dye and appeared blue, while healthy viable cells with intact membranes appeared colorless. For specificity, a DTZ stock solution (39 mmol/l) was prepared by dissolving 100 mg of DTZ in 10 ml of DMSO; the solution was filtered, aliquoted, and stored at −20°C. Routine staining was carried out by adding 10 μl of DTZ stock to islets suspended in 1 ml of Krebs-Ringer bicarbonate (KRB) buffer (pH 7.4) with 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES) (10 mmol/l) and incubating at 37°C for 30 minutes.

Glucose-stimulated insulin secretion (GSIS) assay
The isolated islets were cultured at 37°C in a humidified atmosphere of 5% CO2 in air in RPMI 1640 medium containing 11.1 mM glucose, 10% FBS, and antibiotics (50,000 IU/l penicillin and streptomycin). The islets were seeded at a concentration of 50 islets per well in 12-well plates (Corning, Corning, NY), washed thrice with KRB buffer, and preincubated for 1 hour at 37°C. Grouping of cells: the islets were incubated for 1 hour at 37°C, and aliquots of 10 µl were withdrawn from each well and assayed for insulin. A total of 50 islets in triplicate (n = 3) were treated in the following manner:
Group 1: Glucose-free control - islets maintained in glucose-free KRB
Group 2: Normal glucose control - islets in 5.5 mM added glucose in KRB
Group 3: High glucose control - islets in 16.7 mM added glucose in KRB

Membrane integrity
After the islets were handpicked and incubated for 24 hours, they were exposed to the membrane-impermeant dye trypan blue (0.1% w/v) for 15 minutes at 37°C. The presence of dye within cells was determined by light microscopy. Dead cells take up the dye and appear blue owing to membrane damage, whereas viable cells remain unstained.

Effect of SMD on insulin secretion from cultured islets
The islets were cultured and treated with 5 mM streptozotocin (Sigma, St. Louis, MO) in PBS, pH 7.4, with or without the SMD at low and high doses. Normal and high glucose-treated islets were also maintained in KRB and assayed for insulin. Groups of 50 islets each in triplicate (n = 3) were treated in the following manner:
Group 1: Normal control - islets maintained in glucose-free KRB, pH 7.5
Group 2: Streptozotocin (STZ) control - islets maintained in glucose-free KRB treated with 5 mM STZ
Group 3: Normal glucose control - islets in 5.5 mM added glucose in KRB
Group 4: High glucose control - islets in 16.7 mM added glucose in KRB
Group 5: Saponin low dose - islets treated with SMD (10 μg/ml)
Group 6: Saponin high dose - islets treated with SMD (50 μg/ml)

Effect of the SMD on STZ-induced lipid peroxidation and nitric oxide (NO) formation
The assay mixture contained 0.5 ml of cell lysate, 1 ml of 0.5 M KCl in 10 mM Tris-HCl, 0.5 ml of 30% trichloroacetic acid, and 0.5 ml of 52 mM thiobarbituric acid (TBA). The assay mixture was heated to 80°C for 30 minutes and, after cooling to 0°C, centrifuged at 800 × g for 10 minutes. The absorbance of the supernatant was measured at 532 nm. The levels of malondialdehyde (MDA) were calculated using the following formula: (absorbance at 532 nm) − (absorbance at 600 nm) gives the absorbance due to the MDA-TBA adduct, whose extinction coefficient at 532 nm is 155 mM−1 cm−1, so that MDA (mM) = (A532 − A600)/155.
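To make the arithmetic above concrete, here is a minimal sketch (in Python, with hypothetical absorbance readings) of the Beer-Lambert calculation for the MDA-TBA adduct, assuming the standard 1-cm cuvette path length:

```python
# Minimal sketch of the MDA (TBARS) calculation described above.
# Absorbance values are hypothetical; the extinction coefficient
# (155 mM^-1 cm^-1) and the A532 - A600 baseline correction come
# from the text. A 1-cm cuvette path length is assumed.

EXTINCTION_MM = 155.0  # mM^-1 cm^-1, MDA-TBA adduct at 532 nm
PATH_CM = 1.0          # assumed cuvette path length

def mda_concentration_mm(a532: float, a600: float) -> float:
    """Return MDA concentration (mM) from the corrected absorbance."""
    corrected = a532 - a600  # absorbance due to the MDA-TBA adduct only
    return corrected / (EXTINCTION_MM * PATH_CM)

# Example with hypothetical readings:
print(mda_concentration_mm(0.420, 0.035))  # ~0.00248 mM, i.e., ~2.48 µM
```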
NO produced during STZ and saponin treatment was estimated spectrophotometrically as nitrite (NO2−). To measure the nitrite content, 100 μl of the cell lysate was incubated with 100 μl of Griess reagent (1% sulfanilamide in 0.1 mol/l HCl and 0.1% N-(1-naphthyl)ethylenediamine dihydrochloride) at room temperature for 10 minutes. The absorbance was then measured at 540 nm using a microplate reader. The nitrite content was calculated based on a standard curve constructed with NaNO2 (Ignarro et al., 1987).

Effect of SMD on high glucose-induced oxidative stress on islets
Groups of 50 islets each in triplicate (n = 3) were treated in the following manner:
Group 1: Normal control - islets maintained in normal glucose (5.5 mM) medium
Group 2: High glucose control - islets maintained in high glucose (31 mM) medium
Group 3: Saponin low dose - STZ control + SMD (10 μg/ml)
Group 4: Saponin high dose - STZ control + SMD (50 μg/ml)
The islets were incubated for 24 hours at 37°C in 5% CO2. After incubation, the islets were lysed as before, and the lysate was used to estimate the formation of NO and MDA (lipid peroxidation).

Statistical Analysis
The results were expressed as mean ± SEM, and all statistical comparisons were made by means of one-way analysis of variance followed by Tukey's multiple comparison tests (GraphPad Prism v5.0). A p-value < 0.05 was considered to indicate a statistically significant difference.

RESULTS

Saponin isolation and characterization
The saponin was isolated from the methanolic extract of the fruits of M. dioica, and the confirmatory chemical test of the isolated saponin showed that it contained steroidal saponin (Fig. 2A, 2B, 3, 4, and 5).

Islet isolation and culture
Of the methods evaluated for the isolation of rat pancreatic islets, the last method, which involves pancreas mincing and collagenase type XI digestion followed by cell straining (500 μm), Ficoll gradient centrifugation, and again cell straining (70 μm), produced the highest yield of purified islets with long-term viability (up to 3 days). The islets were completely digested and released from the exocrine tissue. Cell straining avoided the entry of digested/undigested exocrine tissue into the islet pool, ensuring very little damage to the islets. Through Ficoll gradient centrifugation, only islets could be selected, which concentrate as a hazy band/ring immediately below the Ficoll-RPMI interface. Even though handpicking of islets was carried out after 48 hours, almost 99% of exocrine cells were eliminated by the 70-μm straining, abolishing their detrimental effects on the islets. The islets obtained through this isolation technique were found to possess the maximum stability in terms of functionality and viability compared to the remaining three methods. The other methods yielded fewer islets with poor stability and viability, as the tissue was incompletely digested (Fig. 6), and the islets were also destroyed by the exocrine cells that found access into the islet pool, since these were not specifically excluded from the digestion medium (straining and Ficoll gradient were not employed). Healthy islets with smooth rounded surfaces were recovered after 48-hour incubation (Fig. 7) and were employed in further experiments.
Larger islets are prone to develop hypoxic cells in their center, visibly distinguishable as darker cells compared to the surrounding tissue. Reducing the amount of media in the dish allowed increased oxygenation and reduced this effect (Table 1).

Assessment of islet viability and specificity
The islets were handpicked and incubated for 24 hours, and islet viability and specificity were assessed by the trypan blue dye exclusion test (0.1% w/v for 15 minutes at 37°C) and DTZ (10 μl) staining (Sigma, St. Louis, MO), respectively. Viable islets appeared reddish-brown with DTZ, whereas dead exocrine cells stained blue with trypan blue (Fig. 8A and 8B).

Glucose-stimulated insulin secretion assay
The isolated rat pancreatic islets were exposed to different concentrations of glucose, i.e., 5.5 and 16.7 mM. Insulin was assayed, and the response to the glucose stimuli was noted (Fig. 9; values are expressed as mean ± SEM, n = 3; *** p < 0.05 compared with the normal glucose-free control).

Effect of SMD on insulin secretion from cultured islets
Insulin detection is one of the key indicators of β-cell existence, and the pancreatic β-cell protection by the SMD was clearly seen to increase in a dose-dependent manner (Fig. 10).

Measurement of released and intra-islet insulin
Glucose stimulates insulin secretion from islet β-cells but suppresses glucagon secretion from α-cells. A fine balance between insulin and glucagon secretion maintains blood glucose levels within a narrow physiological range. In this assay, we examined the released and intra-islet insulin to confirm the protective and regenerative potential of the drug on the pancreatic β-cells (Fig. 11).

Effect of SMD on STZ-induced lipid peroxidation and NO formation
Nitric oxide can both promote and inhibit lipid peroxidation. By itself, NO acts as a potent inhibitor of the lipid peroxidation chain reaction by scavenging propagatory lipid peroxyl radicals. MDA levels were measured by the TBA method as previously reported by Konings and Drijver (1979). NO produced during STZ and saponin treatment was estimated spectrophotometrically as nitrite (NO2−) (Fig. 12A and 12B).

Effect of the drug on high glucose-induced apoptosis in cultured islets
High glucose content can induce apoptosis, and in the above experiment the SMD was found to protect the pancreatic islets, as indicated by the measured MDA and NO levels (Fig. 13A and 13B).

(Figure caption) 1H-NMR spectrum and chemical structure of the SMD: the spectrum exhibited an aliphatic region, consistent with an aliphatic cyclic organic compound fragment. Using the spectral data and chemical tests, the basic skeleton of the saponin structure was found to be a steroidal saponin.

Table 1. Islet yield per isolation method.
Isolation method: Islet yield (per rat pancreas)
(A) Pancreas mincing and collagenase type IV digestion: <300
(B) Pancreatic perfusion of collagenase type XI via the CBD post duodenal occlusion: 300-450
(C) Pancreas mincing and collagenase type XI digestion followed by cell straining and Ficoll gradient centrifugation: 850-1,000
(D) Pancreas mincing and collagenase type XI digestion followed by cell straining (500 μm), Ficoll gradient centrifugation, and cell straining (70 μm): 800-900

DISCUSSION
Through the ages, plants have always offered huge prospects toward the betterment of human health, either by ameliorating disease conditions or by enhancing normal physiological activity.
Diabetes is one such metabolic disorder, rampant throughout the world and affecting individuals of every segment of society. The complications of diabetes are multifactorial, which exacerbates the clinical condition of patients, eventually leading to death. Medicinal plants are extensively used as an alternative treatment strategy in the management and treatment of diabetes. It has been estimated that approximately 30% of diabetics worldwide have adopted the therapy offered by alternative and complementary medicine (Raman et al., 2012). In fact, the WHO has listed 21,000 medicinal plants, of which 150 are modestly used commercially (Joseph et al., 2011). The active constituents of herbal antidiabetic plants have already been reported to promote islet regeneration and insulin secretion and to overcome insulin resistance (Kavishankar et al., 2011). The islets of Langerhans are clusters of endocrine cells and have been reported to be the target of immune-mediated destruction in type I diabetes (Thomas et al., 2016). The isolation of islets requires enzymatic and mechanical digestion of the exocrine tissue, and density gradient separation results in the isolation of 200-400 islets. Similarly, Sheng et al. (2009) reported a classical procedure that includes three steps: collagenase perfusion, pancreas digestion, and islet purification. The whole procedure takes 30-45 minutes for each individual, and a reasonable number of islets can be obtained in a relatively short period of time. Islet beta-cell replacement and regeneration are the key approaches for the treatment of diabetic patients. The current research has provided proof of principle that the islet isolation was executed. The pancreas was removed and washed with HBSS, and the superficial fatty tissues and blood clots were removed. The most optimal method was found to be pancreas mincing and collagenase type XI digestion followed by cell straining, Ficoll gradient centrifugation, and further cell straining: after density gradient centrifugation, the islets obtained were passed through a prewetted, inverted polypropylene 70-µm cell strainer. The islets were incubated in RPMI at 37°C in 5% CO2 for 48 hours and handpicked postincubation; this method yielded the best islets in terms of both quality and quantity. A very well-known representative is Momordica charantia, belonging to the Cucurbitaceae family, which is very popular as an antidiabetic plant. It is commonly known as bitter melon or bitter gourd, whose fruit is very bitter on ripening. The extracts of M. charantia have been reported to possess hypoglycemic activity (Garau et al., 2003). The extracts of M. charantia and its various constituents have been reported to exert their hypoglycemic effect through various mechanisms, such as utilization of glucose by peripheral and skeletal muscle (Uebanso et al., 2007), inhibition of glucose uptake from the intestine (Abdollah et al., 2010; Jeong et al., 2008), inhibition of the differentiation of adipocytes (Nerurkar et al., 2010), inhibition of primary gluconeogenic enzymes (Shibib et al., 1993), stimulation of important enzymes of the HMP shunt pathway, and protection of the β-cells of the islets (Gadang et al., 2011). Momordica dioica, a different species of the same genus and family, is abundantly found in most of the states of India, Nepal, and the Himalayan region. It has been reported to possess type I and type II antidiabetic activity and hypolipidemic activity, among many other activities reported in a previous paper (Jha et al., 2017).
These activities were investigated mainly in in vivo models; in the present study, by contrast, we investigated the antihyperglycemic activity in isolated rat pancreatic islets of Langerhans. The effect of antibiotics on islet viability and function has been examined previously (Bhonde et al., 2001). The viability and insulin production data showed that none of the antibiotics affected the viability or function of the islets at their pharmacological concentrations. Free radical levels, measured in terms of MDA, nitric oxide (NO), and reduced glutathione, revealed that, except for a marginal increase in lipid peroxidation with tetracycline and a slight increase in NO levels with streptomycin, none of these antibiotics affected the oxidative status of the cells. Similarly, cytokines play an important role in beta-cell failure (Yang et al., 2010). Cytokines such as IL-1β, IFN-γ, TNF-α, leptin, resistin, adiponectin, and visfatin have been shown to diversely regulate pancreatic β-cell function. NF-κB is a key signaling mechanism for pancreatic β-cell damage. Sulfuretin is one of the main flavonoids produced by Rhus verniciflua and has been reported to inhibit the inflammatory response by suppressing the NF-κB pathway. Rat insulinoma RINm5F cells and isolated rat islets treated with IL-1β and IFN-γ to induce cytotoxicity have been reported (Song et al., 2010). Our intention was to explore the effectiveness and stability of the islets. Moreover, the islets obtained after handpicking were round, with smooth and uniform boundaries and without necrosis. They demonstrated decent specificity and viability by DTZ and trypan blue staining. The islets were also found to respond appropriately in the GSIS assay, thereby qualifying in the functionality, specificity, and viability assessments. We have shown that the saponin stimulated the secretion of insulin from islets significantly compared to STZ-treated islets. We further showed that, on exposure of islets to STZ, the saponin-treated islets did not exhibit the reduction in insulin secretion that otherwise occurred in the STZ-treated islets. This suggests that the beta-cell mass was unaffected by STZ in the presence of the saponin. The results also revealed suppression of the formation of MDA, an index of lipid peroxidation (LPO), and of NO, as opposed to the STZ-treated islets. We also assessed the effect of high glucose on isolated islets cultured in the presence of the saponin. It has been reported that high glucose-induced oxidative stress on islets leads to reactive oxygen species (ROS) formation (Robertson et al., 2007), LPO (Turk et al., 1993), DNA damage (Wu et al., 2004), and apoptosis. In agreement with this, we found that the viability of islets in high glucose was reduced almost twofold. However, the viability of islets treated with the saponin was unaffected, as almost 70% of the islets were found to be viable. Finally, the results are a promising vindication of the antihyperglycemic activity of the SMD, thus proving our hypothesis. (Figure caption: (B) Effect of SMD on high glucose-induced NO in cultured islets. Values are expressed as mean ± SEM (n = 3); ### p < 0.05 compared with the normal glucose group; *** p < 0.05 compared with the high glucose control group.)
CONCLUSION
Based on the data generated in this research, it can be concluded that a phytosaponin was isolated with the aid of thin-layer chromatography (TLC) and characterized by HPLC, LCMS, FT-IR, and 1H-NMR. From the different spectral analyses, the basic skeleton of the saponin structure was found to be a steroidal saponin; further studies can be carried out to fully elucidate the structure. The most optimal islet isolation method was found to be pancreas mincing and collagenase type XI digestion followed by cell straining, Ficoll gradient centrifugation, and cell straining. Moreover, the islets were confirmed by DTZ staining. GSIS showed that the islets secreted insulin in a dose-dependent manner with respect to the different concentrations of glucose compared with the normal control, indicating their functionality. The SMD was found to stimulate insulin secretion and provide protection.

ACKNOWLEDGMENT
I would like to express heartfelt thanks to my beloved parents and guide for their support and blessings. I would especially like to thank those who played an important role in this project: Dr. Vijay Kumar, Dr. Manoj Kumar, Mr. Joseph Vinod, and Ms. Ankita.
Study of the Effect of Process Parameters on the Yield of Fermentable Sugar from Tuber Peels via Acid and Enzyme Hydrolysis

The aim of this work is to study the acid and enzymatic hydrolysis of water yam peels using HCl, H2SO4, and cellulase enzyme. The cellulase was secreted from Aspergillus niger (A. niger). The proximate analysis of the substrate showed that water yam peel is a lignocellulosic biomass with a cellulose composition of 48%. The effects of the process parameters (time, temperature, acid concentration, and pH) on the yield of glucose in acid and enzymatic hydrolysis of the water yam peel were investigated. A maximum glucose yield of 44.5% was obtained after 3 days of enzymatic hydrolysis at 30°C and pH 5. The HCl hydrolysis showed a maximum glucose yield of 27.3% at 70°C and 5% HCl after 180 minutes. The glucose yield in H2SO4 hydrolysis was relatively lower than that of HCl, with a maximum yield of 26.5% at 70°C and 5% H2SO4 after 180 minutes. In addition, the functional groups present in the glucose synthesized from ground water yam peels and in standard glucose were evaluated using Fourier transform infrared (FTIR) spectroscopy. The FTIR results showed similarities in the functional groups present in both sugars. Yam peel can therefore be used for the production of glucose and in further fermentative processes to produce ethanol.

INTRODUCTION
In years past, increasing research and development efforts have been directed at reducing the use of fossil fuels and decreasing the emission of carbon dioxide. Bio-ethanol is made mostly from sugar cane, maize, wheat, and barley [1-7]. However, the use of these crops to produce bio-ethanol competes with their use as food sources [8]. Hence, special attention is currently being paid to the use of renewable resources, which are mainly agricultural and industrial by-products. Examples are agricultural wastes (corn stover, sugar cane bagasse, yam, water yam, cocoyam, flax straw, potato pulp, cassava bagasse, cowpea husk, rice husk, soya bean husk), forestry residues (beech bark, beech wood, Populus tremuloides wood), and herbaceous materials (e.g., reed grass, switch grass, rye grass). These agricultural waste biomasses tend to dominate and pollute the environment, and many of them are allowed to rot away unutilized [9]. These waste biomasses consist of cellulose, hemicelluloses, lignin, and other materials called extractives [10,11]. Among all the constituents of agricultural waste biomass, cellulose constitutes a relatively high percentage, because it is the strong elastic material that forms the cell wall of nearly all plants [11]. The cellulose can be hydrolyzed to produce glucose, which can be used as a substrate for the fermentative production of useful products like alcohols [12,13]. As mentioned above, all forms of plant materials that can be used for energy are derived from agricultural waste [14,15,16]. Water yam is the third most important root crop (after potato and cassava) cultivated in West Africa. More than three quarters of world yam production comes from Africa, with Nigeria and Ghana being the world's leading producers [17]. In Egypt, Hawaii, and Japan, they are also important crops [18]. In general, they are stem tubers that are widely cultivated in both tropical and subtropical regions of the world [19].
Among the species of the family Dioscoreaceae, which originated from Asia, and other species of the genus Dioscorea, which are from America, India, and China, the species mostly grown in West Africa, and particularly in Nigeria, is Dioscorea alata, which is either red or white [19]. The Dioscorea alata variety in Nigeria is hard and highly starchy, which makes it well suited for fufu preparation. The young leaves and cormels of the Dioscorea alata variety serve as leafy vegetables in some diets in Nigeria [20]. The substrate used in this study is Dioscorea alata (water yam). Dioscorea alata can be processed in several ways to produce food and feed products similar to those of potatoes in the Western world. Water yam can be processed via boiling, roasting, frying, milling, and conversion to fufu, as earlier mentioned, as well as into soup thickeners, flour for baking, chips, beverage powder, porridge, and specialty foods for gastrointestinal disorders [20,17,21,22]. Saccharification of water yam peels to produce reducing sugar is important, owing to the fact that reducing sugars are an essential raw material for the production of bio-ethanol (bio-fuel). Saccharification is basically achieved via acid and enzymatic hydrolysis of polysaccharides or cellulose. Large quantities of these wastes produced annually in Nigeria are under-utilized. Usually, residues are allowed to decompose or are burnt. However, studies have shown that these residues could be processed into liquid fuels or combusted/gasified to produce electricity and heat [14,15,16]. Conversion of these waste products to valuable products such as glucose, xylose, arabinose, etc. provides a more efficient means of waste management. This study, therefore, focused on the production of bioethanol from water yam peels, which are readily available in the country in large quantities as agro-wastes. The use of waste biomass like water yam peel to generate energy can reduce problems associated with waste management, such as pollution, greenhouse gas emissions, and fossil fuel use. The rate of global warming can be reduced drastically through the use of bioenergy derived from municipal or agricultural wastes. According to a recent report, it was proposed that by the year 2020 the constant use of biomass would produce 19 million tons of petroleum equivalents, of which 46% would be obtained from bio-wastes like farm waste, agricultural waste, municipal solid waste, and other biodegradable waste [23].

MATERIALS AND METHODS

Acid Hydrolysis of Water Yam Peel
The water yam peels were collected and pretreated by washing, drying in an oven at 105°C for 4 hours, and grinding before sieving to a fine particle size of 250 µm. Thereafter, the dilute acid hydrolysis was carried out using the method adopted and described by [24,25]. The ground water yam peels were first soaked in ethanol for 24 hours, after which they were washed repeatedly with distilled water until the residues were free of the solvent. 1.0 g of the pretreated biomass was weighed into a 250-ml conical flask, and 20 ml of 1% HCl (0.1 M) was added. The flask was covered with cotton wool and aluminum foil and placed in a water bath set at 30°C for 30 minutes. The mixture was thereafter filtered with filter paper and neutralized with drops of 6 M NaOH, and the concentration of the simple sugar obtained was measured using the DNS method. The experiment was repeated at different concentrations of HCl (3% and 5%) for different durations (60, 90, 120, 150, 180 minutes) and at different temperatures (50°C and 70°C).
H2SO4 at different concentrations (1%, 3%, 5%), for the same durations (60, 90, 120, 150, 180 minutes) and at temperatures of 30°C, 50°C, and 70°C, was also used for the experiment. The yield of simple sugar (glucose) was calculated using Equations 2.1 and 2.2, while the percentage conversion of cellulose/hemicelluloses to simple sugars at each run was calculated using Equation 2.3, as described by [25]:

M (g) = (Vol × Conc)/100    (2.1)
Yield (%) = (M/Mb) × 100    (2.2)
E (%) = (M × F × 100)/(Mb × y)    (2.3)

where
Yield (%) = yield of simple sugar (glucose) based on the total weight of the biomass
M (g) = total mass of simple sugar (glucose) after hydrolysis
Vol = total volume of the hydrolysis mixture (ml)
Conc = percentage concentration of simple sugar (glucose) obtained from the standard graph
E (%) = simple sugar (glucose) percentage conversion
Mb = mass of the extractive-free biomass
F = conversion factor (0.9 for cellulose)
y = fraction of cellulose/hemicelluloses in the biomass

Isolation of Aspergillus niger
The fungus Aspergillus niger (A. niger) was isolated and characterized at the Microbiology Department of Enugu State University of Science and Technology (ESUT), Nigeria, following the method described by [26]. Soil obtained from a groundnut husk dump site was crushed, sieved, and diluted serially using sterile distilled water.

Inoculum preparation
Inocula for enzyme production were prepared by adding 10 ml of citrate buffer (pH 5.0) to each test tube containing fully grown spores of A. niger. The inocula were estimated to contain 2.8 × 10⁶ spores/ml [27] and were stored in a refrigerator for future use.

Cellulase enzyme production
Enzyme production was carried out in 250-ml Erlenmeyer flasks with 50 ml of medium, as described by [1]. The ingredients of the culture medium included 30 g/L alkaline-pretreated cocoyam shell (dry biomass), 1 g/L glucose, 6 g/L ammonium sulfate, 2.0 g/L KH2PO4, 0.3 g/L CaCl2, 0.3 g/L MgSO4, 0.005 g/L FeSO4, 0.0016 g/L MnSO4, 0.0014 g/L ZnSO4, and 0.0037 g/L CoCl2. The initial pH value was adjusted to 4.8 by adding 2.5 ml of citrate buffer solution (1 mol/L) to the medium. The prepared medium was then autoclaved at 121°C for 30 minutes. The submerged fermentation was started by inoculating the 50 ml of medium with 10 ml of the fungal inoculum in a 250-ml Erlenmeyer flask. The flask was incubated on a shaker for 7 days, and the fermentation was terminated when the glucose level reached zero. The medium was filtered and centrifuged to obtain the supernatant, which is referred to as the crude enzyme. The cellulase assay was done following the procedure described by [2]. One milliliter of 1% carboxymethyl cellulose (CMC) in 0.1 M citrate buffer (pH 5.5) was placed in a test tube, and 1 ml of culture filtrate was added. The reaction mixture was incubated at 50°C for 30 minutes, and the reaction was terminated by adding 1.5 ml of DNS reagent. The tubes were heated at 100°C in a boiling water bath for 15 minutes and then cooled to room temperature. The absorbance was read at 540 nm. Enzyme activity is expressed as mmol glucose released per second per ml of culture filtrate. The result after 7 days of incubation gave 2.4 × 10⁻⁴ µg/ml.

Enzymatic Hydrolysis
The enzymatic hydrolysis was performed in 250-ml Erlenmeyer flasks with a 20-ml mixture of 0.05 M citrate buffer solution (pH 5.0) and enzymes. 1 g of the alkaline (NaOH)-pretreated ground yam peels was added to the mixture, and the flask was incubated in an orbital shaker (140 rpm) at 30°C [1]. Sampling was conducted at 1-day intervals for analysis. The glucose yield was analyzed using the DNS method.
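Putting the DNS read-out together with Equations 2.1-2.3, a minimal sketch of the yield arithmetic follows (in Python, with a hypothetical absorbance value; the calibration of one absorbance unit per 1 g/l of glucose is taken from the DNS protocol given below, and the 48% cellulose fraction from the proximate analysis):

```python
# Minimal sketch of the glucose yield (Eqs. 2.1-2.2) and cellulose
# conversion (Eq. 2.3) calculations. The absorbance reading is
# hypothetical; calibration: 1 absorbance unit = 1 g/l glucose
# (DNS reagent without phenol, per the protocol below).

ABS_PER_G_PER_L = 1.0      # DNS calibration, no phenol in the reagent
F = 0.9                    # conversion factor for cellulose (Eq. 2.3)
CELLULOSE_FRACTION = 0.48  # y, from the proximate analysis of the peels

def hydrolysis_results(a540: float, volume_ml: float, biomass_g: float):
    conc_g_per_l = a540 / ABS_PER_G_PER_L         # from the standard graph
    conc_pct = conc_g_per_l / 10.0                # % w/v (g per 100 ml)
    sugar_mass_g = volume_ml * conc_pct / 100.0   # Eq. 2.1: M = Vol x Conc / 100
    yield_pct = 100.0 * sugar_mass_g / biomass_g  # Eq. 2.2
    conversion_pct = 100.0 * sugar_mass_g * F / (biomass_g * CELLULOSE_FRACTION)  # Eq. 2.3
    return sugar_mass_g, yield_pct, conversion_pct

# Example: A540 = 0.85 on a 20-ml hydrolysate from 1.0 g of ground peels
m, y, e = hydrolysis_results(0.85, 20.0, 1.0)
print(f"glucose: {m:.3f} g, yield: {y:.1f}%, conversion: {e:.1f}%")
```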
The different pH values used for the hydrolysis process were 3.0, 5.0, and 7.0. The pH was adjusted using citrate buffer, and the temperatures used were 30, 50, and 70°C [3].

Dinitrosalicylic (DNS) Method of Simple Sugar Analysis [4]
Reagent: dinitrosalicylic acid reagent solution, 1%.
i. Mix the sample with the DNS reagent solution.
ii. Heat the mixture at 90°C for 5-15 minutes to develop the red-brown color.
iii. Add 1 ml of a 40% potassium sodium tartrate (Rochelle salt) solution to stabilize the color.
iv. After cooling to room temperature in a cold water bath, record the absorbance with a spectrophotometer at 540 nm.
Note 1: Phenol, up to 2 g/l, intensifies the color density. It changes the slope of the calibration curve of absorbance versus glucose concentration but does not affect the linearity. The above procedure yields an absorbance of 1 for 1 g/l of glucose in the original sample in the absence of phenol in the reagent, as opposed to an absorbance of 2.5 for 1 g/l of glucose in 2 g/l of phenol. This property can be exploited to achieve maximum sensitivity for dilute samples. See Fig. 1 for the glucose standard graph.

RESULTS AND DISCUSSION
The results obtained for the study of the effects of time, acid type/concentration, and temperature on glucose yield in the acid hydrolysis of ground yam peels are shown in Tables 1, 2, and 3. It was observed that the glucose yield for HCl hydrolysis was higher than that for H2SO4 hydrolysis. At high temperature and acid concentration, a higher glucose yield is achieved faster than at low temperature and low acid concentration [25,5]. The results showed maximum sugar yields of 27.3% and 26.5% for 5% HCl and 5% H2SO4 hydrolysis, respectively, at 70°C and 180 minutes. This trend is in consonance with results obtained in similar studies of acid hydrolysis of lignocellulosic biomass in the literature. Increases in contact time and acid concentration enhance the access of the acid molecules to the cellulose content of a lignocellulosic biomass and thus adequately break the crystalline (hydrogen) bonds of the cellulose polymer into glucose monomers, leading to a higher glucose yield. [5], in their study, reported a higher glucose yield at higher temperature. In the study of the effects of the process parameters (time, pH, and temperature) on the yield of glucose via enzymatic hydrolysis, the results presented in Table 4 showed a maximum glucose yield of 44.5%, obtained at 30°C, pH 5, and 72 hours (3 days). The table also contains the conversion efficiency (E) based on the total concentration of cellulose and hemicelluloses in the water yam peel. The yield of glucose was observed to decrease as the temperature increased from 30 to 70°C. Gautem et al. (2011), in their study, reported an optimum pH range of 5-6 for cellulase activity at 40°C. Also, Nermeen et al. (2010) reported an optimum pH range of 5.5-7 and 45°C for cellulase activity. The results obtained in this study are in agreement with the results of similar studies in the literature. The yield of glucose from enzymatic hydrolysis of the yam peels indeed depends on the process parameters investigated.

CONCLUSION
The hydrolysis of ground yam peels with HCl and H2SO4 at different temperatures, times, and concentrations gave maximum glucose yields of 27.3% and 26.5%, respectively, at 70°C, 180 minutes, and 5% acid concentration. In enzymatic hydrolysis, a maximum glucose yield of 44.5% was obtained at 30°C, pH 5, and 3 days.
Furthermore, the functional groups present in the water yam peel glucose and in standard glucose were evaluated using FTIR. The FTIR results showed similarities in the functional groups present in both sugars, as shown in Tables 5 and 6. Finally, the results obtained in this study have shown the suitability of water yam peel for the production of fermentable sugar that can be fermented to synthesize ethanol (biofuel).

DISCLAIMER
The agrowaste used for this research is a commonly and predominantly used substrate in our area of research and country. There is absolutely no conflict of interest between the authors. The research was funded by the Tertiary Education Fund (TETFund) of the Nigeria Ministry of Education.
Nutraceutical Profiles of Two Hydroponically Grown Sweet Basil Cultivars as Affected by the Composition of the Nutrient Solution and the Inoculation With Azospirillum brasilense

Sweet basil (Ocimum basilicum L.) is one of the most produced aromatic herbs in the world, exploiting hydroponic systems. It has been widely assessed that macronutrients, like nitrogen (N) and sulfur (S), can strongly affect the organoleptic qualities of agricultural products, thus influencing their nutraceutical value. In addition, plant-growth-promoting rhizobacteria (PGPR) have been shown to affect plant growth and quality. Azospirillum brasilense is a PGPR able to colonize the root system of different crops, promoting their growth and development and influencing the acquisition of mineral nutrients. On the basis of these observations, we aimed at investigating the impact of both mineral nutrient supply and rhizobacteria inoculation on the nutraceutical value of two different sweet basil varieties, i.e., Genovese and Red Rubin. To these objectives, basil plants were grown in hydroponics, with nutrient solutions fortified for the concentration of either S or N, supplied as SO4²⁻ or NO3⁻, respectively. In addition, plants were either non-inoculated or inoculated with A. brasilense. At harvest, basil plants were assessed for the yield and the nutraceutical properties of the edible parts. The cultivation of basil plants in the fortified nutrient solutions showed a general increasing trend in the accumulation of fresh biomass, albeit the inoculation with A. brasilense did not further promote growth. The metabolomic analyses disclosed a strong effect of the treatments on the differential accumulation of metabolites in basil leaves, producing the modulation of more than 400 compounds belonging to the secondary metabolism, such as phenylpropanoids, isoprenoids, alkaloids, several flavonoids, and terpenoids. The primary metabolism was also influenced by the treatments, showing changes in fatty acid, carbohydrate, and amino acid metabolism. The amino acid analysis revealed that the treatments induced an increase in arginine (Arg) content in the leaves, which has been shown to have beneficial effects on human health. In conclusion, between the two cultivars studied, Red Rubin displayed the most positive effect in terms of nutritional value, which was further enhanced following A. brasilense inoculation.
INTRODUCTION
Basil (Ocimum basilicum L.) is an annual plant belonging to the Lamiaceae family, growing in the tropical and subtropical regions of America, Africa, Asia, and the southern areas of Europe (Kwee and Niemeyer, 2011). Basil has considerable commercial importance not only as a fresh-market herb for culinary and ornamental purposes but also for the production of phytochemicals used for medicinal purposes (Singletary, 2018). The medicinal properties of basil can be mainly ascribed to the presence of a plethora of biologically active compounds in its leaves, characterized by different chemical structures, encompassing for instance phenolic acids (i.e., rosmarinic and caffeic acids), flavonol glycosides (quercetin and kaempferol), and anthocyanins (Flanigan and Niemeyer, 2014; Ghasemzadeh et al., 2016; Złotek et al., 2016; Singletary, 2018). Other important components contained in both basil leaves and flowers are essential oils, which play a pivotal role in the medicinal and food applications of this plant (Avetisyan et al., 2017; Burducea et al., 2018). The qualitative and quantitative composition of the phytochemicals featured by basil leaves primarily depends on the plant's genetic traits (Skrypnik et al., 2019). Indeed, several authors have shown that distinct basil cultivars have the genetic potential to generate and store different sets of bioactive molecules, thus leading to a wide range of possible chemotypes within the same basil variety/species (Avetisyan et al., 2017). Besides genetics, another critical factor determining the set of phytochemicals produced by basil, and by plants in general, is the cultivation condition (Scagel and Lee, 2012). The cultivation of basil plants is normally carried out in both natural (e.g., open field) and controlled conditions (e.g., greenhouses); however, to increase the yield in terms of biomass as well as to prolong the production period over the year, greenhouse cultivation methods represent the most suitable solution (Sgherri et al., 2010). Concerning growth in controlled conditions, at present the hydroponic cultivation of basil is the preferred solution compared to traditional soil-based cultivation methods (Kiferle et al., 2011). Indeed, the soilless cultivation approach represents a good opportunity for agriculture, especially for those areas characterized by scarce water availability and severe soil degradation, allowing the implementation of environment-friendly agricultural practices in a general context of safe food production (Sambo et al., 2019).
The exploitation of the controlled conditions allowed by hydroponic cultivation methods permits, on the one hand, the reduction of soil disinfection and of the application of agrochemicals for defense purposes and, on the other hand, the fine tuning of the nutrient solution composition to match plants' nutritional requirements and maximize both the yield and the quality of the agricultural products (Sambo et al., 2019). In this sense, hydroponic cultivation systems also allow better reproducibility in plant growth and yield as well as in the quality of the agricultural products in terms of nutraceutical content (Kiferle et al., 2011; Sambo et al., 2019; Skrypnik et al., 2019). The content of bioactive compounds in basil leaves can depend on both the growth substrate and the fertilizer (e.g., chemical vs. organic) applied (Matłok et al., 2019). In particular, mineral nutrition has been addressed as one of the principal features influencing plant metabolism. Several authors, for instance, have highlighted that fertilization with the essential macronutrients potassium (K⁺) and ammonium (NH4⁺) can lead to an increased content of nutraceutical compounds, like sugars, phenolic acids, flavonoids, anthocyanins, carotenoids, lycopene, and vitamins, in both medicinal herbs like basil and fruits (Lester et al., 2010; Ibrahim et al., 2012; Salas-Pérez et al., 2018; Valentinuzzi et al., 2018b,c). A few pieces of research also indicate that overfertilization with sulfur (S) can induce a higher yield in several plants, as for instance Medicago sativa, wheat, canola, oilseed rape, corn, and potato (Stewart and Porter, 1969; Mullins and Mitchell, 1989; Weil and Mughogho, 2000; Pavlista, 2005; Rehm, 2005); in the specific case of basil plants, an increased sulfate (SO4²⁻) fertilization rate resulted in a higher accumulation of biomass and a higher concentration of eucalyptol in the leaves (Zheljazkov et al., 2008). Interestingly, several experimental studies also showed that biofortification practices with non-essential microelements [e.g., boron (B), silicon (Si), and selenium (Se)] can produce an alteration in the metabolomic profiles of agricultural products, inducing the accumulation of secondary metabolites, which might have beneficial effects on human health (Gottardi et al., 2012; Schiavon et al., 2013; Nancy and Arulselvi, 2014; Tomasi et al., 2015; Mimmo et al., 2017; Valentinuzzi et al., 2018a,c; Skrypnik et al., 2019). In fact, the preventive effects arising from the consumption of fruit, vegetables, and herbs toward chronic diseases are mostly due to secondary metabolites such as vitamins (including carotenoids and tocopherols) and antioxidant compounds such as glucosinolates and phenolic compounds (Schreiner et al., 2012). While primary metabolites are ubiquitous in plant species, the products of the secondary metabolism, which include the majority of the industrially applicable compounds, represent a complex adaptation mechanism that plants adopt to face adverse environmental conditions (Isah, 2019). In this vision, the application of abiotic stressors (e.g., mild nutrient deficiencies, UV light) has been proposed as an alternative strategy to enhance the content of nutraceutical compounds in agricultural products for human consumption (Kiferle et al., 2011; Valentinuzzi et al., 2015; Burducea et al., 2018), albeit this method might have a negative impact on the cultivation yield.
Besides the modulation of the growing media composition, additional strategies aimed at increasing the content of technologically relevant secondary metabolites in medicinal and edible plants have been explored, including high-yielding cell line screening, elicitation, precursor feeding, large-scale cultivation systems, plant cell immobilization, hairy root culture, and biotransformation (Ramachandra Rao and Ravishankar, 2002; Cardoso et al., 2019). In this sense, it is widely known that a group of bacteria, generally known as plant growth-promoting rhizobacteria (PGPR), can positively affect the growth and fitness of plants as well as crop yield and quality (Pii et al., 2015b). The positive effects exerted by PGPR on plants can be brought about through both direct (e.g., inducing the growth of the root system, assimilating atmospheric inorganic nitrogen, increasing the bioavailability of mineral nutrients, and enhancing plants' ability to take up mineral nutrients) and indirect (e.g., activating plants' induced systemic resistance, producing antimicrobial compounds, and outcompeting pathogens for essential nutrients) mechanisms (Pii et al., 2015b; Crecchio et al., 2018). Nonetheless, the interaction between PGPR and plant roots has also been shown to induce a modulation of the molecular and biochemical mechanisms related to different aspects of plant physiology, thus having an impact on the production and content of secondary metabolites. Several pieces of evidence have already addressed the positive role played by PGPR inoculation in the quality of fruits like citrus, mulberry, apricot, sweet cherry, raspberry, and strawberry (Esitken et al., 2002, 2003, 2005; Orhan et al., 2006; Pii et al., 2018). More recently, PGPR as well as symbiotic arbuscular mycorrhizal fungi have been shown to have a promoting effect on the accumulation of secondary metabolites, like antioxidant molecules and essential oils, in aromatic herbs (Banchio et al., 2009; Erika et al., 2010; Santoro et al., 2011; Heidari and Golpayegani, 2012; Cappellari et al., 2013, 2019a). Nonetheless, the effect produced by rhizobacteria on the qualitative and quantitative profile of secondary metabolites has been observed to be strongly dependent both on the plant species/cultivar and on the microbial strain. This suggests the existence of specific microbe/host recognition and interaction mechanisms, inducing diverse effects at the plant level (Cappellari et al., 2019a). On the basis of these premises, the aim of this work was to assess the influence of mineral nutrient supply, as well as of the inoculation with the PGPR Azospirillum brasilense, on the production of nutraceutical compounds in two different sweet basil cultivars, namely, Genovese and Red Rubin. For the purposes of this study, the macronutrients nitrogen (N) and sulfur (S) were selected for the fortification of the nutrient solutions, mainly because of the presence of these elements in the chemical structure of secondary metabolites with nutraceutical value. To this aim, basil plants were hydroponically grown with two nutrient solutions, fortified for the concentration of either S, supplied as SO4²⁻, or N, supplied as NO3⁻. In addition, plants were inoculated with A.
brasilense, which is already known (i) to influence the molecular and biochemical mechanisms underlying nutrient acquisition in model plants (Pii et al., 2015c; Marastoni et al., 2019) and (ii) to determine alterations in both quantitative and qualitative traits of strawberry fruits. The effects of the cultivation practices (i.e., nutrient solution fortification and inoculation with beneficial microbes) on the nutraceutical composition of basil leaves have been assessed by the application of a holistic approach, combining metabolomic and ionomic analyses together with traditional analytical methods based on both spectrophotometric and high-performance liquid chromatography (HPLC) analyses.

MATERIALS AND METHODS

Plant Material and Growing Conditions
The seeds of two different sweet basil (Ocimum basilicum) cultivars (i.e., cv. Genovese and cv. Red Rubin) were obtained from a local nursery and were germinated adopting a modified version of the RHIZOtest system (Bravin et al., 2010). Seeds were sown in small pots and placed in 6-L plastic tanks. For the germination stage, tanks were filled with 6 L of germination solution (CaCl2, 88.24 g L⁻¹; H3BO3, 0.1236 g L⁻¹) and covered with both plastic and aluminum foils (Bravin et al., 2010). Air pumps were used to maintain constant aeration of the nutrient solution. Plants were grown under controlled conditions in a climatic chamber with a day/night cycle of 14/10 h and 24/19°C. The relative humidity was about 70%, and the light intensity was 250 µmol m⁻² s⁻¹. One week after sowing, the germination solution was replaced with nutrient solution (NS), composed of the following: 14.58 mM NO3⁻, 3 mM NH4⁺, 3.5 mM PO4³⁻, 10.5 mM K⁺, 3.5 mM Mg²⁺, 4 mM Ca²⁺, 3.63 mM SO4²⁻, 0.95 mM Cl⁻, 40 µM FeEDTA, 10 µM MnSO4, 5 µM ZnSO4, 1 µM CuSO4, 40 µM H3BO3, and 0.5 µM Na2MoO4 (pH 6). The control solution was renewed two times per week for 3 weeks. After 21 days of growth, the samples were split into three sets: one set was used as a control and was grown in the control NS mentioned above; the other two sets were grown in modified NS (Supplementary Table 1), containing either 20 mM NO3⁻ or 8 mM SO4²⁻, while the micronutrient concentrations were maintained as described above. In addition, half of all the samples were inoculated with the PGPR A. brasilense. Six independent biological replicates for each treatment were set up. Basil plants were grown in the modified nutrient solutions for a further 3 weeks. At harvest, the soil-plant analysis development (SPAD) measures were recorded, roots and shoots were separated, and fresh weights were recorded. The remaining tissues were frozen in liquid nitrogen and stored in aluminum foils at −80°C for further analyses.

Bacterial Strain and Inoculation
A. brasilense Cd (DSM-1843) was grown in solid Luria-Bertani (LB) growth medium (10 g L⁻¹ tryptone, 5 g L⁻¹ yeast extract, 10 g L⁻¹ NaCl, 14 g L⁻¹ agar) for 3 days at 28°C. Afterward, the microbial biomass was inoculated in liquid LB medium and grown at 28°C under horizontal shaking until saturation. Bacteria were collected by centrifugation for 15 minutes at 4,500 rpm and washed once with sterile deionized water, as previously described. The concentration of bacteria was estimated by a spectrophotometer at 600 nm optical density. A final concentration of 10⁶ cfu ml⁻¹ in the nutrient solution was used. A second inoculation was carried out 10 days after the first one.
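The OD600-to-cell-density conversion is strain- and instrument-specific and is not given in the text; a minimal sketch of the dilution arithmetic, assuming a hypothetical calibration of OD600 1.0 ≈ 1 × 10⁸ cfu ml⁻¹, might look like this:

```python
# Minimal sketch of the inoculum dilution arithmetic.
# ASSUMPTION: OD600 of 1.0 corresponds to ~1e8 cfu/ml; the real
# calibration factor is strain- and spectrophotometer-specific
# and is not reported in the text.

OD_TO_CFU_PER_ML = 1e8   # hypothetical calibration factor
TARGET_CFU_PER_ML = 1e6  # final concentration used in the nutrient solution

def suspension_volume_ml(od600: float, tank_volume_l: float) -> float:
    """Volume of washed bacterial suspension (ml) to add to a tank
    so that the nutrient solution reaches TARGET_CFU_PER_ML."""
    stock_cfu_per_ml = od600 * OD_TO_CFU_PER_ML
    needed_cfu = TARGET_CFU_PER_ML * tank_volume_l * 1000  # tank volume in ml
    return needed_cfu / stock_cfu_per_ml

# Example: a suspension at OD600 = 2.0 inoculated into a 6-L tank
print(round(suspension_volume_ml(2.0, 6.0), 1))  # 30.0 ml
```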
Extracts Preparation
Basil leaf extracts were prepared as previously described (Valentinuzzi et al., 2015). Briefly, the tissue was freeze-dried and finely homogenized with a ball mill (MM400, Retsch, Italy). A specific amount of powder was weighed into 15-ml centrifuge tubes and resuspended in methanol at a 1:10 ratio. The suspension was thoroughly mixed, sonicated in an ultrasonic bath for 30 minutes, and centrifuged for 30 minutes at 4,500 rpm. The supernatants were filtered with a 0.45-µm nylon syringe filter into 5-ml Eppendorf tubes and kept at −80°C until further analyses.

Total Nitrate Content
Total nitrate content was determined using the methodology described previously (Cataldo et al., 1975). Briefly, 0.1 g of leaves was suspended in 10 ml of distilled water, thoroughly mixed, and kept at 45°C for 1 hour. Afterward, samples were filtered through Whatman No. 40 filter paper and analyzed immediately. Samples (0.2 ml) were added to 0.8 ml of 5% salicylic acid in concentrated sulfuric acid and incubated for 20 minutes at room temperature; then, 19 ml of 2 N NaOH was added, and the samples were incubated for a further 30 minutes. The absorbance was read at 410 nm, and the concentration of NO3⁻ in the tissue was determined through a suitable calibration curve.

Untargeted Metabolomics Profiling
Six independent biological replicates from each treatment were analyzed using an ultra-HPLC (UHPLC) system coupled to a hybrid quadrupole-time-of-flight mass spectrometer (UHPLC/QTOF-MS), as previously reported (Rocchetti et al., 2018b). The apparatus comprised a 1290 LC system coupled to a G6550 QTOF detector equipped with an electrospray ionization source (all from Agilent Technologies, Santa Clara, CA, United States). A volume of 4 µl was injected; metabolites were then separated in reverse-phase mode on an Agilent Zorbax Eclipse-Plus C18 column (100 mm × 2.1 mm, 1.8 µm), using linear binary gradient elution (6-94% methanol, 33-minute run time, flow rate of 200 µl min⁻¹). The mass spectrometer operated in SCAN mode (100-1,000 m/z, 0.8 spectra/s) and positive polarity. Features were extracted and annotated from the raw data using Profinder B.07 (Agilent Technologies) following alignment, by merging the monoisotopic accurate mass and the isotopic profile (isotope spacing and ratio). The process was carried out as previously described (Rocchetti et al., 2018a), using the publicly available database PlantCyc 12.6 (Plant Metabolic Network, downloaded in April 2018) and adopting a mass accuracy tolerance of 5 ppm. On this basis, a putative Level 2 annotation was achieved, as defined by the COSMOS Metabolomics Standards. Annotated features were finally filtered by frequency, retaining compounds present in at least 75% of the replicates within at least one treatment.

Statistical Analysis
The results are reported as mean ± standard error (SE) of six independent biological replicates. The significance of differences among means was calculated by one-way ANOVA with the post hoc Tukey honestly significant difference (HSD) test with α = 0.05 using R software (version 3.6.0). The following R packages were used for data visualization and statistical analyses: ggplot2 v.3.2.0 (Wickham, 2016), Agricolae v.1.3-1 (de Mendiburu, 2019), and ggfortify (Tang et al., 2016).
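The authors ran these univariate comparisons in R; an equivalent minimal sketch in Python (using statsmodels, with hypothetical fresh-weight data) would be:

```python
# Minimal sketch of the one-way ANOVA + Tukey HSD comparison described
# above, in Python rather than R. The weights below are hypothetical.
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

weights = [4.1, 4.3, 3.9,   # control
           5.0, 5.2, 4.8,   # SO4-fortified
           4.4, 4.6, 4.2]   # NO3-fortified
groups = ["control"] * 3 + ["SO4"] * 3 + ["NO3"] * 3

# One-way ANOVA across the three nutrient solutions
f_stat, p_value = stats.f_oneway(weights[0:3], weights[3:6], weights[6:9])
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc Tukey HSD at alpha = 0.05, as in the paper
print(pairwise_tukeyhsd(weights, groups, alpha=0.05))
```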
For the metabolomics analysis, the chemometric interpretations were performed in Mass Profiler Professional B.12.06, as previously reported (Salehi et al., 2018), applying log2 transformation of compound abundances and normalization at the 75th percentile, using the median value as the baseline. Unsupervised hierarchical cluster analysis (HCA), based on fold-change (FC) values, was then performed (Euclidean distance matrix, Ward's linkage) to point out the relatedness of metabolomic signatures across treatments. After that, supervised orthogonal projections to latent structures discriminant analysis (OPLS-DA) was carried out in SIMCA 13 (Umetrics, Sweden). Outliers were investigated using Hotelling's T² (95 and 99% confidence limits for suspect and strong outliers, respectively). CV-ANOVA (p < 0.01) and permutation testing (N = 100) were used for model validation and for excluding overfitting, respectively. Goodness-of-fit R²Y and goodness-of-prediction Q²Y were also calculated from the OPLS-DA model. Subsequently, the most discriminant compounds (VIP score > 1.3) were selected by variable importance in projection (VIP) analysis, and discriminant compounds were identified by Volcano plot analysis, combining fold change (FC > 1.5) and ANOVA [p < 0.05, false discovery rate (FDR) multiple testing correction] to describe the extent and direction of regulation in response to treatments. Finally, the metabolites identified by Volcano analysis, along with their FC values, were uploaded to the PlantCyc Pathway Tools software (Karp et al., 2010) to decipher the principal classes of functional compounds modulated by the treatments.

Biometric Parameters

At harvest, the fresh weight of the roots and shoots of basil plants, of both the Genovese and Red Rubin cultivars, was assessed (Figure 1). In the case of cv. Genovese, the root biomass of plants, independently of the nutrient solution (NS) applied, was not affected by the inoculation with A. brasilense (Figure 1A). Nonetheless, the increased concentration of S in the NS induced higher growth of the root biomass in non-inoculated plants as compared to the control ones (Figure 1A). A similar behavior was also observed at the shoot level; in fact, the inoculation with the PGPR did not influence the allocation of biomass, whereas the fertilization with SO₄²⁻ caused a significant increase in the weight of the aerial parts (Figures 1B-D). The cv. Red Rubin, though, was differently affected by the treatments as compared to cv. Genovese (Figure 1). In fact, at the root level, the inoculation with A. brasilense caused a strong increase in biomass (almost doubled) as compared to non-inoculated plants, irrespective of the NS applied (Figure 1E). On the other hand, overfertilization with either NO₃⁻ or SO₄²⁻ did not cause an alteration in root biomass allocation as compared to control plants (Figure 1E). Yet, the shoot biomass of O. basilicum cv. Red Rubin was not significantly affected by the treatments as compared to control plants (Figures 1F-H). In addition, the general health status of the plants was checked by assessing the total chlorophyll content of leaves, which was determined by measuring the SPAD index. The data reported in Supplementary Figure 1 were recorded after 6 weeks of cultivation in hydroponic solution and show that, for both the Genovese and Red Rubin cultivars, the treatments imposed had no effect on the total chlorophyll content in the leaves of sweet basil (Supplementary Figure 1).
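As a rough sketch of the Volcano-style differential filtering described in the statistical methods above (FC > 1.5, p < 0.05 with FDR correction), the following Python snippet illustrates the idea; the abundance arrays and metabolite names are hypothetical placeholders, and a two-sample t-test stands in here for the per-compound ANOVA used in the paper.

```python
# Sketch of Volcano-style filtering: fold change > 1.5 combined with
# p < 0.05 after Benjamini-Hochberg FDR correction. Inputs are hypothetical.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

def volcano_hits(treated, control, names, fc_thr=1.5, alpha=0.05):
    """treated/control: (n_replicates, n_metabolites) abundance arrays."""
    fc = treated.mean(axis=0) / control.mean(axis=0)         # fold change per metabolite
    _, p = ttest_ind(treated, control, axis=0)               # per-metabolite test
    reject, p_fdr, _, _ = multipletests(p, alpha=alpha, method="fdr_bh")
    keep = reject & (np.abs(np.log2(fc)) > np.log2(fc_thr))  # both criteria must hold
    return [(n, f) for n, f, k in zip(names, fc, keep) if k]
```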
LEAVES QUALITY ASSESSMENT

Untargeted Metabolomics

More than 4,000 compounds were putatively annotated, with secondary metabolism being widely represented. The complete list of annotated metabolites, together with composite mass spectra and relative intensities, is provided as Supplementary Material (Supplementary Table 2 for cv. Genovese and Supplementary Table 3 for cv. Red Rubin). The multivariate models revealed that changes in nutrient solution and/or inoculation led to a modulation of the plant metabolome, as indicated by the unsupervised HCA. In more detail, such modifications resulted from the different nutrient supply (NO₃⁻ vs. SO₄²⁻) in both cultivars, and from the combination of the different fertilizations with bioinoculation (i.e., either with or without A. brasilense) (Supplementary Figures 2A,B for cv. Genovese and cv. Red Rubin, respectively). The further supervised modeling confirmed the separation of the samples in the score space according to treatments (Figure 2). Indeed, OPLS-DA allowed a better separation of the different treatment combinations by discriminating predictive and orthogonal components of variance in the score plot hyperspace for both cultivars (Figures 2A,B). The model was validated, and the parameters indicated good predictability both in cv. Genovese (R²Y = 0.956; Q²Y = 0.54; CV-ANOVA, p = 3.78E−4) and in cv. Red Rubin (R²Y = 0.972; Q²Y = 0.55; CV-ANOVA, p = 4.72E−3). Broadly speaking, OPLS-DA minimized the cultivar-specific behavior evidenced by the HCAs. In fact, the supervised modeling allowed highlighting a common response of both cultivars to the different combined treatments. The first latent vector discriminated control plants and SO₄²⁻-containing treatments regardless of bioinoculation. Indeed, no consistent separation occurred between the different SO₄²⁻ treatments (± A. brasilense), implying a hierarchically stronger effect of the SO₄²⁻-based fertilization. On the other hand, for NO₃⁻ supplementation, some differences can be appreciated considering the two different basil genotypes, suggesting a distinctive metabolic reprogramming at the molecular level. Indeed, Red Rubin plants fertilized only with NO₃⁻ overlapped with SO₄²⁻-overfertilized samples (± A. brasilense), while regarding cv. Genovese, the simple overfertilization with NO₃⁻ generated a separate group, closer to the combined treatment NO₃⁻ + A. brasilense and to controls. Moreover, plants treated with a combination of NO₃⁻ and A. brasilense appeared to behave like non-inoculated controls in both cultivars. Remarkably, the most distant variance is explained by the simple bioinoculation, which formed a perfectly separated group from all the other treatments. The metabolites having the highest discrimination potential between treatments were identified by VIP analysis (VIP score > 1.3; Supplementary Table 4). Accordingly, 196 and 267 compounds were identified for the Red Rubin and Genovese cultivars, respectively. About half of the total metabolites were related to secondary metabolism, suggesting its strong remodulation in response to the various treatments. The most represented compounds belonged to phenylpropanoids, isoprenoids, and alkaloids. In particular, we underlined the presence of several flavonoids and terpenoids (including carotenoids and brassinosteroids).
On the other hand, considering the primary metabolism, the most significant changes are related to fatty acid metabolism and, although to a lesser extent, to carbohydrate and amino acid metabolism. The influence of the treatments on the nutritional value of both cultivars was then extrapolated from the metabolomic profiles. With this purpose, Volcano analysis was used to identify the leading bioactive compounds differing from control (p < 0.05; FC > 1.5; Supplementary Table 5). Differential metabolites were then elaborated using the Pathway Tools Omics Dashboard of PlantCyc, with a data reduction purpose. Figures 3A,B depict the principal classes of health-promoting compounds classified by the Omics Dashboard for Genovese and Red Rubin, respectively. [Figure 3 caption: the metabolomic dataset produced through UHPLC-ESI/QTOF-MS was subjected to a Volcano plot analysis (p < 0.05, fold change > 1.5), and differential metabolites were loaded into the PlantCyc Pathway Tools (https://www.plantcyc.org/); the x-axis represents each set of subcategories, while the y-axis corresponds to the cumulative fold change.] The comparison of the two analyses pointed out a striking cultivar-specific response. Indeed, cv. Genovese showed an overall down-accumulation of the secondary metabolism in response to most of the treatments, unlike what was observed in Red Rubin (which broadly accumulated these classes of compounds). Despite the negative modulation related to S and N supply and/or inoculation in cv. Genovese, the treatments elicited specific functional compounds. In this sense, the inoculation with A. brasilense enhanced the accumulation of alkaloids, while flavonoids increased in the presence of the combined treatment NO₃⁻ + A. brasilense. This latter treatment elicited the accumulation of high amounts of health-promoting compounds (for instance, the precursor of provitamin A, prephytoene diphosphate, tricetin, and the coumarin scopanone) and promoted the biosynthesis of unsaturated fatty acids. In cv. Red Rubin, A. brasilense induced the strongest effect on secondary metabolites, since not only alkaloids were up-accumulated, as in the case of cv. Genovese, but terpenoids and phenylpropanoids were also strongly elicited by the microorganism. Several precursors were up-accumulated after inoculation, suggesting a modulation of upstream biosynthetic processes. Regarding NO₃⁻-overfertilized plants, unsaturated fatty-acid-related compounds were up-accumulated, as well as terpenoids, including diterpenes and triterpenes, while tetraterpenes seemed to be repressed. SO₄²⁻ and, largely, SO₄²⁻ + A. brasilense presented a negative effect on secondary metabolism and fatty acids in cv. Red Rubin.

Phenolic Acids

The majority of the phenolic compounds present in basil leaves are derivatives of caffeic acid (CA). Therefore, to understand the effects of the mineral nutrient supplementation on the biosynthesis of these secondary metabolites, CA was separated by HPLC and measured along with one of its dimeric derivatives, rosmarinic acid (RA). The concentration of CA in cv. Genovese was not affected by the treatments applied (Figure 4A), except for a slight reduction in the leaves of plants grown in NO₃⁻-fortified NS. Yet, it is noteworthy that, in the case of cv. Red Rubin, the lowest CA concentration was shown in the leaves of the control plants, while the different treatments imposed caused an enhanced accumulation of CA as compared to controls (Figure 4B).
However, only the overfertilization with NO₃⁻ determined a significant increase in CA concentration, in both non-inoculated and inoculated plants (Figure 4B), as compared to non-inoculated controls. The concentration of RA in cv. Genovese was significantly influenced by both N and S overfertilization and by the rhizobacteria (Figure 4C). In particular, the concentration of RA was highest in control plants, while the overfertilization with NO₃⁻ and SO₄²⁻ and the inoculation with A. brasilense caused a significant reduction (Figure 4C). On the other hand, in cv. Red Rubin, the fertilization practices had opposite effects on the RA concentration in leaves (Figure 4D). In fact, plants fertilized with an increased concentration of SO₄²⁻ presented a significantly higher concentration of RA as compared to plants supplemented with NO₃⁻ (Figure 4D). However, neither the NO₃⁻ nor the SO₄²⁻ overfertilization caused a significantly different accumulation of RA as compared to control plants, nor did the inoculation with A. brasilense (Figure 4D).

Amino Acids

The results of the total amino acid analysis are reported in Table 1 and highlight that, in both the Genovese and Red Rubin cultivars, arginine (Arg) was the amino acid most affected by the treatments in terms of relative abundance. In fact, the overfertilization with NO₃⁻ caused an increase of about 30-fold in both cultivars as compared to the non-inoculated control, while the supplementation with 8 mM SO₄²⁻ induced enhancements in concentration of approximately 50- and 40-fold in cv. Genovese and cv. Red Rubin, respectively, with respect to control. In the case of NO₃⁻-fertilized Genovese plants, the inoculation with A. brasilense caused an additional increase in Arg concentration that was not recorded in SO₄²⁻-treated plants. Similarly, the inoculation of Red Rubin plants did not induce a significant alteration in the Arg concentration of leaves. In addition, in cv. Genovese, the treatments imposed did not affect the concentrations of aspartic acid (Asp), glutamic acid (Glu), phenylalanine (Phe), proline (Pro), and valine (Val), while alanine (Ala), cysteine (Cys), glycine (Gly), leucine (Leu), and threonine (Thr) were decreased by both the fertilization practices and the inoculation with A. brasilense as compared to non-inoculated controls. Furthermore, the amino acids lysine (Lys), methionine (Met), and serine (Ser) were significantly increased by the fertilization with SO₄²⁻, while the supplementation with NO₃⁻ induced a slight concentration enhancement as compared to control plants, albeit not significant. In cv. Red Rubin, Ala, Asp, Gly, Met, Phe, and Ser were not affected by the treatments, while Cys, Leu, and Thr concentrations were reduced by both the fertilization practices and the inoculation with A. brasilense as compared to non-inoculated controls. Nonetheless, in cv. Red Rubin, a group of amino acids composed of Glu, Lys, Phe, Pro, and Val showed an increasing trend in response to the treatments applied with respect to non-inoculated controls.

Ionomic Analysis

To understand whether the different treatments might have influenced the uptake and allocation of mineral nutrients, the whole ionome profile of basil leaves at the end of the cultivation period was analyzed through ICP-OES. In the case of cv. Genovese, the macronutrients were mostly unaffected by the treatments imposed, except for calcium (Ca) and S (Figure 5A and Supplementary Table 6).
Indeed, the treatment with NO₃⁻ induced the highest accumulation of Ca in the leaves of basil, independently of the inoculation with A. brasilense. As expected, the highest concentration of S was detected in the leaves of plants grown in SO₄²⁻-fortified NS, with Azospirillum showing a further promoting effect on S accumulation at the leaf level (Figure 5A and Supplementary Table 6). Concerning the micronutrient concentrations in cv. Genovese plants, the only remarkable effect was shown by iron (Fe). In fact, both NO₃⁻ and SO₄²⁻ overfertilization had a promoting effect on Fe accumulation at the leaf level. Interestingly, the highest Fe concentration was shown by basil plants fertilized with 8 mM SO₄²⁻ and inoculated with A. brasilense (Figure 5B and Supplementary Table 7). Plants of the Red Rubin cultivar showed a decreasing trend in the accumulation of the macronutrient magnesium (Mg) in the leaves upon treatment. This reduction was statistically significant in NO₃⁻-fertilized plants, independently of the inoculation with A. brasilense, as compared to non-inoculated control plants (Figure 5A and Supplementary Table 6). As also observed for cv. Genovese, the fertilization with SO₄²⁻ caused the highest accumulation of S in the leaves of cv. Red Rubin plants. The latter also displayed an increased concentration of phosphorus (P) as compared to the other samples (Figure 5A and Supplementary Table 6). The fertilization with NO₃⁻ caused, in the Red Rubin cultivar, a significant decrease in the Cu concentration in leaves, while the combination of SO₄²⁻ fertilization and inoculation with A. brasilense induced the opposite effect, determining the highest accumulation of the micronutrient Cu (Figure 5B and Supplementary Table 7). In the control Red Rubin plants, the inoculation with A. brasilense caused a slight increase in the concentration of both manganese (Mn) and molybdenum (Mo), albeit not significant. In contrast, the fertilization treatments induced a decreasing trend as compared to the control plants (Figure 5B and Supplementary Table 7).

Nitrate Content

The analyses of nitrate in sweet basil leaves showed that, in cv. Genovese, the NO₃⁻ concentration ranged between 1.39 and 1.5 mmol g⁻¹ DW and was unaffected by the treatments imposed (Figure 6A). On the other hand, cv. Red Rubin displayed an increase in NO₃⁻ concentration in the leaves under the overfertilization practices, reaching values of 1.3 and 1.6 mmol g⁻¹ DW in NO₃⁻- and SO₄²⁻-treated plants, respectively (Figure 6B).

DISCUSSION

Among the mineral elements, nitrogen (N) and sulfur (S) are regarded as essential macronutrients for plants, significantly determining the yield and quality of crops (Marschner, 2012). Increased growth and development in plants fertilized with higher concentrations of macronutrients or inoculated with PGPR have been reported for several species (Egamberdieva and Teixeira da Silva, 2015; Orhan et al., 2006; Fan et al., 2017). The data obtained showed that the effects of nitrogen and sulfur fertilization on sweet basil are consistent with previous reports, with fresh biomass showing an increasing trend as determined by macronutrient supplementation (Zheljazkov et al., 2008; Kiferle et al., 2011; Oliveira et al., 2014). Indeed, N fertilization has already been shown to be one of the pivotal factors affecting basil yield (Zheljazkov et al., 2008). Similarly, treatments with S also increased plant yield, especially in the case of cv. Genovese.
Although Oliveira et al. (2014) reported that S fertilizer increased biomass production only at the root level, in the present study the shoot biomass of cv. Genovese was also increased. The Red Rubin cultivar, on the other hand, was not affected by the S treatments, possibly highlighting a different response of the two cultivars toward the fertilization practices. Despite the promoting effects brought about by specific A. brasilense strains on basil growth (Mangmang et al., 2016), the inoculation did not induce significant alterations in the biomass of cv. Genovese, while it produced a higher root development in cv. Red Rubin. Up to now, Azospirillum inoculation has been shown to induce various effects on plant physiological parameters; in fact, it has been demonstrated to increase plant growth parameters in strawberry (Guerrero-Molina et al., 2014; Pii et al., 2018) and corn (Zaady et al., 1993; Pii et al., 2019). Nonetheless, it has also been demonstrated that the growth-promoting effects of A. brasilense are strongly dependent on plant species and genotypes (Pedraza et al., 2010; Pii et al., 2018). In the specific case of basil plants, Raei et al. (2015) revealed that Azospirillum inoculation improved plant growth parameters and resistance to drought stress, while Roshanpour et al. (2014) reported how the combination of three different rhizobacteria (Azotobacter, Azospirillum, and Bacillus) positively influenced the fresh and dry yield of basil, as well as the essential oil yield. Besides, recent evidence has also highlighted that A. brasilense affects the nutraceutical profile of hydroponically grown strawberry plants, induces secondary metabolism in oregano (Erika et al., 2010), and alters the root exudation profiles in cucumber (Pii et al., 2015c), thus suggesting a direct influence on the plant metabolome. Plant metabolomics emerges as a powerful approach to broaden knowledge about the biochemical profile of plant-based foods, giving the possibility to control and improve their nutritional value and to establish targeted strategies (Hall et al., 2008). In our study, the untargeted metabolomics analysis unraveled a distinct metabolic reprogramming of the two cultivars (i.e., Genovese and Red Rubin) of sweet basil in response to the different combinations of mineral supplementation (S/N) and bioinoculation with A. brasilense. In fact, the two cultivars showed peculiar metabolic responses, confirming the primary influence of the genetic background on the production of bioactive compounds in hydroponically grown basil plants, as previously reported (Salas-Pérez et al., 2018). These results pointed out the pivotal role of the selected genotype in shaping the profile of health-promoting compounds in basil. Besides the genetic background, several other factors could alter the basil composition, such as the growing conditions and agronomic practices (Corrado et al., 2020). Notably, the nutritional value of plant-based food is usually correlated with the accumulation of secondary metabolites, which are largely modulated by environmental conditions (Rouphael and Kyriacou, 2018). In fact, secondary metabolites represent an important part of the human diet. For instance, phenolic compounds are considered powerful antioxidants protecting against oxidative damage (Lin et al., 2016). Likewise, plant terpenes include essential vitamers for humans as well as important health-promoting compounds such as squalene or carotenoids (Tetali, 2019).
Similarly, a large number of alkaloids display antimicrobial, antihypertensive, and anti-neuroinflammatory activity, and some of them, in particular vinblastine and vincristine, have proved to be anticancer compounds (Almagro et al., 2015). Among the functional compounds found in basil leaves, terpenoids and phenylpropanoids have been reported among the most accumulated compounds (Rouphael and Kyriacou, 2018). Although it is known that resource-limited environments promote secondary metabolism, the carbon flux between primary and secondary pathways seems to be more complex (Fritz et al., 2006). Nitrogen is required not only in carbon metabolism for essential physiological processes but also in the biosynthesis of precursors for secondary metabolism (Fritz et al., 2006). Similarly, S is the precursor of many chemoprotective compounds and takes part in plant processes. Carbon-based secondary metabolites are postulated to be inversely correlated with nitrogen availability, and nitrogen-based secondary metabolites directly correlated (Heimler et al., 2017). In our study, the S- and N-based fertilizers induced a repression of secondary metabolism in the Genovese cultivar. Surprisingly, both the Genovese and Red Rubin cultivars presented a general down-accumulation of alkaloids, suggesting that the plant might promote either growth or differentiation under resource-rich environments, according to the growth/differentiation balance hypothesis (Rembialkowska, 2007). However, a complex alteration of carbon-based secondary metabolites took place in the treated Red Rubin plants, since phenylpropanoids and terpenoids were mostly accumulated. Although many authors have revealed that N deprivation enhances phenylpropanoid biosynthesis, as in the case of the Red Rubin cultivar, Heimler et al. (2017) pointed out that inorganic nutrition diversely modulates the specific classes of phenolics and, according to our results, this depends on the genotype. The down-accumulation observed in the treated plants of cv. Genovese agrees with previous studies summarized by Albornoz (2016), who revealed that N overfertilization implies a loss of crop quality by decreasing bioactive compounds such as ascorbic acid or phenylpropanoids (Albornoz, 2016). Consistently with the metabolomic analyses, the targeted quantification of caffeic acid did not reveal any specific alteration in the leaf accumulation pattern as affected by the treatments imposed, whereas rosmarinic acid (RA) was significantly down-accumulated in cv. Genovese plants. Nonetheless, according to Kiferle et al. (2011), the accumulation of RA in basil plants is maximized at the flowering stage; therefore, it is not surprising that the levels detected in our analyses are lower than the concentrations found in previous pieces of research (Kiferle et al., 2011). Interestingly, the treatments imposed also caused a significant accumulation of the amino acid Arg, which, besides being essential for the biosynthesis of proteins, also plays a pivotal role as a precursor of multiple secondary metabolites, polyamines, and nitric oxide. In addition, Arg is frequently used by plants as a major nitrogen storage form in seeds and other vegetative tissues. Its mobilization can indeed provide a readily usable N flux for different physiological processes (Todd and Gifford, 2002; Cánovas et al., 2007; Babst and Coleman, 2018). On the other hand, Azospirillum spp. might increase plant quality not only by assimilating atmospheric inorganic N but also by triggering signaling molecules.
In fact, Azospirillum spp. have been reported to synthesize phytohormones, including auxins, cytokinins, and gibberellins. Interestingly, Azospirillum also produces stress-related molecules like jasmonic acid, abscisic acid, ethylene, and nitric oxide (Fukami et al., 2018). This complex network of key molecules in plant response could explain the modulation of secondary metabolism we observed. The distinct response based on the genotype has been previously reported (Sasaki et al., 2010; Chamam et al., 2013). Moreover, the addition of inorganic elements (N/S) also modified the plant response to the microorganism, as reported by Sasaki et al. (2010), who observed that the plant response to Azospirillum inoculation depended on the N level. It has been widely assessed that the growth conditions (i.e., chemical and physical characteristics of the growth substrate, fertilization practices, and inoculation with PGPR) can affect the mineral composition of agricultural plants, sometimes producing an increase in the concentration of either essential or nonessential mineral elements with health-promoting effects for consumers (Tomasi et al., 2009; Pii et al., 2015a; Astolfi et al., 2018). Interestingly, the treatment with NO₃⁻, independently of the presence of A. brasilense, caused the accumulation of Ca in the leaves of cv. Genovese. Calcium plays an essential role as a macronutrient for plants and animals, and it has paramount importance from a structural and biochemical (i.e., signaling) point of view. Given that major staple crops are poor sources of Ca, it has been observed that the low dietary intake of this macronutrient in humans is epidemiologically linked to various diseases, which can have serious health consequences over time (Sharma et al., 2017). As expected, the fertilization with SO₄²⁻ determined a significant increase in the concentration of S in both cultivars. Indeed, the importance of S as a health-promoting element is represented by the fact that it is contained in the chemical structure of bioactive phytochemicals, for instance those related to the scavenging of oxidative stress (González-Morales et al., 2017). As previously demonstrated, increased S fertilization is connected with an enhanced ability of plants to take up and store Fe (Astolfi et al., 2006, 2018; Zuchi et al., 2012; Celletti et al., 2016), whose concentration was increased in both the Genovese and Red Rubin cultivars. Consistently, as previously observed (Nikolic et al., 2007), NO₃⁻ provision is a fundamental prerequisite for an efficient reduction and uptake of Fe in dicot plants, thus leading to an increased microelement concentration in the leaves of both basil cultivars. Yet, the inoculation with A. brasilense, which was shown to be effective in enhancing the Fe content in cucumber and maize plants (Pii et al., 2015c), did not display any influence on Fe uptake and allocation in basil plants, further underlining the species-specific response of plants to the inoculation with the PGPR. On the other hand, the SO₄²⁻ fertilization and the inoculation with A. brasilense caused an increase in the concentration of Cu in cv. Red Rubin, which can result in toxicity for plants above a certain threshold and can even be antinutritional for human consumption (Brunetto et al., 2016). However, according to the European Commission directives, the Cu concentrations detected in the basil leaves in our experimental model are not considered harmful to human health.
Similarly, fertilization with N sources might lead to the accumulation of NO₃⁻ in the edible parts of plants, which is demonstrated to have negative effects for consumers (Santamaria, 2006). Nonetheless, despite the increase in NO₃⁻ observed, especially in cv. Red Rubin, the concentrations detected cannot be considered toxic, since they are lower than those reported in the literature (Chang et al., 2013). In conclusion, our results underline that genotype was the main factor differentiating the plant response to the treatments in terms of biomass production and nutraceutical compound accumulation. As an example, the increase in the conditionally essential amino acid Arg and the essential amino acid Met, as well as the elicitation of carotenoids and phenolics like phenylpropanoids and phenolic acids, are worth considering. Even though both genotypes modulated the metabolism of unsaturated fatty acids, flavonoids, alkaloids, and several terpene derivatives, which are compounds well known for their role in human health and nutrition, the Red Rubin cultivar showed the most positive effect in terms of nutritional value. This impact was more remarkable after A. brasilense inoculation. Regardless of the genotype considered, the possibility to modulate the profile of functional compounds, as well as to elicit the accumulation of essential mineral nutrients in the leaves of basil plants, offers promising perspectives concerning the functional role of basil and, possibly, of related herb crops. Nonetheless, it is important to consider that the basil response was strongly dependent on the specific treatment considered, in a cultivar-dependent manner, rather than exhibiting a generalized modulation of the phytochemical profile.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

AUTHOR CONTRIBUTIONS

SC and YP designed the study. SK, MM, LL, BM-M, VB, FV, and YP performed the experiments. SC, YP, MT, TM, LL, BM-M, VB, and SK analyzed and discussed the data. YP, LL, TM, SC, BM-M, and VB wrote the manuscript. All authors contributed to the article and approved the submitted version.

FUNDING

The research was supported by grants from the Free University of Bolzano (TN2071).

ACKNOWLEDGMENTS

We thank the "Romeo ed Enrica Invernizzi" foundation for its kind support to the metabolomic facility.

SUPPLEMENTARY MATERIAL

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpls.2020.596000/full#supplementary-material. The fold-change-based heat map was used to build hierarchical clusters (linkage rule: Ward; distance: Euclidean).
Modeling inflation rates and exchange rates in Ghana: application of multivariate GARCH models

This paper was aimed at investigating the volatility and conditional relationships among inflation rates, exchange rates and interest rates, as well as constructing models using multivariate GARCH DCC and BEKK models, using Ghana data from January 1990 to December 2013. The study revealed that the cumulative depreciation of the cedi to the US dollar from 1990 to 2013 is 7,010.2% and the yearly weighted depreciation of the cedi to the US dollar for the period is 20.4%. There was evidence that stable inflation rates do not imply that exchange rates and interest rates are expected to be stable. Rather, when the cedi performs well on the forex, inflation rates and interest rates react positively and become stable in the long run. The BEKK model is robust for modelling and forecasting the volatility of inflation rates, exchange rates and interest rates. The DCC model is robust for modelling the conditional and unconditional correlations among inflation rates, exchange rates and interest rates. The BEKK model, which forecasted high exchange rate volatility for the year 2014, is very robust for modelling the exchange rates in Ghana. The mean equation of the DCC model is also robust for forecasting inflation rates in Ghana.

Introduction

When the general level of prices is relatively stable, the uncertainties of time-related activities such as investment diminish. This helps to promote full employment and strong economic growth. When price stability is achieved and maintained, monetary policy makers have done their job well (Sobel et al. 2006). Conceivably, one of the most important responsibilities of every government is fostering a healthy economy, which benefits all its citizens. The government, through its ability to tax, spend and control the money supply, attempts to promote full employment, price stability and economic growth. The importance of price stability is also emphasized in the Maastricht agreement, which defined the framework for a single European currency, the euro, and identified price stability as the main objective of the new European Central Bank (McEachern 2006). Deflation can spell doom for an economy; that is, it weakens consumer demand for goods and services, as households are likely not to spend, believing that prices will continue to fall. This means that businesses as well as government may be unable to pay debts, which could result in retrenchment. Emphasizing this point, Lagarde, the Managing Director of the IMF, cautioned the euro area in April 2014 that a prolonged period of "low inflation" or deflation can suppress demand and output, and overturn growth and jobs. According to Goldberg and Knetter (1997), exchange rate pass-through is the percentage change in local currency import prices resulting from a one percent change in the exchange rate between the exporting and importing countries. Exchange rate pass-through, therefore, is the effect (positive or negative) of exchange rates on import and export prices, consumer prices or inflation, investments, as well as trade volumes. Engel and Rogers (1996) established that crossing the US-Canada border can considerably raise relative price volatility and that exchange rate fluctuations explain about one-third of the volatility increase; that is, the US-Canada border is an important determinant of relative price volatility even after making due allowance for the role of distance.
Parsley and Wei (2001) confirmed previous findings that crossing national borders adds significantly to price dispersion. The demand for and supply of money are the key determinants of exchange rates. Interest rate parity is an important concept that explains the equilibrium state of the relationship between the interest rates and exchange rate of two countries. The foreign exchange market is in equilibrium when deposits of all currencies offer the same expected rate of return. The condition that the expected returns on deposits of any two currencies are equal when measured in the same currency is called the interest parity condition. It implies that potential holders of foreign currency deposits view them all as equally desirable assets, provided their expected rates of return are the same. Given that the expected return on, say, US dollar deposits is 4 percent greater than that on Ghana cedi deposits, all things being equal, no one will be willing to continue holding Ghana cedi deposits, and holders of Ghana cedi deposits will be trying to sell them for US dollar deposits. There will therefore be an excess supply of Ghana cedi deposits and an excess demand for US dollar deposits in the foreign exchange market (Krugman et al. 2012). An important theory of the relationship between the inflation rate and the interest rate is the Fisher effect, sometimes referred to as the Fisher hypothesis, due to Irving Fisher. Fisher showed mathematically that the nominal interest rate is equal to the real interest rate plus the expected (predicted) inflation rate. The Fisher effect simply explains, for example, that if the nominal interest rate is, say, 50 percent for a given period, and the predicted inflation rate during that same period is 20 percent, then the real interest rate is 30 percent. The movement in short-term interest rates primarily reflects fluctuations in expected inflation, which in effect has a predictive ability for future inflation (Mishkin and Simon 1995). The primary objective of the Central Bank of Ghana is to maintain stability in the general level of prices (Bank of Ghana Act 2002). Price stability is, therefore, one of the most important indicators of the health of a nation's economy. It must be noted that price stability alone might not be enough for a healthy economy. Several studies have been conducted on modelling inflation rates in Ghana, and the majority of these used models with the constant variance assumption. Although Mbeah-Baiden (2013) used non-constant variance models to model inflation rates in Ghana, his work only considered a univariate analysis of inflation rates. In the developed countries, where a number of researchers have modelled financial data series using multivariate generalized autoregressive conditional heteroscedastic (MGARCH) models, none has modelled the co-movements of inflation rates, exchange rates and interest rates. The MGARCH models have not been explored enough on Ghanaian data and, to a very large extent, on African data. It must be noted that Atta-Mensah and Bawumia (2003) used a vector error correction forecasting model for Ghana and concluded that the growth rate, broad money supply (M2+) and depreciation of the exchange rate are the main drivers of higher inflation. The main objective of the study is to investigate the volatility and conditional relationships of inflation, exchange and interest rates and to construct models using the multivariate GARCH BEKK (Baba, Engle, Kraft and Kroner) and DCC (Dynamic Conditional Correlation) models.
A researcher can apply all these models to a data series, and the best model is chosen based on the performance of the model using a criterion. According to (Doan: RATS Handbook for ARCH/GARCH and Volatility Models. pp: 38. Evanston, United States: Estima, Unpublished Draft Book), the application of BEKK and DCC in modelling the conditional variance generally achieves similar results, and the difference is negligible.

Data and methodology

The monthly inflation rates, average monthly exchange rates (cedi to US dollar) and interest rates (lending rate to the public) in Ghana spanning the period January 1990 to December 2013 were used for the study. This means that a total of 288 data points were considered for each variable. The sources of data were the Ghana Statistical Service (GSS) and Ghana Commercial Bank (GCB). The data were analyzed using multivariate GARCH DCC and BEKK models. The procedure most often used in the model estimation involves the maximization of a likelihood function constructed on the assumption of independently and identically distributed standardized residuals. According to Engle and Sheppard (2001), analyzing and understanding how the univariate GARCH works is fundamental for the study of the Dynamic Conditional Correlation multivariate GARCH model. The DCC model is a nonlinear combination of univariate GARCH models, and its matrix structure is based on how the univariate GARCH(1,1) process works. Suppose that the stochastic process {x_t}, t = 1, …, T, denotes the returns during a specific time period, where x_t is the return observed at time t. Assume, for instance, that the model for a return is given as x_t = μ_t + ε_t, where μ_t = E(x_t | λ_{t−1}) denotes the conditional expectation of the return series, ε_t is the conditional error and λ_{t−1} = σ(x_s : s ≤ t − 1) represents the sigma field (information set) generated by the values of the return up to time t − 1. Suppose that the conditional error is the conditional standard deviation of the returns, h_t^{1/2} = Var(x_t | λ_{t−1})^{1/2}, times an independently and identically normally distributed stochastic variable y_t with zero mean and unit variance. Note that h_t and y_t are independent for all time t, so that ε_t = h_t^{1/2} y_t ∼ N(0, h_t). Lastly, assume that the conditional expectation μ_t = 0, which implies that x_t = h_t^{1/2} y_t and x_t | λ_{t−1} ∼ N(0, h_t). Conditioning in economic and financial models is mostly stated as the regression of a variable's present values on the same variable's past values, as indicated in the GARCH(p,q) model proposed by Bollerslev (1986), given in equation (1):

h_t = φ + Σ_{i=1}^{p} α_i ε²_{t−i} + Σ_{j=1}^{q} β_j h_{t−j},   (1)

with φ ≥ 0; α_i ≥ 0 for i = 1, 2, 3, …, p; β_j ≥ 0 for j = 1, 2, 3, …, q.

The GARCH(p,q) model consists of three terms: the constant φ; the moving average term Σ α_i ε²_{t−i}, which is the sum of the p previous lags of squared innovations multiplied by the assigned weight α_i for each lagged squared innovation; and the autoregressive term Σ β_j h_{t−j}, which is the sum of the q previous lagged variances multiplied by the assigned weight β_j for each lagged variance. Since the variance is non-negative by definition, the process {h_t}, t = 0, 1, 2, …, must also be non-negative valued.
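To make the recursion in equation (1) concrete, here is a minimal sketch of a GARCH(1,1) variance filter and its Gaussian log-likelihood in Python; the parameter values and the simulated return series are placeholders (the paper itself used RATS for estimation).

```python
import numpy as np

def garch11_filter(x, phi, alpha, beta):
    """Conditional variances h_t = phi + alpha*x_{t-1}^2 + beta*h_{t-1}."""
    h = np.empty_like(x)
    h[0] = x.var()                      # a common initialization choice
    for t in range(1, len(x)):
        h[t] = phi + alpha * x[t - 1] ** 2 + beta * h[t - 1]
    return h

def garch11_loglik(x, phi, alpha, beta):
    """Gaussian log-likelihood of a zero-mean GARCH(1,1)."""
    h = garch11_filter(x, phi, alpha, beta)
    return -0.5 * np.sum(np.log(2 * np.pi) + np.log(h) + x ** 2 / h)

rng = np.random.default_rng(0)
returns = rng.standard_normal(288) * 0.05    # placeholder for 288 monthly observations
print(garch11_loglik(returns, phi=1e-4, alpha=0.08, beta=0.90))
```

In practice the three parameters would be chosen by maximizing this log-likelihood numerically, which is what the BFGS routine mentioned later in the paper does.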
Baba, Engle, Kraft and Kroner (BEKK) model

To ensure positive definiteness, a new parameterization of the conditional variance matrix H_t was defined by (Baba, Engle, Kraft, Kroner: Multivariate simultaneous generalized ARCH at the University of California, San Diego, unpublished) and became known as the BEKK model, which is viewed as another restricted version of the VEC model. It achieves the positive definiteness of the conditional covariance by formulating the model in a way that this property is implied by the model structure. The form of the BEKK model is:

H_t = C'C + Σ_{k=1}^{K} Σ_{i=1}^{p} A'_{ki} ε_{t−i} ε'_{t−i} A_{ki} + Σ_{k=1}^{K} Σ_{j=1}^{q} B'_{kj} H_{t−j} B_{kj},   (2)

where the A_{kj} and B_{kj} are N × N parameter matrices, and C is a lower triangular matrix. The purpose of decomposing the constant term in equation (2) into a product of two triangular matrices is to guarantee the positive semi-definiteness of H_t. Whenever K > 1, an identification problem arises, because more than one parameterization can produce the same representation of the model. The first-order BEKK model is given as:

H_t = C'C + A' ε_{t−1} ε'_{t−1} A + B' H_{t−1} B.   (3)

The BEKK model specified in equation (3) also has a diagonal form, obtained by assuming that the matrices A and B are diagonal; it is a restricted version of the DVEC model. The most restricted version of the diagonal BEKK is the scalar BEKK, with A = aI and B = bI, where a and b are scalars. Estimation of the BEKK model still involves heavy computation due to several matrix transpositions. The number of parameters of a complete BEKK model is (p + q)KN² + N(N + 1)/2, whereas in the diagonal BEKK the number of parameters reduces to (p + q)KN + N(N + 1)/2. The BEKK form is not linear in the parameters, which makes the convergence of the model difficult. However, the model structure automatically guarantees the positive definiteness of H_t. Under the overall consideration, it is assumed that p = q = K = 1 in applications of the BEKK form. The difference between the results of the BEKK model and the DCC model is highly negligible.

The Dynamic Conditional Correlation (DCC) model

To extend the assumptions of the univariate GARCH to the multivariate case, suppose that we have n assets in a portfolio and the return vector is x_t = (x_1t, x_2t, x_3t, …, x_nt)'. Furthermore, assume that the conditional returns are normally distributed with zero mean and conditional covariance matrix H_t. This implies that x_t | λ_{t−1} ∼ N(0, H_t). In the DCC model, the covariance matrix is decomposed into H_t ≡ D_t X_t D_t, where D_t is the diagonal matrix of time-varying standard deviations from the univariate GARCH processes, D_t = diag(h_1t^{1/2}, …, h_nt^{1/2}). The specification of the elements in the D_t matrix is not restricted to the GARCH(p,q) described in equation (1); any GARCH process with normally distributed errors which meets the requirements for suitable stationarity and non-negativity conditions may be used. The number of lags for each asset and series does not need to be the same either. However, X_t is the conditional correlation matrix of the standardized disturbances ε_t, where ε_t = D_t^{−1} x_t. Thus, the conditional correlation is the conditional covariance between the standardized disturbances. By the definition of the covariance matrix, H_t has to be positive definite. Since H_t is a quadratic form based on X_t, it follows from basics in linear algebra that X_t has to be positive definite to ensure that H_t is positive definite. By the definition of the conditional correlation matrix, all its elements have to be equal to or less than one.
To ensure that all of these requirements are met, X_t is decomposed as

X_t = Q_t^{*−1} Q_t Q_t^{*−1},   (4)

where Q_t is a positive definite matrix defining the structure of the dynamics and Q_t^{*−1} rescales the elements in Q_t to ensure that |q_ij| ≤ 1. This implies that Q_t^{*−1} is simply the inverse of the diagonal matrix formed from the square roots of the diagonal elements of Q_t. Suppose that Q_t has the following dynamics:

Q_t = (1 − α − β) Q̄ + α ε_{t−1} ε'_{t−1} + β Q_{t−1},   (5)

where Q̄ is the unconditional covariance of the standardized disturbances {ε_t}, and α and β are scalars. The dynamic structure defined above is the simplest multivariate GARCH, called scalar GARCH. A major caveat of this structure is that all correlations obey the same dynamics. The structure can be extended to the general DCC(P,Q) form:

Q_t = (1 − Σ_{i=1}^{P} α_i − Σ_{j=1}^{Q} β_j) Q̄ + Σ_{i=1}^{P} α_i ε_{t−i} ε'_{t−i} + Σ_{j=1}^{Q} β_j Q_{t−j}.   (6)

In this work, only the DCC(1,1) will be utilized.

Constraints of the DCC(1,1) model

If the covariance matrix is not positive definite, then it is impossible to invert the covariance matrix, which is essential in portfolio optimization. To guarantee a positive definite H_t for all t, simple conditions on the parameters are imposed. Firstly, the conditions for the univariate GARCH model have to be satisfied. Similar conditions on the dynamic correlations are also required, namely: β ≥ 0 and α ≥ 0, α + β < 1, and Q_0 has to be positive definite.

Estimation of the DCC(1,1) model

In order to estimate the parameters of H_t, that is θ = (θ_1, θ_2), the following log-likelihood function ℓ can be used when the errors are assumed to be multivariate normally distributed:

ℓ(θ) = −(1/2) Σ_{t=1}^{T} [ n log(2π) + log|H_t| + x'_t H_t^{−1} x_t ].   (7)

The parameters in the DCC(1,1) model specified in equation (6) can be divided into two groups: θ_1 = (φ_1, α_1, β_1, φ_2, α_2, β_2, …, φ_n, α_n, β_n), the parameters of the univariate GARCH processes, and θ_2 = (α, β), the parameters of the correlation dynamics. The estimation follows the following two steps.

Step one

The X_t matrix in the log-likelihood function is replaced with the identity matrix I_n, which gives the following quasi-log-likelihood function specified in equation (8):

QL_1(θ_1) = −(1/2) Σ_{t=1}^{T} Σ_{i=1}^{n} [ log(2π) + log h_it + x²_it / h_it ].   (8)

It is obvious that this quasi-likelihood function is the sum of the univariate GARCH log-likelihood functions. Therefore, one can use the algorithm to estimate the parameters θ_1 = (φ_1, α_1, β_1, φ_2, α_2, β_2, …, φ_n, α_n, β_n) for each univariate GARCH process. Since the variance h_it for asset i = 1, 2, 3, …, n is estimated for t = 1, …, T, the elements of the D_t matrix over the same time period are also estimated.

Step two

In the second step, the correctly specified log-likelihood function is used to estimate θ_2 = (α, β), given the estimated parameters θ̂_1 = (φ̂_1, α̂_1, β̂_1, φ̂_2, α̂_2, β̂_2, …, φ̂_n, α̂_n, β̂_n) from step one:

ℓ_2(θ_2 | θ̂_1) = −(1/2) Σ_{t=1}^{T} [ n log(2π) + 2 log|D_t| + log|X_t| + ε'_t X_t^{−1} ε_t ].   (9)

From equation (9), the first two terms in the log-likelihood are constants given θ̂_1; therefore, only the two last terms, which involve X_t, are of interest in the maximization. Hence we obtain:

θ̂_2 = arg max { −(1/2) Σ_{t=1}^{T} [ log|X_t| + ε'_t X_t^{−1} ε_t ] }.   (10)

Variance targeting is used in the dynamic structure, and therefore Q̄ is estimated by Q̂ = (1/T) Σ_{t=1}^{T} ε_t ε'_t; and since the conditional correlation matrix is also the covariance matrix of the standardized residuals, X̄ is estimated by the same quantity.

Figure 1 shows the time series plot for inflation rates, exchange rates and interest rates from 1990 to 2013, based on R output. The inflation rates and interest rates plots exhibit a downward trend with fluctuations; contrarily, the exchange rates plot exhibits a continuous upward trend. The movements of the plots indicate that the mean and the variance of the exchange rates data are changing over time. This means that the mean is not constant and the variance is unstable. Figure 2 displays the time series plot of the natural logarithm of inflation rates, exchange rates and interest rates from January 1990 to December 2013, produced using RATS 8.3.
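Before turning to the results, here is a minimal Python sketch of the scalar DCC(1,1) correlation recursion in equations (4)-(5), with variance targeting as described above; the standardized residuals and the parameter values are placeholders, not the estimates reported in the paper.

```python
import numpy as np

def dcc_correlations(eps, alpha, beta):
    """Scalar DCC(1,1): X_t from Q_t = (1-a-b)*Qbar + a*e e' + b*Q_{t-1}.
    eps: (T, n) matrix of standardized residuals from step-one GARCH fits."""
    T, n = eps.shape
    Qbar = eps.T @ eps / T                # variance targeting: unconditional covariance
    Q = Qbar.copy()
    X = np.empty((T, n, n))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))     # the Q*^{-1} rescaling of equation (4)
        X[t] = Q * np.outer(d, d)         # conditional correlation matrix at time t
        e = eps[t][:, None]
        Q = (1 - alpha - beta) * Qbar + alpha * (e @ e.T) + beta * Q
    return X

# usage with placeholder residuals for the three series (inflation, FX, interest)
rng = np.random.default_rng(1)
eps = rng.standard_normal((288, 3))
corrs = dcc_correlations(eps, alpha=0.05, beta=0.90)
print(corrs[-1])                          # latest conditional correlation matrix
```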
As Figure 2 shows, the time series plots appear to be stable after the transformation. This suggests that the mean and variance are stable over time, implying that the variables achieve stationarity after taking the natural logarithm.

Results and discussion

The cumulative depreciation of the cedi to the US dollar from 1990 to 2013 is 7,010.2%, and the yearly weighted depreciation of the cedi to the US dollar is 20.4%, using the formulae in equations (11) and (12), respectively, where n is the number of years.

Multivariate-GARCH modeling

Multivariate GARCH models are estimated by the quasi maximum likelihood technique. Regression Analysis of Time Series (RATS) 8.3 is widely used software for estimating MGARCH models as a result of its flexible maximum likelihood estimation capabilities, and it has advantages over many other software packages in estimating MGARCH models. The optimization algorithm used for the maximum likelihood estimation in RATS is BFGS, proposed independently by Broyden (1970), Fletcher (1970), Goldfarb (1970) and Shanno (1970) (Estima 2013). This optimization algorithm uses iteration routines to obtain the coefficient estimates. As such, convergence is assumed to occur if the change in the coefficient being estimated is less than the specified criterion of 0.00001. RATS was used in estimating the MGARCH models for this study. Table 1 shows both the DCC and BEKK models with respective p-values of 0.99659 and 0.9869. The p-values are greater than the significance level of 0.05; hence it can be concluded that there is no multivariate ARCH effect. This also suggests that the conditional distribution of the white noise is near Gaussian.

DCC model

The estimated DCC model's unconditional covariance matrix is given in equation (12). Figure 3 displays the conditional correlation between inflation rates and exchange rates from 1990 to 2013. The plot indicates that there is a positive conditional association between inflation and exchange rates. This implies that, as the local currency, the cedi, depreciates against the US dollar, the general level of prices in Ghana also increases. The relationship was relatively stronger in 1991 and 1993 compared to 1992, the year an election was held. The years 1995, 1996 and 1997, as well as the years between 2003 and 2009, exhibited relatively weak correlation. In contrast, the period between 2000 and 2002 exhibited the strongest positive relationship. Depreciation of the cedi means that the cedi buys less than the US dollar; therefore, imports are more expensive and exports are cheaper. The positive relationship between exchange rate depreciation and inflation rates means that imported goods and services become more expensive, and this affects the health of the economy, especially because Ghana depends heavily on imported goods. The relationship exhibited is a disincentive to cutting costs for companies whose raw materials are imported; this implies that depreciation causes cost-push inflation in the long run. Table 2 displays a seven-month out-of-sample forecast of inflation rates for 2014 using the mean equation of the DCC model. The forecasts, compared to the observed rates declared by the Ghana Statistical Service, indicate that there is evidence that the mean equation of the DCC model is robust in predicting inflation rates in the short to medium term.
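The paper's exact weighting formulae (equations (11) and (12)) are not reproduced in this extract, so as a rough cross-check only, the sketch below computes cumulative depreciation and a simple geometric-mean annual rate; the end-to-start ratio is implied by the cumulative figure quoted above, while the geometric averaging is an assumption, not necessarily the paper's weighting.

```python
# Rough cross-check of the depreciation figures quoted above. The exact
# formulae (equations (11) and (12)) were lost in extraction, so the
# geometric-mean rate here is an assumption, not the paper's weighting.
start_rate, end_rate = 1.0, 71.102     # index: cumulative 7,010.2% implies a 71.102x ratio
years = 24                             # 1990-2013 inclusive of the span

cumulative = (end_rate / start_rate - 1) * 100
annual_geom = ((end_rate / start_rate) ** (1 / years) - 1) * 100

print(f"cumulative depreciation: {cumulative:.1f}%")   # ~7010.2%
print(f"geometric mean annual:   {annual_geom:.1f}%")  # ~19.4%, near the 20.4% quoted
```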
The widening of the forecast errors in Table 2 over time is an indication that general prices of goods and services react to the depreciation of the cedi, or volatility in the exchange rate, in the long run. These forecasts are based on the mean equation of the fitted DCC model.

BEKK model

The parameters A, B and C of the BEKK model were estimated, and volatility forecasts were produced for the next twelve months. The exchange rates forecast indicates that there is likely to be instability in the exchange rate in 2014. This implies that the cedi is likely to deviate abnormally in 2014; that is, the cedi is expected to depreciate very fast in 2014. The inflation rates forecast suggests that, in 2014, general prices of goods and services will increase but at a low rate; interest rates will also increase at the same pace. The forecasts suggest economic instability in Ghana in 2014. The shocks in the graph suggest that inflation and interest rates react to exchange rate volatility in the medium to long term. As at the time of completing this research work, the cedi had depreciated 31.8% as of June 5, 2014, per information available on the Bank of Ghana website, a record high within the last decade (Bank of Ghana, 2014). The rate of 31.8% suggests that inflation rates could escalate further if the cedi is not stabilized by the last quarter of 2014. Certainly, it is evident that the BEKK model is robust in modeling volatility in the depreciation of the cedi against other foreign currencies. Figure 5 displays the time series plot of inflation rate volatility from 1990 to 2013. There is evidence of relatively mild volatility in 2004 and 2008. Volatility in inflation rates during the study period could be found in 1993, 1995, 2003, 2004, 2005, 2007, 2008, 2010, 2011 and 2012. It must be noted that the highest shock was in 2002. The risk in inflation means that there is evidence of abrupt deviation from the mean of the general level of prices of goods and services. The volatility exhibited during these periods implies that the expected inflation deviated from the observed mean value. Inflation volatility measures the uncertainty in expected inflation. Volatility of any kind is likely to deteriorate the prospects of a healthy economy; if volatility is high, investors become uncertain about their future investments, since there is a high inflation risk, and therefore demand a high return. High volatility in inflation leads to a high cost of borrowing, which directly affects investment negatively and, to a large extent, the health of the economy, leading to ineffective planning. The trend in the plots indicates that inflation volatility trails exchange rate volatility; this suggests that inflation reacts to exchange rate volatility in the long run. Figure 6 is a time series plot of exchange rate volatility from 1990 to 2013. The period between 2002 and 2012 exhibited relatively mild deviation in the mean exchange rate, suggesting stability. Much of the turbulence could be observed between 1990 and 2001, as well as in 2013. The plot seems to suggest that the exchange rate exhibits shocks in the year after general presidential and parliamentary elections are held in Ghana. It also suggests that the cedi depreciates fast during the first quarter of every year. The shocks in the exchange rate impact negatively on the economy of Ghana, since they weaken the Ghanaian cedi against the US dollar.
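As an illustration of how the first-order BEKK recursion in equation (3) generates conditional covariance paths like those plotted in Figures 5 and 6, the following Python sketch filters a residual series; the parameter matrices here are hypothetical, not the estimated ones.

```python
import numpy as np

def bekk11_filter(X, C, A, B):
    """BEKK(1,1): H_t = C'C + A' e_{t-1} e_{t-1}' A + B' H_{t-1} B.
    X: (T, n) residual matrix; C lower triangular; A, B: (n, n)."""
    T, n = X.shape
    H = np.empty((T, n, n))
    H[0] = np.cov(X.T)                      # initialize at the sample covariance
    for t in range(1, T):
        e = X[t - 1][:, None]
        H[t] = C.T @ C + A.T @ (e @ e.T) @ A + B.T @ H[t - 1] @ B
    return H

# hypothetical parameters for the three series; the quadratic-form structure
# keeps every H_t positive semi-definite by construction
n = 3
C = np.tril(0.05 * np.eye(n))
A = 0.25 * np.eye(n)
B = 0.90 * np.eye(n)
rng = np.random.default_rng(2)
H = bekk11_filter(rng.standard_normal((288, n)) * 0.05, C, A, B)
print(np.sqrt(H[-1].diagonal()))            # latest conditional volatilities
```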
Volatility in the exchange rate results in high prices of imported goods and services and reduces investor confidence in the economy. This implies that there will be uncertainty in the expectation of how the cedi will perform on the forex; as such, many are likely to speculate, and the public reacts by demanding more dollars; all things being equal, the cedi will depreciate further. The gross domestic product, employment and the overall health of the economy of Ghana will be affected negatively as a result.

Vector error correction model and Granger causality

The Vector Error Correction Model and Granger causality tests are used to examine the cause and effect relationships among the inflation rate, exchange rate and interest rate. The Johansen test of cointegration among the variables, run in STATA 12, rejected the null hypothesis that there is no cointegration, a precondition for running the Vector Error Correction Model, as shown in Table 3. The Vector Error Correction Model evidenced long-run and short-run causality among the variables after the null hypotheses of both "no long-run causality" and "no short-run causality" were rejected. Pair-wise Granger-causality tests at the 5% significance level show that exchange rates Granger-cause inflation rates, but the converse does not hold. Similarly, inflation rates Granger-cause interest rates, but the reverse does not hold.

Conclusions

Multivariate GARCH DCC and BEKK models were fitted to the variances of the data. Both models passed the diagnostic tests. The mean equation of the DCC model was used to predict the expected inflation rate and proved to be robust in the short to medium term; similarly, the BEKK model was used to predict the expected exchange rate volatility. These predictions suggest that inflation rates are expected to increase at a very slow rate in 2014. Also, the forecast of exchange rate volatility suggested that there is a very high risk of abrupt depreciation of the cedi against the US dollar. This implies that the rates of inflation, as well as interest rates, are likely to react in the long run to the expected volatility in exchange rates for the year 2014. There was generally positive conditional and unconditional correlation between inflation rates and exchange rates, inflation rates and interest rates, as well as exchange rates and interest rates. This implies that there is some evidence that when the general prices of goods and services are stable, interest rates are expected to be stable and possibly low. The relationship between inflation and exchange rates implies that stable inflation means that the cedi depreciated against the dollar at a low rate. There was evidence that the cedi depreciated cumulatively against the US dollar by 7,010.2% from 1990 to 2013, with a weighted annual average depreciation of 20.4%. The volatility experienced in inflation, exchange and interest rates in the study, to a large extent, did not occur in election years. It is therefore factually inaccurate to assert that during election years the cedi depreciates faster against the US dollar. The evidence rather suggests that there tends to be volatility in these economic variables in the periods after elections were held, rather than during election years, and also during the first quarter of every year.
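As a side illustration of the pair-wise Granger-causality testing reported above (the paper itself used STATA 12), a minimal Python sketch might be the following; the file and column names are hypothetical placeholders.

```python
# Illustrative pair-wise Granger-causality test (the paper used STATA 12).
# File and column names are hypothetical; statsmodels tests whether the
# second column Granger-causes the first.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

df = pd.read_csv("ghana_monthly.csv")            # columns: inflation, fx_rate
data = df[["inflation", "fx_rate"]].dropna()

# H0: fx_rate does not Granger-cause inflation; small p-values reject H0
results = grangercausalitytests(data, maxlag=4)
```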
It was also evident that the fact that inflation rates were stable does not mean that exchange rates and interest rates are expected to be stable. Rather, when the cedi performs well on the forex, inflation and interest rates react positively in the long run. All things being equal, this reaction trickles down to all aspects of the economy, thus occasioning improved standards of living. The economy of Ghana reacts positively in most instances when the cedi performs strongly on the forex market. Such performance was evidenced in 2003, when the cedi depreciated against the US dollar at an average of 3.81%; during that same year the Ghana Stock Exchange recorded returns on investments of about 155%, the highest since its inception. The success of the cedi during this year could be traced to foreign inflows of HIPC benefits into the country. This implies that the health of the economy of Ghana is highly dependent on the strength of the cedi against foreign currencies such as the US dollar, the Euro and the British pound sterling.

Recommendations
Recommendations are made for both policy formulation and areas of further research based on the findings of the study. To begin with, it is recommended that policy makers use multivariate GARCH models to study the dynamics of economic and financial data. The DCC model proved to be robust in modeling the correlation among inflation, exchange and interest rates, and the mean equation of the model was robust for modelling inflation rates in the short to medium term. Similarly, the BEKK model was found to be robust in modeling volatility as well as forecasting. Secondly, the research has revealed that the health of Ghana's economy is highly dependent on the strength of the Ghanaian currency, the cedi, against foreign currencies, since the country is import dependent; as such, there must be a national agenda to increase foreign inflows and introduce a policy aimed at Exchange Rate Targeting (ERT). The forecasts are also an indication that policy makers and industry players can effectively plan to curb uncertainties in the Ghanaian economy if these models are used. Thirdly, there must be a national consensus to reduce imports into the country by improving production and, in the long run, increasing non-traditional exports. The government could adopt a policy, through consensus with the private sector (services), to list on the Ghana Stock Exchange to attract Ghanaians to own shares; tax incentives could be used as a stimulus package. This is to ensure that 100% of profits are not repatriated. Government could also dialogue with the private sector and propose a policy that mandates foreign-owned companies to delay about 50% of the repatriation of their profits in the economy of Ghana for about two years. Government must also adopt a policy to reduce the number of State delegations to international events abroad to about 20%; this could also reduce the pressure on the Ghanaian cedi. Lastly, a study into the dynamics of interest rates, stock returns and exchange rates is recommended. Other economic indicators such as money supply, balance of payments and budget deficit could be added to the inflation rate, exchange rate and interest rate for modelling using multivariate GARCH models. Modelling the volatility in the five most traded currencies in Ghana is also recommended. Impulse analysis of inflation rates, exchange rates and interest rates is suggested as well.
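To make the cointegration-and-causality workflow above concrete, here is a minimal sketch using Python's statsmodels (the study itself used STATA 12). The file name, column names, and lag orders are hypothetical and purely illustrative.

```python
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

# Hypothetical monthly series for 1990-2013; names are placeholders.
df = pd.read_csv("ghana_rates.csv",               # hypothetical file
                 usecols=["inflation", "exchange", "interest"])

# Johansen test: "no cointegration" (rank 0) is rejected when the trace
# statistic exceeds its 5% critical value (middle column of cvt).
jres = coint_johansen(df, det_order=0, k_ar_diff=2)
print("trace statistics:", jres.lr1)
print("5% critical values:", jres.cvt[:, 1])

# VECM with one cointegrating relation; alpha holds the long-run
# (error-correction) adjustment coefficients.
vecm_res = VECM(df, k_ar_diff=2, coint_rank=1, deterministic="ci").fit()
print("adjustment coefficients:\n", vecm_res.alpha)

# Pair-wise Granger causality at the 5% level.  By convention the test
# asks whether the SECOND column Granger-causes the FIRST.
grangercausalitytests(df[["inflation", "exchange"]], maxlag=4)  # FX -> CPI?
grangercausalitytests(df[["interest", "inflation"]], maxlag=4)  # CPI -> IR?
```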
2016-05-12T22:15:10.714Z
2015-02-24T00:00:00.000
{ "year": 2015, "sha1": "3255ef46bbe562caa3749c9b4fe9e6c5353d92ff", "oa_license": "CCBY", "oa_url": "https://springerplus.springeropen.com/track/pdf/10.1186/s40064-015-0837-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a55b565632976bbd5ec6c62678816a54b358aec8", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics", "Medicine" ] }
233711904
pes2o/s2orc
v3-fos-license
Measurement and Uniform Formulation of Soil-Water Characteristic Curve for Compacted Loess Soil with Different Dry Densities
To investigate the effect of dry density on the soil-water characteristics of compacted soil, loess used as filling in the land-making project of the Yan'an new district was collected and compacted to five initial dry densities of 1.40, 1.50, 1.60, 1.70, and 1.80 g/cm³, respectively. The soil-water characteristic curves (SWCCs) of all specimens in the range of 0-10⁵ kPa were measured using the filter paper method. The measured data were fitted using the Fredlund and Xing equation for each initial dry density. The SWCCs have obvious differences in the suction range below 100 kPa and overlap when the suction is higher. This suggests that the SWCC of compacted soil is independent of the initial dry density in the high suction range, but a correlation with the initial dry density exists in the low suction range. Therefore, correlation functions of the parameters in the Fredlund and Xing equation with respect to the initial dry density were regressed, respectively. By substituting these functions into the Fredlund and Xing equation, the state surface function θw − ψ − ρd was obtained; it can reflect the SWCCs of all densities of the filled soil and supports further investigation of the unsaturated behavior of compacted soil.

Introduction
In the Chinese loess plateau, loess formations are used not only as natural foundation materials for buildings and infrastructure but also as compacted geomaterials for building foundations. With the rapid development of modern urban areas in the Chinese loess plateau in recent years, land-making projects, whose objective is to flatten hills and fill gullies to form a smooth spread of land for the development of urban sites or industrial plants, have been carried out for nearly a decade [1,2]. The largest such project is the new districts of Yan'an City, in the central region of China, as shown in Figure 1. Loess gullies are generally V-shaped valleys; therefore, the thickness of the gully fillings is highly variable and can vary from a few meters up to 100 m. The gully fillings comprise loess soil compacted in layers with thicknesses of several meters. Investigation has revealed that the fillings are a highly heterogeneous soil whose compactness is difficult to control according to the requirements of land-making projects (Figure 2). Therefore, the prediction of post-filling settlement and uneven settlement is a key issue for researchers, managers, and investors. According to in situ monitoring data obtained at the Yan'an new district, the main consolidation of the filled soil is accomplished just after the filling, and the post-filling settlement is minimal but continuous. The consolidation during filling is caused by the load of the overlaid soil. The filling is an unsaturated soil; however, surface water infiltration and groundwater penetration may change the soil-water characteristics and induce a large amount of post-filling settlement [3,4]. For unsaturated soil, the soil-water characteristic curve (SWCC) is important for investigating the soil behavior. Hence, this study focused on the measurement and unification of the SWCCs obtained for compacted loess soil with different dry densities. The SWCC describes the relationship between the matric suction and the gravimetric water content, volumetric water content, or saturation and plays a crucial role in modeling the behavior of unsaturated soils [5,6].
In fact, the measurement of the SWCC is the most important test required to introduce unsaturated soil mechanics into geotechnical engineering practice [7]. To date, the measurement of matric suction has been developed using the tensiometer method, axial translation technology (pressure plate instrument, Tempe instrument, centrifuge, etc.), and the filter paper method [7]. Almost all suction measurement methods have shortcomings, involving issues such as the range of application, cost, reliability, and feasibility [8]. Axial translation technology is a conventional method of measuring soil matric suction in the range of 1-1.5 MPa. However, Baker and Frydman have argued that this method changes the energy state of the soil water by applying external pressure (up to 1 MPa) to the soil, which in turn changes the state and produces an unfavorable result compared with the natural field condition (101.3 kPa) [9]. The commonly used tensiometer is limited in its suction range, to approximately 0-100 kPa [10]. Various studies have shown that the filter paper method (FPM) is a relatively simple, low-cost, and time-efficient method that can be used to measure suction in the range of 10-300,000 kPa [11]. Since Hansen used blotting paper to determine the osmotic potential of sugar solutions at the same vapor pressure as the experimental soil specimens [12], the FPM has gradually been used to measure the soil-water potential. Additionally, Whatman No. 42 filter paper is widely used to measure soil suction [13-15] and is considered the most practical of the three abovementioned methods. Moreover, this method does not affect the soil structure and can be used under atmospheric conditions with acceptable accuracy. In this study, the fillings in the filled area of the Yan'an new district were collected, and five groups of specimens with different initial dry densities in the range of the filled loess were generated using a static pressure instrument. The matric suction was measured using a modified FPM (10-10⁵ kPa) and the corresponding SWCCs were obtained. The data measured for each compacted density were fitted using the Fredlund and Xing equation [16] to obtain the respective SWCCs, and the SWCCs of the compacted loess were then unified as a function of the water content in terms of matric suction and dry density, which can be used to investigate geotechnical problems related to compacted loess throughout the entire density and water content range.

Soil Material for Tests
The sampling site is located in the new district of Yan'an City, in the central region of China, where a new flat city site has been constructed by cutting and filling the hilly landscape (Figure 1). The deep gullies are filled with layered loess up to 100 m thick, while the thickness of the cut section is up to 45 m. According to the laboratory test data for 19 boreholes in the filled soil (Figure 2), the dry density of the compacted loess samples is mostly in the range of 1.40-1.80 g/cm³, accounting for 97% of the samples. This suggests that the dry density is generally far lower than the maximum value of 1.80 g/cm³ obtained by standard compaction in the laboratory and much lower than the required dry density of 1.70 g/cm³. The water content along the vertical section exhibits great differences and tends to deviate from the optimal value (16%) in practice.
The filling was collected at the middle of the excavated profile to adequately represent the average components of the compacted loess. The particle size, plastic limit, liquid limit, and specific gravity were measured first, and the results are presented in Table 1. Additionally, as shown in Figure 3, the silt fraction (0.002-0.05 mm) makes up 66.0% of the total amount, and the clay fraction (<0.002 mm) accounts for 6.6%. The coefficient of uniformity is 8.5 and the coefficient of curvature is 1.4, so the soil is classified as well-graded cohesive soil (Cu > 5 and Cc ∈ [1,3]). According to the Unified Soil Classification System [17], the loess is classified as silty clay. Previous studies have reported that the initial water content greatly influences the soil's pore structure [18]. For compacted soil with the same void ratio, the macropores increase as the initial water content decreases [19]. However, there is a homogeneous distribution of pores in soil compacted at the optimal water content [20]. For practical projects, the optimal water content is difficult to achieve, and the water content is typically suboptimal, as shown in Figure 2. Therefore, to fully consider the soil-water characteristics of compacted soil with more macropores, the minimum average moisture content (10%) in the filling area was selected for compaction. Subsequently, the filling was crushed and remolded into five groups according to the target initial dry densities. The target initial dry densities are 1.40, 1.50, 1.60, 1.70, and 1.80 g/cm³ and cover 97.0% of the filled loess specimens at the site. The corresponding void ratios and bulk densities are listed in Table 1. The water content of all remolded specimens was controlled at 10%, and the initial dry density error was controlled within ±0.01 g/cm³.

Method of Measuring SWCC
The principle of the FPM is the transport of pore water in the soil to wet or dry filter paper. The matric suction is measured when the soil specimen contacts the filter paper; when there is no direct contact between them, the total suction (equal to the sum of the matric suction and the osmotic suction) is measured instead [6,21]. In this study, the contact method was used to measure the matric suction, and the oven-drying method was used to measure the water content. Once the specimens were remolded, they were put into sealed desiccators to maintain the original water content and keep them free from disturbance. The specimens for each initial dry density were set as a group and used to measure the SWCC. The measuring process was carried out as follows:

(1) Filter paper preparation: ordinary filter paper was cut to the same size as the specimen surface, with a diameter of 61.8 mm (equal to the size of the specimens), as shown in Figure 4(a), and used to cover the surface of the specimens to protect them against falling slag and to protect the test filter paper against contamination. The test filter paper was slightly smaller than the specimen or the ordinary filter paper, with a diameter of 42.5 mm. Both the ordinary filter paper and the measuring filter paper were soaked in formalin solution (2%) and then dried to prevent the growth of microorganisms before use [22]. Notably, the weighing of the filter paper was carried out in a clean aluminum box with a cover.
(2) Specimen preparation: first, using a static compression method at a water content of 10%, the specimens were remolded to a definite initial dry density in metal rings with a diameter of 61.8 mm and a height of 20 mm. Subsequently, the set of specimens was dried in an oven at 120°C for 24 hours. This drying method may cause changes to the soil microstructure [23,24], and freeze-drying is probably the most appropriate way of avoiding this issue when testing the wetting SWCC [25,26]. Next, water was dropped onto the specimens to bring them to target water contents from very low (2%) to saturated, at increasing intervals of 1% or 2%. Two parallel specimens with the same water content were prepared.

(3) Water balance in the specimens: as soon as water was dropped onto a specimen, the process of wrapping the specimen began, as shown in Figure 4. Specimens with varying water content were placed in the moisturizer for 48 hours and were packed after being removed. First, the initial weight of the soil specimens and filter paper was measured, and then the filter paper was nimbly placed between two identical specimens, as shown in Figure 4(b). Subsequently, the upper and lower specimens were fixed using waterproof tape and tightly wrapped in plastic wrap to create a simple waterproof environment, as shown in Figure 4(c). Aluminum foil was used as a protective and shaping material to further wrap the specimens [27-29], and melted paraffin wax was evenly brushed onto the aluminum foil such that the entire specimen was sealed with wax, as shown in Figure 4(d). Finally, the specimens were placed in the incubator (20°C) for soil-water balancing after being labeled.

(4) Weighing of filter paper and specimens: the specimens were removed after seven days of balancing in the incubator [9,22]. After unpacking the wrapped specimens, the wet soil specimens and the wet test filter paper were weighed, respectively, and both were then dried and weighed again. Notably, the filter paper must be quickly put into an aluminum box using tweezers and the box covered for weighing, which is a key operation to ensure the accuracy of the data. In this process, an ordinary balance (accuracy of 0.01 g) was used to weigh the soil specimens and a Mettler analytical balance (accuracy of 0.0001 g) was used to weigh the filter paper, owing to the different precision requirements, as shown in Figure 5. Moreover, different ovens were used to avoid contamination.

(5) Calculating the matric suction and water content of the specimens: from the above operations, the weight of the test filter paper before and after balancing can be obtained for each specimen, and the matric suction can be calculated using the calibration equation (1) given in [22], where ψ represents the matric suction (kPa) and wfp represents the gravimetric water content change of the filter paper (%).

Regression for SWCCs with Measured Data
Among the many empirical models (such as Gardner, Brooks and Corey, van Genuchten, McKee, and Bumb), the Fredlund et al. model [16] is widely used throughout the entire suction range (equation (2)) and has a correction factor C(ψ) (equation (3)) that always makes the suction at zero water content approach 10⁶ kPa:

θw = C(ψ) · θs / {ln[e + (ψ/a)^n]}^m, (2)

C(ψ) = 1 − ln(1 + ψ/ψr) / ln(1 + 10⁶/ψr). (3)

In equation (2), parameter a is the air entry value of the soil; n is the inflection rate in the transient zone, and as this value increases, the inflection rate also increases, which reflects the pore size uniformity; m controls the shape in the low suction range.
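As an illustration of this fitting step, the following is a minimal sketch of fitting equations (2)-(3) to suction/water-content pairs with SciPy's curve_fit. The data points below are hypothetical stand-ins for the measured values (which are given in Table 2 and Figure 7), and ψr is fixed at 100 kPa, consistent with the residual suction reported later.

```python
import numpy as np
from scipy.optimize import curve_fit

def fredlund_xing(psi, a, n, m, theta_s, psi_r=100.0):
    """Fredlund and Xing SWCC, equations (2)-(3)."""
    c = 1.0 - np.log(1.0 + psi / psi_r) / np.log(1.0 + 1.0e6 / psi_r)
    return c * theta_s / np.log(np.e + (psi / a) ** n) ** m

# Hypothetical (suction [kPa], gravimetric water content [%]) pairs
# in the 1-100 kPa fitting range; NOT the paper's measurements.
psi_meas = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
theta_meas = np.array([33.0, 31.0, 27.0, 22.0, 17.0, 13.0, 11.0])

# Fit a (air-entry value), n (inflection rate), m (low-suction shape)
# and the saturated water content theta_s; psi_r keeps its default.
popt, _ = curve_fit(fredlund_xing, psi_meas, theta_meas,
                    p0=[5.0, 2.0, 0.5, 34.0],
                    bounds=([0.1, 0.1, 0.01, 10.0],
                            [100.0, 20.0, 5.0, 60.0]))
a, n, m, theta_s = popt
print(f"a={a:.2f} kPa, n={n:.2f}, m={m:.2f}, theta_s={theta_s:.1f}%")
```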
As shown in Figure 6, the fitting curves intersect at approximately 100 kPa when using equation (2): the data measured for different initial dry densities overlap in the suction range greater than 100 kPa, although the curves there are more scattered. This suggests that the SWCCs in this range are independent of the initial dry density and exhibit a linear relationship in the semilogarithmic coordinate system. Other studies have also confirmed that curves for the same type of soil exhibit similar distribution characteristics under different compaction pressures [9], which is called the "broom" shape [30,31]. It is therefore reasonable to use piecewise fitting: equation (2) is used for the range of 1-100 kPa, and linear fitting is used above 100 kPa. All measured data obtained in the range of 1-100 kPa were fitted using equation (2). The parameters are listed in Table 2 and the fitting curves are shown in Figure 7. As shown in Figure 7, according to the shape and physical context of the curves, each SWCC along the wetting curve can be divided into three stages: the residual stage, the transient stage, and the boundary effect stage [5]. During the wetting process, the transient stage and the boundary stage are divided by the air occlusion value (AOV), while the transient stage and the residual stage are divided by the residual value (RV). The AOV and RV of the SWCC can be determined using the method proposed by Vanapalli et al. [5], and these values are also presented in Table 2. In the wetting process, the first stage is the residual stage: the water molecules are adsorbed by the soil particles to form adsorbed water, and the pore space is essentially occupied by the gas phase while the suction remains high. The water content of the soil changes slowly, the behavior of the specimens tends to be consistent, and the RVs of all specimens are approximately 100 kPa at a water content of approximately 11%. During the transient stage, capillarity becomes dominant and the water begins to occupy the smallest pores in the soil, which leads to the escape of air. Thus, the soil is in a three-phase state (solid, liquid, and air), involving what Fredlund refers to as the contractile skin, or air-water interface [32]. Most engineering problems related to unsaturated soils occur at this stage. As the water content increases, the suction decreases rapidly. The behavior at this stage is closely associated with the initial dry density, and the length of the path is positively correlated with the initial dry density. When the suction falls below the AOV, the soil enters the boundary stage and is nearly saturated. At this point the water phase is continuous and the gas phase is suspended in the water in the form of closed bubbles. As can be seen from the curves, specimen No. 1 (1.4 g/cm³) has an AOV of 5 kPa at a corresponding water content of 33%, while specimen No. 5 (1.8 g/cm³) has the highest AOV, 11.5 kPa, corresponding to a water content of 18%. The equi-suction lines (Figure 8) also show that the lines below 100 kPa have a high degree of inclination, and that specimens with lower initial dry density have a higher moisture content under the same suction. The lines above 100 kPa are approximately vertical, which means that in the high suction range the suction is independent of the initial dry density or void ratio of the compacted soil.
According to Ridley and Delage (1998), Delage (2007), and Baker (2009), within the scope of dominant adsorption, suction should depend only on the gravimetric water content of remolded clay, and in the high suction range there is no correlation between the dry density or void ratio of the soil and the suction [9,33,34]. Generally, as the initial dry density increases, the path of the transient stage becomes shorter while that of the boundary stage becomes longer, but the difference in the residual region is not obvious.

Unified SWCC of Compacted Loess
By plotting the values of the fitting parameters a, m, and n listed in Table 2 against the initial dry density, as shown in Figure 9, it can be seen that each of them has a good correlation with the initial dry density. The functions of the parameters versus the initial dry density were obtained by regression analysis: a is exponentially correlated with the initial dry density, as expressed by equation (4), while n and m are linearly correlated with the initial dry density, as expressed by equations (5) and (6), for example:

n = 11.014ρd − 13.695. (5)

Additionally, there is a linear relationship between the saturated water content and the initial dry density (equation (7)), and the matric suction in the residual stage tends to be consistent. It is therefore reasonable to consider that all specimens have the same residual matric suction, approximately equal to 100 kPa. By substituting these functions into the Fredlund and Xing equation, the unified function (equation (8)) can be determined and used as an empirical equation reflecting the whole series of SWCCs, where θw is the gravimetric water content (%), θs is the saturated gravimetric water content (%), ψ is the matric suction (kPa), ψr is the residual matric suction (kPa), and e ≈ 2.7. Figure 10 shows the fitting surface (θw − ψ − ρd); the five sets of experimental data are in good agreement with it. It can be intuitively understood that, on this surface, the boundary effect area is enlarged, the transient area is reduced (the slope becomes steeper), and the residual area remains the same as the initial dry density increases. Using equation (8), as long as the dry density is given, a corresponding SWCC can be determined on this fitting surface.

Summary and Conclusion
This study measured the SWCCs of compacted loess soil with different densities using the FPM, and a unified function based on the Fredlund and Xing equation was regressed. It is expected that the findings of this study will provide a foundation for further research on unsaturated soil. (1) In this paper, the steps for measuring the SWCC using the contact FPM are described in detail. With this procedure, the SWCC in the range of 1-10,000 kPa, or even higher, can be determined quickly and accurately; thus, this method deserves wide adoption. (2) The influence of the initial dry density on the soil-water retention behavior was analyzed. The SWCCs overlapped when the moisture content was less than 11%, which corresponds to a suction above approximately 100 kPa. For the same type of soil, compaction only changed the soil-water retention behavior when the suction was less than 100 kPa, and the retention capacity decreased as the compactness increased, whereas the soil-water characteristics essentially remained the same when the suction was greater than 100 kPa. (3) A "nonlinear + linear" piecewise fitting method was used to fit the measured data.
The equation (θw − ψ − ρd) reflecting the SWCCs of the entire filling area was obtained through regression analysis, which is significant for practical engineering. Additionally, this is the main equation used to calculate the seepage field change and settlement under infiltration conditions in the filling area; a numerical sketch of evaluating such a surface is given below.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
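As referenced above, here is a minimal numerical sketch of evaluating a θw-ψ-ρd state surface of the kind given by equation (8). Only the n(ρd) regression is the one reported in equation (5); the a(ρd), m(ρd), and θs(ρd) functions below follow the functional forms stated in the text (exponential for a, linear for m and θs) but use hypothetical coefficients, since equations (4), (6), and (7) are not reproduced above.

```python
import numpy as np

def swcc_surface(psi, rho_d):
    """Evaluate theta_w(psi, rho_d) in the spirit of equation (8).

    Only n(rho_d) is the regression reported in the paper (equation (5));
    a(rho_d), m(rho_d) and theta_s(rho_d) use the stated functional forms
    with HYPOTHETICAL coefficients standing in for equations (4), (6), (7).
    """
    psi_r = 100.0                           # residual suction (kPa)
    n = 11.014 * rho_d - 13.695             # equation (5), from the paper
    a = 0.5 * np.exp(1.5 * rho_d)           # hypothetical stand-in for (4)
    m = 0.9 - 0.3 * rho_d                   # hypothetical stand-in for (6)
    theta_s = 60.0 - 15.0 * rho_d           # hypothetical stand-in for (7)
    c = 1.0 - np.log(1.0 + psi / psi_r) / np.log(1.0 + 1.0e6 / psi_r)
    return c * theta_s / np.log(np.e + (psi / a) ** n) ** m

# One SWCC per target dry density over the 1-100 kPa fitting range
psi = np.logspace(0, 2, 50)
curves = {rho_d: swcc_surface(psi, rho_d)
          for rho_d in (1.40, 1.50, 1.60, 1.70, 1.80)}
```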
2021-05-05T00:09:03.915Z
2021-03-17T00:00:00.000
{ "year": 2021, "sha1": "a3a16036ef91e3db923eea09cd18680543870a6a", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ace/2021/6689680.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2cfee3fc8105339e3aa0b879f107a68e35c3fdef", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Materials Science" ] }
54588600
pes2o/s2orc
v3-fos-license
Current Strategies in the Management of Irritable Bowel Syndrome
Irritable bowel syndrome (IBS) is one of the most studied and discussed problems in the field of gastroenterology, yet it often remains perplexing to both clinicians and patients. Some of the apprehension comes from a void of objective data that defines a diagnosis in most disorders. This level of comfort is not appreciated in the evaluation of IBS, where the art of medicine and subjective impressions are the cornerstones of proper assessment. Though this paper focuses on management, a review of pathophysiology and specific guidelines establishing a diagnosis of IBS will be addressed.

Introduction
Irritable bowel syndrome (IBS) is one of the most studied and discussed problems in the field of gastroenterology, yet it often remains perplexing to both clinicians and patients. Some of the apprehension comes from a void of objective data that defines a diagnosis in most disorders. This level of comfort is not appreciated in the evaluation of IBS, where the art of medicine and subjective impressions are the cornerstones of proper assessment. Though this paper focuses on management, a review of pathophysiology and specific guidelines establishing a diagnosis of IBS will be addressed.

Epidemiology
IBS is a very common disorder seen in physicians' offices. It is thought that 28% of referrals to gastroenterologists are for IBS, and perhaps 12% of primary care visits involve evaluation or treatment of this condition [1]. It is more often seen in females, who have a prevalence at least 2 times greater than that of males [2,3]. Women often have more severe symptoms [4], and some studies suggest they seek health care more often [5]. Though IBS is more often seen in patients under 45, it is commonly diagnosed in elderly patients as well. Additionally, it is frequently seen in adolescents and in students in their high school and college years [6].

Clinical Features
IBS features many symptoms, but the most salient of these is pain or a level of discomfort; without some degree of pain or discomfort, one cannot have IBS. The pain is often described as cramping or colicky and is usually centered in the lower quadrants, though any area of the abdomen may be involved. Often patients complain of additional GI symptoms such as pyrosis, dyspepsia, bloating, nausea, and early satiety. Female patients may mention esophageal symptoms such as chest discomfort or dysphagia more commonly than males. Symptoms may be precipitated by stress, and relief is often noted after defecation. Lack of nocturnal awakening with symptoms is a clue that the problem is IBS. Constitutional signs and symptoms such as night sweats, fevers, and intestinal blood loss suggest an organic illness. Weight loss is often associated with an organic illness but may be seen in IBS. Since eating may precipitate discomfort or urgency, some patients may lose weight by avoiding food. The presence of alarm symptoms (fevers, intestinal blood loss, weight loss) does not exclude IBS nor confirm organic disease [7,8]. Patients may have IBS and another illness occurring simultaneously. The next feature of IBS is perturbation of defecation. Patients may experience hard or fragmented stools with straining and decreased frequency, or loose and frequent bowel movements accompanied by urgency. Both groups can have a feeling of incomplete evacuation. The Bristol Stool Scale score is used as a guideline to determine the degree of constipation.
The scale is scored from 1-7, with a score of 1 representing hard, pellet-like stools, while watery movements are scored as 7 [9]. Other functional disorders seem to have a higher prevalence in patients with IBS and include migraine headaches, globus hystericus, fibromyalgia, chronic fatigue, mitral valve prolapse, and urinary bladder symptoms such as frequency or dysuria.

Pathophysiology
Several theories have been postulated to explain the causality of IBS, yet all have their limitations. Research has demonstrated high-amplitude propagating waves in IBS-D patients and prolonged sigmoid and rectal contractility in IBS-C patients [10,11]. Despite the demonstration that colon transit time may be altered in IBS [12,13], no one has proven that dysmotility is constant. The concept of visceral hypersensitivity is generally accepted as a leading factor in the development of IBS. Several proposed mechanisms have theorized the existence of silent nociceptors or spinal hyperactivity via up-regulation of neurotransmitters or nitric oxide. It was once postulated that hypersensitivity was limited to the GI tract, but other functional disorders such as fibromyalgia also feature hyperalgesia. Clinical research labs have shown multiple times that IBS patients have lower thresholds for pain than do healthy controls. Current data also show higher pain responses among IBS patients subjected to electrical stimulation [14-17]. Post-infectious IBS represents a subgroup of IBS-D, and some studies suggest an association with exposure to Campylobacter or Shigella, and less frequently Salmonella [18], though most often a pathogen is never identified. Antibiotics have been used with efficacy in the management of these patients. Objective findings in these patients have included the presence of lymphocytes and inflammatory cytokines in biopsy specimens [19,20]. Most physicians familiar with the care of functional patients will attest to the role of psychosocial factors in the pathogenesis of IBS. Anxiety and depressive disorders are clearly associated with flares of the condition, and in some patients may be the only precipitant. Various studies have looked at psychological issues and IBS, showing that more severe cases often have psychiatric diagnoses. Those patients also are more apt to have co-existing sleep disorders and fatigue, to be absent from work or school, and to seek medical attention [21-25]. Much work has examined a history of physical, sexual, or emotional abuse in patients with IBS and other functional disorders. Though direct causality cannot be inferred, there nevertheless is a high percentage of IBS patients with a history of abuse; some researchers have reported incidences as high as 40-50%, with the highest rates reported in women [26-29]. A final thought regarding a mechanism of disease in IBS involves neural pathways. Traditionally, attention has been focused on acetylcholine and serotonin. Studies have shown that patients with IBS have more serotonin receptors in the enteric mucosa than do controls [30]. Since serotonin plays roles in both pain perception and motility, it is hoped that greater understanding of these neurohormonal pathways will lead to better interventions. Research has also looked at other potential pathways. El-Salhy and colleagues have published several works on the roles that peptide YY (PYY) plays in IBS and other disorders. PYY is found in intestinal neuroendocrine cells, whose highest concentrations are in the rectum.
Stimulation of these cells by lipids, short-chain fatty acids, amino acids, glucose, bile salts, vagal stimulation, vasoactive intestinal peptide, cholecystokinin (CCK) and gastrin releases PYY, which affects GI motility and the absorption of water and electrolytes. It has been shown that in post-infectious IBS patients there is an increase in colonic PYY-producing cells and in cells that release serotonin, while in the duodenum there is an increase in cells that secrete CCK. In the general IBS population, by contrast, there are lower densities of cells that release somatostatin, serotonin, and CCK. These data suggest that PYY availability is determined by CCK and serotonin concentrations [31]. Beta-3 adrenoreceptors line the intestinal mucosa and, upon stimulation, release somatostatin from D cells found in the gastrointestinal mucosa. This peptide inhibits cholinergic contractions in the gut. The premise is that release of somatostatin will lower secretions, decrease diarrhea, and improve visceral analgesia [32].

Diagnosis
A careful history that covers the length and quality of symptoms, psychosocial issues (including a history of abuse), and relieving and exacerbating factors provides the basis for making an accurate diagnosis. The use of imaging studies, endoscopy, and pertinent labs depends upon the patient's age, infirmities, and the level of suspicion for organic disease. A minimal evaluation should include thyroid function testing, CBC, metabolic panel, celiac panel, and fecal occult blood. If gas and bloating are major complaints, then formal breath testing for bacterial overgrowth, fructose intolerance, and lactose intolerance is recommended because of the high incidence of these malabsorption syndromes [33,34]. For diarrhea that is more prominent than typically seen, or that awakens the patient from sleep, colonic biopsies for microscopic colitis should be considered. Noninvasive tests that can be helpful in distinguishing functional GI disorders from organic causes include fecal lactoferrin and fecal calprotectin; calprotectin measures neutrophils and lactoferrin measures a glycoprotein expressed on neutrophils. Patients with IBD, tumors, or GI infections will have abnormally high levels, compared with IBS patients or healthy controls, who will have normal levels [35]. The standard of diagnosis thus is based upon classical signs and symptoms and the necessary exclusion of organic illnesses in the differential diagnosis. IBS is defined by criteria established by a panel of experts who meet every few years in Rome to set diagnostic criteria for functional disorders. The current Rome III criteria [36] define IBS as recurrent abdominal pain or discomfort at least 3 days/month in the last 3 months with 2 or more of the following: 1. improvement with defecation; 2. onset associated with a change in frequency of stool; 3. onset associated with a change in form (appearance) of stool. These criteria must be met for the previous 3 months with an onset of symptoms at least 6 months prior to diagnosis. IBS is further subgrouped according to bowel pattern: 1) IBS-C, defined as hard or lumpy stools at least 25% of the time and loose or watery stools less than 25% of the time; 2) IBS-D, defined as loose or watery stools at least 25% of the time and hard or lumpy stools less than 25% of the time.

Management
Given the complex pathophysiology of IBS, it comes as no surprise that numerous treatment regimens have been developed to control symptoms.
Before embarking on pharmacotherapy, the foundation for success lies in a sound physician-patient relationship. Many patients come to the clinic after years of testing and without a formal diagnosis. Providing a definitive diagnosis often reduces a patient's anxiety and ameliorates the intensity of their symptoms. Reassurance gives the patient peace of mind and is a proven tool in the management of IBS [37,38]. It is very important to assure the patient of the diagnosis of IBS and to set realistic expectations for the planned therapy. The patient should know that the goals are a reduction in the frequency and intensity of symptoms, not a cure. This builds the patient's confidence, solidifies the doctor-patient bond [39], and reduces unnecessary calls and visits. In the next sections, the treatment of specific symptoms will be discussed.

Bloating
Much is said about diet, and each patient is unique in their responses to food. Some have no precipitants and others may note discomfort or urgency with even water. Those who are symptomatic owe their problem to an enhanced gastrocolic reflex. Fiber is often helpful despite a lack of good clinical data, but too much insoluble fiber can exacerbate symptoms [40], especially bloating. For patients who have malabsorption of fructose or lactose, an elimination diet can reduce symptoms of gas and bloating and serve as adjunctive therapy to standard IBS treatments. Avoidance of sorbitol and sugar alcohols and limitation of gas-producing vegetables such as cruciferous vegetables and legumes may be helpful. Carbonated beverages cause distention, and their elimination usually reduces bloating and discomfort. A recent study showed that reduction in the daily intake of fermentable oligosaccharides, disaccharides, monosaccharides, and polyols (sugar alcohols), the low-FODMAP diet, reduces flatulence, bloating, and discomfort [41].

Role of Pharmacotherapy
Pharmacotherapy based upon an understanding of the various pathophysiological parameters has greatly improved the management of IBS patients. Some agents are directed more toward analgesia and others toward improving the perturbation of defecation. A panel of experts reviewed the major studies on a variety of agents used to manage IBS. A grade of 1 is considered a strong recommendation and 2 a weak recommendation. Scores of A, B, and C correlate to strong, moderate, or weak evidence, respectively [42].

Pain
Antidepressants, especially tricyclics, have been used for years and have traditionally targeted the pain component [43,44]. The majority of studies have concluded that they are efficacious. As a class they have a rating of 1B. Desipramine appears to be better tolerated than others in the TCA class [45,46]. The limiting factors with TCAs are side effects, which include dryness, constipation, sleep disturbances, and rarely palpitations. To limit these adverse events, a starting dose of 10 mg each evening, titrating the dose every few days to 2 weeks, is recommended. Selective serotonin reuptake inhibitors have been shown to be beneficial in IBS-C patients since they may improve colonic transit time and modify discomfort [47-49]. As a group, both TCAs and SSRIs modulate the enteric nervous system with regard to motility and visceral hypersensitivity and alter the brain-gut axis. Anticholinergics and oil of peppermint have been used for years as antispasmodic agents, yet neither has undergone rigorous placebo-controlled randomized trials with the methodology required to reduce the placebo effect. The ACG position paper rates these agents as class 2C.
Oil of peppermint, which acts as a carminative, has been used in Asia, and peppermint candies have been a staple in American restaurants for decades to relieve the discomfort that follows indulgence. Since the relaxation effect is not unique to the small and large intestines, it may reduce the resting pressure of the lower esophageal sphincter and produce pyrosis and regurgitation. Investigators thus advocate an encapsulated preparation that bypasses the stomach and is released in the small bowel. Like the many anticholinergics available on the market, peppermint oil has anecdotally been efficacious and has a following worldwide [50].

Diarrhea
Research in recent years has targeted modification of stool frequency. Alosetron, a 1B drug, is FDA approved for women with severe IBS-D and is a 5HT3 antagonist. It also provides modest relief of pain. The pharmacology of this drug is based upon the knowledge that serotonin acts on motility, secretion, and visceral pain fibers. Several studies have shown alosetron to be superior to placebo in the reduction of pain, urgency, and frequency. Another pivotal trial showed the durability of the drug in a year-long study [51,52]. The recommended starting dose is 0.5 mg twice daily, with dose escalation to 1 mg twice daily if efficacy is not reached in a month. Though ischemic colitis is a concern, recent experience has shown it to be uncommon when patients and physicians are educated on the product. In fact, one small study showed that low-dose desipramine could be added to alosetron for pain control without producing adverse events [53]. Rifaximin is an oral antibiotic that is not absorbed and is selective for a variety of intestinal flora. It has been used in the treatment of small intestinal bacterial overgrowth [54]. Several quality studies have looked at rifaximin in the management of IBS. The major studies used rigorous methodology and intention-to-treat analyses. Furthermore, all patients randomized were chosen using strict Rome III criteria. This strategy appears best for IBS-D patients and/or those with significant bloating. The optimal doses appear to be 1.1 g to 1.2 g/day in divided doses for 10-14 days [55-57]. Some experts argue that these trials did not account for small intestinal bacterial overgrowth (SIBO), while others believe SIBO or dysbiosis is part of the spectrum of IBS. Rifaximin is a class 1B drug.

Constipation
Lubiprostone is FDA approved for both IBS-C and chronic constipation (both idiopathic and opiate-induced). The dosage for IBS-C is 8 mcg twice daily. To reduce nausea, its most common side effect, the medication should be taken with meals. The mechanism of action is unique among agents on the market: it is a selective C-2 chloride channel activator. By opening chloride channels into the lumen, sodium follows to neutralize the charge, bringing water with it into the lumen of the small bowel. Lubiprostone is thought to work by increasing colonic motility as the increased small intestinal volume flows into the large bowel. In clinical trials, 8 mcg twice daily versus placebo showed effectiveness over the course of a year. Compared with placebo, lubiprostone showed superiority with regard to global symptom relief, health concerns, and body image [58-60]. Lubiprostone is a class 1B agent. Linaclotide, a guanylate cyclase-C agonist approved for IBS-C, stimulates intestinal fluid secretion through activation of cGMP; the end result is a decrease in intestinal transit time [61,62]. Activation of cGMP also has an effect on intestinal nociceptors and provides pain relief [63,64].
The once-a-day administration (290 mcg before breakfast) and dual action make this an attractive addition to the armamentarium of agents used in the treatment of IBS-C. Many other agents have been used. One, tegaserod, a 5HT4 agonist with a rating of 1A used for IBS-C, was efficacious in improving spontaneous bowel movements [65], but after scrutiny arising from cardiac events it was withdrawn from the market. As with the use of anticholinergics for pain, laxatives have long been used for the constipation of IBS, with effective results in some patients. The methodology of the trials, however, has not met the standards of evidence-based practice, and thus laxatives are graded as 2C agents.

Non-pharmacological Therapies
In patients with anxiety and depression that appear to exacerbate symptoms, psychotherapy has been proven by evidence-based medicine to be effective as first-line or second-line therapy [66]. Many studies have shown overall improvement in all IBS symptoms and superiority to standard care practices. Psychotherapy, cognitive and behavioral therapy, and hypnotherapy are efficacious in relieving symptoms, but the same has not been shown for relaxation therapy [67-69]. The difficulty in analyzing these approaches is the inability to blind the studies. The rating for these forms of treatment is nevertheless 1C. Given the expense and accessibility of the various forms of psychotherapy, these treatments should be utilized in patients with underlying psychiatric illnesses or stressors that contribute to their symptoms [70]. Finally, there is a novel treatment: serum-derived bovine immunoglobulin protein isolate, a medical food as labeled by the FDA. In a randomized, double-blinded, placebo-controlled study it was superior to placebo in reducing the symptoms of IBS. It is thought to play a role in promoting tight cellular junctions, reducing mucosal inflammatory cytokines, and decreasing intestinal secretion [71]. Many published works have shown varying degrees of efficacy with regard to probiotics. It is difficult to make recommendations on their use owing to the variability seen in the results of these studies. Of note is the fact that many studies have looked at different strains, so comparative data are not available. A review of the trials suggests that probiotics offer at least modest benefits in treating IBS patients. The rating for these products as a group is 2C. Bifidobacteria appear to be superior to Lactobacillus as a sole agent or in combination with other species [72-74]. Given their safety profiles and lack of adverse effects, it is almost certain their popularity will rise as adjunctive therapy. Multiple drugs are in various phases of clinical trials. A promising new agent for IBS-D, eluxadoline, has undergone rigorous investigation. It combines agonism of mu-opioid receptors with antagonism of delta-opioid receptors. In a randomized, double-blinded, placebo-controlled study, eluxadoline proved superior to placebo in the reduction of both pain and diarrhea and had minimal side effects [75].

Summary
IBS is an intestinal disorder of significant social and economic burden, which for many leads to a poor quality of life. Effective management begins with a strong patient-physician relationship. Treatment should include attention to psychosocial issues and should direct pharmacotherapy, as well as psychotherapy when appropriate, toward specific symptoms.
Some agents are geared toward the reduction of pain, and others are used to normalize abnormal stool patterns. The use of several treatments together is often necessary.
2019-03-12T13:05:41.656Z
2014-03-09T00:00:00.000
{ "year": 2014, "sha1": "7ed7e290383fc1f3bb6d8b4c96b83be99733eedc", "oa_license": "CCBY", "oa_url": "https://www.omicsonline.org/open-access/current-strategies-in-the-management-of-irritable-bowel-syndrome-2165-8048.S1-006.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "69453b087e0b19612973fd8952c9f4242998e598", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
28773793
pes2o/s2orc
v3-fos-license
V-Band Fade Dynamics Characteristics Analysis in Tropical Region
Problem statement: Satellite operations at V-band in tropical and equatorial regions are constrained as a result of attenuation from rain. Approach: Statistics from 20 consecutive months of V-band terrestrial link signal attenuation measurements in Malaysia are presented in this analysis. Such information is considered very pertinent for Earth-space communication link design and can be used as an initial groundwork plan by engineers as well as researchers. Results: The measured statistics were then appropriately scaled up to fit an Earth-satellite link. The statistics were then broken down to examine diurnal variations. Characteristics of rain events such as fade duration and inter-fade interval are presented. Conclusion/Recommendations: It is essential to identify such characteristics for the design and implementation of future fade countermeasure techniques on satellite links.

INTRODUCTION
The high attenuation experienced in these areas is caused by significantly higher rainfall rates compared with other parts of the world. Due to the intensification in the use of the frequency spectrum, new and existing satellite operators in the tropics may soon have no alternative but to move up to frequencies as high as the V-band (Brussaard and Watson, 1995). Nonetheless, the effects of rainfall on satellite signals at such high frequencies in the tropical region have not yet been fully detailed. Additional measured data, research, experiments, and investigations are considered essential in order to obtain more insight into this issue. The databases made available to this research from a measurement campaign in Malaysia provide an invaluable opportunity to examine the V-band propagation characteristics in the tropical region. Measurements acquired from a microwave link should be able to offer some initial impressions of the V-band link's characteristics in the absence of an actual satellite-Earth link. The information is deemed very critical for future Earth-space communication link design and can be exploited as a preliminary groundwork plan by researchers as well as engineers.

Experimental setup and background: A 38 GHz experimental Ericsson MINI-LINK was installed at Universiti Teknologi Malaysia (UTM), Johor Bahru, Malaysia in 1998. 0.6 m diameter antennas with horizontal polarization, covered by radomes, were assembled. The antennas were separated 300 m from one another. The transmitter was installed on a tower located at 103.38°E and 1.33°N. The receiver was positioned on a rooftop at 103.35°E and 1.33°N. The line of sight of this setup was at approximately 18 m Above Sea Level (ASL). The Automatic Gain Control (AGC) output level of the RF unit was interfaced with a PC through a data acquisition card. During clear-sky conditions data were sampled every 10 min, and during rainy conditions they were sampled every second. The signal measurement campaign is shown in Fig. 1.

MATERIALS AND METHODS
Fade duration statistics: Fade duration statistics are usually presented as conditional distributions of the number of fades exceeding certain durations, given that a specified fade threshold has been exceeded.
According to the ITU-R model (ISH, 2005), the mean duration D0 of the lognormal distribution of the fraction of fading time, given that the attenuation is greater than A, is expressed as a function of: F = frequency (GHz); φ = elevation angle (degrees); A = attenuation threshold (dB). This representation provides information on the number of outages and the system availability due to propagation on a link. The average fade duration for a given fade level can be calculated from the accumulated time at that fade level divided by the number of events that occurred at that fade level.

Inter-fade interval: The inter-fade interval is the duration from when the signal recovers above a fade threshold until it next crosses below the same threshold. Since the interval between fades of a given level is the complement of the fade duration, the average inter-fade interval can also be found. Figure 2 illustrates the features commonly used in characterizing precipitation events.

Cumulative distribution for diurnal variability: In designing communications links, cumulative distributions are the most effective presentation format for long-term data. For instance, the link availability or exceedance at a point can be determined from its annual cumulative distribution (Badron et al., 2009). Thus, appropriate rain-induced attenuation margins can be built into the system to achieve the desired link performance. The attenuations exceeded for 99.7, 99.9 and 99.99% of the average year are reported in Table 1 and compared with data from the nearest stations in terms of climate and location. Note the remarkably low year-to-year variability of the annual cumulative distributions for the two years over the 0.1% exceedance range detected in Malaysia. The data measured in Abakaliki, Nigeria (Omotosho and Oluwafemi, 2009) and in Rio de Janeiro, Brazil (Pontes et al., 2005), both in the equatorial region, show considerably more fading than Malaysia. With the variety of rain climates to be characterized, considerably more data are thus required for the development of satisfactory physically based prediction models. These cumulative distributions are very important since they provide information concerning the estimation of the rain-induced attenuation margins required for a given link reliability (Panagopoulos et al., 2004).

RESULTS AND DISCUSSION
Fade duration analysis: The statistics of fade duration for the period of the campaign are shown in Fig. 3. The selected fade thresholds presented are 5, 8, 10, 13, 15, 18, 20, 23, 25 and 27 dB. The fade duration is measured for 1 s and above. From these statistics, there were 23 occasions within the twenty consecutive months when a 10 dB fade depth lasted longer than 600 s (equivalent to 10 min).
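Statistics like these can be generated directly from a regularly sampled attenuation record. Below is a minimal sketch of extracting fade durations and inter-fade intervals at one threshold from a 1 s sampled series (matching the rainy-condition sampling rate); the synthetic trace and 10 dB threshold are illustrative, not the campaign data. The mean of the returned durations corresponds to the "accumulated time divided by number of events" definition above.

```python
import numpy as np

def fade_statistics(attenuation, threshold, dt=1.0):
    """Return fade durations and inter-fade intervals (seconds) for one
    threshold, given attenuation samples taken every dt seconds."""
    above = attenuation >= threshold
    edges = np.diff(above.astype(int))     # +1 at a fade onset, -1 at a fade end
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    # Discard fades cut off at either end of the record
    if above[0]:
        ends = ends[1:]
    if above[-1]:
        starts = starts[:-1]
    durations = (ends - starts) * dt
    inter_fade = (starts[1:] - ends[:-1]) * dt
    return durations, inter_fade

# Illustrative synthetic attenuation trace (dB), sampled at 1 s for 1 hour
rng = np.random.default_rng(1)
atten = rng.gamma(2.0, 2.0, size=3600)

dur, gap = fade_statistics(atten, threshold=10.0)
print("fades:", dur.size,
      "mean fade duration (s):", dur.mean() if dur.size else 0.0,
      "mean inter-fade interval (s):", gap.mean() if gap.size else 0.0)
```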
Such analyses of the fade duration for different fade thresholds observed on the 38 GHz link can be used to evaluate the effects of rain-induced attenuation on the operational aspects of various satellite services, such as telecommunication services and television broadcasting. Attenuation and rain fade characteristics are important findings that have to be addressed in determining the effect of signal fading during rainfall events. This is especially true for the V-band satellite-Earth link services to be launched in the tropical region. Further analysis of the fade duration for various fade depths observed at the experimental station can be used to study the effects of rain-induced attenuation on the operational aspects of various Earth-space satellite services such as TV broadcasting, VSAT and other telecommunication services.

Average fade duration and inter-fade interval: The corresponding average fade duration and inter-fade interval as functions of fade depth on a tropical satellite-Earth link are given in Fig. 4. From the analyses of the twenty months of data, the average fade duration is found to be almost independent of the fade depth. The average value is obtained by dividing the total measured time by the number of fades equal to or exceeding each fade level (with duration equal to or exceeding 1 s). Similar findings have also been reported at several locations in Spain by Garcia del Pino et al. (2006). The frequency of fading increases with the decrease of the fade threshold, as can be observed in Fig. 4, with a noticeable relationship between these two parameters. It is suggested that the average inter-fade duration is independent of the fade value because the larger time percentage for which a lower fade threshold is exceeded is distributed among a larger number of fades, whereas the lower time percentage at a higher fade threshold is distributed among fewer fades. An average fade duration of approximately 2.4 min for most fade thresholds can be observed in Fig. 4. The spread of fade duration around the average value is expected to increase with the decrease of the fade threshold because the region is subjected to extreme and severe widespread events. The results at the lower fade thresholds are very much dependent on the integration time applied, since the events are influenced by scintillation spikes. A shorter integration time will detect a higher number of occurrences of a specific fade threshold than a longer one will. The variation of the inter-fade interval with fade depth is also included in Fig. 3. It can be observed that the inter-fade interval lies between 1 and 3 min above the 10 dB fade threshold. As observed in this measurement, the fade duration t_fd and the average inter-fade interval t_if of the link can be characterized as functions of the attenuation a during rain events (Eq. 1 and 2).

Diurnal variability: The results of extended analyses demonstrate that attenuation in an equatorial country such as Malaysia is subject not only to seasonal (Badron et al., 2009) but also to diurnal variations. Significant diurnal variations in attenuation have been observed. The diurnal variations of attenuation may have an important influence on certain applications.
Diurnal variability: The extended analyses demonstrate that attenuation in an equatorial country such as Malaysia is subject not only to seasonal (Badron et al., 2009) but also to diurnal variations. Significant diurnal variations of attenuation have been observed, and these may have an important influence on certain applications. For example, in climates where severe signal degradation due to rainfall is more likely at certain times of the day, the uplink power can be pre-programmed with additional margins to combat the uplink rain-induced attenuation. Comparisons were made between the findings from Malaysia and previously reported experimental results in the equatorial region, in order to investigate any possible similarity or distinct pattern in the diurnal characteristics of rain fade. Fiebig and Riva (2004) reported, based on their measurement campaign at Milan, Italy, that a greater probability of excess attenuation is expected during the hours from 12:00-18:00. Milan has a humid subtropical climate. In addition, Fig. 5 shows the hourly variation for the same measurements.

Probability of exceeding specific attenuation: The depths of selected fades over the twenty months are presented in detail to highlight the severity of rain-induced attenuation encountered in the tropical region. The investigation results are presented below as the probability of exceeding a specific attenuation threshold; these statistics were derived from the cumulative distribution of attenuation. They are more suitable for presenting diurnal effects than the cumulative distribution itself, because comparisons between specific attenuation levels over a specific time interval are easier to observe. Figure 6 presents the probabilities that specific attenuation levels of 5, 10, 15, 20, 25, 30, 35 and 40 dB are exceeded; the respective probabilities are used to reveal the monthly (or seasonal) and detailed diurnal variations. The statistics of selected rain-induced attenuation fades for twenty consecutive months are shown in Fig. 6, which presents the monthly variations. From Fig. 6, it can be observed that the probabilities of reaching the denoted attenuation levels are highest in October 1999, confirming the existence of a 'worst month' of attenuation. On the other hand, extremely low values are observed in September 1999, justifying it as the 'best month': this month experienced very limited rain-induced attenuation, with zero probability of the attenuation exceeding 20 dB. Figure 7 shows the histograms of the percentage of time for which specific attenuations of 5, 10, 15, 20, 25, 30, 35 and 40 dB are exceeded, for the diurnal characteristics. Figure 8 shows the corresponding probability of exceeding a specific attenuation level with respect to diurnal variations for the whole period of measurement. The resolution used in the analysis is a 4 h time interval, so that there are six measured values per day. The probability of exceeding attenuation thresholds of 10 dB and above is extremely low. In general, the probability of exceeding the denoted attenuation thresholds depends on the actual time of day: the probabilities are smallest during the first half of the day and largest in the second half. The probability of exceeding higher attenuation levels reveals even stronger diurnal variations, with large peaks in the late afternoon and early evening hours. Even the 5 dB attenuation level is strongly subject to diurnal variation.
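A minimal sketch of this diurnal analysis (4 h time-of-day bins, six values per day) is shown below; the data, thresholds and function name are synthetic placeholders:

```python
import numpy as np

def diurnal_exceedance(att_db, t_seconds, thresholds_db, bin_hours=4):
    """Probability of exceeding each attenuation threshold per time-of-day bin.

    att_db    : 1 Hz attenuation samples (dB)
    t_seconds : sample timestamps as seconds since local midnight
    Returns an array of shape (n_bins, n_thresholds) with P(A >= threshold).
    """
    att_db = np.asarray(att_db)
    hours = (np.asarray(t_seconds) // 3600) % 24
    bins = (hours // bin_hours).astype(int)        # 6 bins for 4 h resolution
    n_bins = 24 // bin_hours
    out = np.zeros((n_bins, len(thresholds_db)))
    for b in range(n_bins):
        sel = att_db[bins == b]
        for j, thr in enumerate(thresholds_db):
            out[b, j] = np.mean(sel >= thr) if sel.size else np.nan
    return out

# Synthetic example: one day of 1 Hz data with heavier fading after noon.
t = np.arange(86400)
rng = np.random.default_rng(1)
att = np.maximum(0.0, rng.normal(2.0 + 3.0 * (t > 43200), 4.0))
print(diurnal_exceedance(att, t, thresholds_db=[5, 10, 15, 20]))
```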
Rain-induced attenuations lower than 5 dB can therefore be assumed to be due to normal rainfall, as opposed to heavy rainfall, which typically takes place during thunderstorms. It should also be noted that normal rainfall may be seen as independent of the time of day, whereas heavy rainfall (thunderstorms) is most likely to occur in the evening hours (Ismail and Watson, 2000). Furthermore, normal rainfall usually lasts considerably longer than heavy rainfall; thus normal rainfall contributes much more to the probability of exceeding 5 dB than thunderstorm rainfall.

CONCLUSION

The fade characteristics of a V-band frequency link in a tropical country are presented. These introductory results are fundamental for developing the best fade mitigation techniques for future satellite-to-Earth links in the tropics. The diurnal variability of signal loss due to rain can be the basis both for the determination of link availabilities and for the development of novel fade countermeasures to reduce communication link outages. Fade mitigation techniques that incorporate modulation selection or forward error correction coding for upcoming satellite communications depend heavily on information on fade dynamics and long-term fade occurrence statistics.
Neutron capture cross sections from Surrogate measurements

The prospects for determining cross sections for compound-nuclear neutron-capture reactions from Surrogate measurements are investigated. Calculations as well as experimental results are presented that test the Weisskopf-Ewing approximation, which is employed in most analyses of Surrogate data. It is concluded that, in general, one has to go beyond this approximation in order to obtain (n,γ) cross sections of sufficient accuracy for most astrophysical and nuclear-energy applications.

Introduction

Cross sections for compound-nuclear (n,γ) reactions are needed for a variety of applications, including astrophysics and nuclear energy. Modeling astrophysical processes that produce the heavy isotopes beyond iron, simulating nuclear reactor operations, exploring alternative fuel cycles for energy generation, and studying transmutation options for radioactive waste requires cross sections for neutron-induced reactions on isotopes from different regions of the nuclear chart. As many short-lived species cannot be made into targets for direct cross-section measurements, one has to rely on calculations or explore indirect approaches.

The desired accuracies for the cross sections of interest, often in the range of 10% or less, can be much smaller than the theoretical uncertainties that exist when the model parameters are insufficiently constrained by data. For instance, standard evaluations for the (n,γ) reaction on the s-process branch point nucleus 95Zr (t1/2 = 64 d) vary from each other roughly by a factor of four (based on a comparison of the ENDF/B-VII, JEFF-3.1, JENDL-3.3, and ROSFOND evaluations provided through the database of the National Nuclear Data Center (NNDC) in November 2009). Exploiting regional systematics, whenever cross sections or relevant structural data (level densities, γ-ray strength functions, etc.) for nearby nuclei are known, can provide valuable constraints for the calculations.

In this contribution we explore the prospects for determining or constraining (n,γ) cross sections through Surrogate measurements. The Surrogate nuclear reaction technique combines experiment with theory to obtain cross sections for compound-nuclear (CN) reactions, a + A → B* → c + C, involving targets (A) that are difficult or impossible to obtain [1-3]. In the Hauser-Feshbach formalism, the cross section for this "desired" reaction takes the form:

$$\sigma_{\alpha\chi}(E_a) = \sum_{J,\pi} \sigma_{\alpha}^{CN}(E_{ex}, J, \pi)\, G_{\chi}^{CN}(E_{ex}, J, \pi), \qquad (1)$$

with α and χ denoting the relevant entrance and exit channels, a + A and c + C, respectively. The excitation energy E_ex of the compound nucleus B* is related to the center-of-mass energy E_a in the entrance channel via the energy needed for separating a from B: E_a = E_ex − S_a(B). In many cases the formation cross section σ_α^CN(E_ex, J, π) can be calculated to a reasonable accuracy by using optical potentials, while the theoretical decay probabilities G_χ^CN(E_ex, J, π) for the different decay channels χ are often quite uncertain. The objective of the Surrogate method is to determine or constrain these decay probabilities experimentally.

In the Surrogate approach, the compound nucleus B* is produced by means of an alternative ("Surrogate") direct reaction, d + D → b + B*, and the desired decay channel χ (B* → c + C) is observed in coincidence with the outgoing particle b (see Fig. 1).
The coincidence measurement provides

$$P_{\delta\chi}(E_{ex}) = \sum_{J,\pi} F_{\delta}^{CN}(E_{ex}, J, \pi)\, G_{\chi}^{CN}(E_{ex}, J, \pi), \qquad (2)$$

which is the probability that the CN was formed in the Surrogate reaction with spin-parity distribution F_δ^CN(E_ex, J, π) and subsequently decayed into the channel χ. The distribution F_δ^CN(E_ex, J, π), which may be very different from the CN spin-parity populations following the absorption of the projectile a in the desired reaction, has to be determined theoretically, so that the branching ratios G_χ^CN(E_ex, J, π) can be extracted from the measurements. In practice, the decay of the CN is modeled and the G_χ^CN(E_ex, J, π) are obtained by adjusting parameters in the model to reproduce the measured probabilities P_δχ(E_ex) [4,5]. Subsequently, the sought-after cross section can be obtained by combining the calculated cross sections σ_α^CN(E_ex, J, π) for the formation of B* (from a + A) with the extracted decay probabilities G_χ^CN(E_ex, J, π) for this state. In Section 2 we will consider an application of the method to the 155Gd(n,γ) reaction. The relevant compound nucleus is 156Gd, which, in the case discussed here, was produced via inelastic proton scattering on the stable 156Gd ground state. The γ exit channel can be identified by measuring γ-rays characteristic of electromagnetic transitions between levels of 156Gd (such as transitions between members of the rotational band built on the ground state).

Fig. 1. Schematic representation of the "desired" (top) and "Surrogate" (bottom) reaction mechanisms. The basic idea of the Surrogate approach is to replace the first step of the desired reaction, a + A, by an alternative (Surrogate) reaction, d + D → b + B*, that populates the same compound nucleus. The subsequent decay of the compound nucleus into the relevant channel, c + C, can then be measured and used to extract the desired cross section.

The Surrogate method was originally developed in the 1970s [1,2] and has primarily been used to obtain (n,f) cross sections. Almost all analyses of Surrogate data to date have made use of the Weisskopf-Ewing approximation (or the related Surrogate Ratio approach [3]). In the Weisskopf-Ewing limit of the Hauser-Feshbach formalism, the branching ratios become independent of spin and parity, G_χ(E, J, π) → G_χ(E), which greatly simplifies the description of both the desired reaction,

$$\sigma_{\alpha\chi}^{WE}(E_a) = \sigma_{\alpha}^{CN}(E_a)\, G_{\chi}(E_{ex}),$$

and the Surrogate measurement,

$$P_{\delta\chi}(E_{ex}) = G_{\chi}(E_{ex}).$$

This approximation has been found to work reasonably well for determining (n,f) cross sections, as long as one considers neutron energies above about 1 or 2 MeV. The extracted cross sections are typically in reasonable agreement with direct measurements (≈10-15%), when the latter are available, or with results from other Surrogate experiments. The approach has provided valuable new data for cases where direct measurements were either not possible or covered only a limited energy range. Examples include (n,f) cross sections for 237U [6] and 233Pa [7].
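A minimal sketch of such a Weisskopf-Ewing extraction is given below. The function name and all numerical values are illustrative, and the compound-nucleus formation cross sections are assumed to come from an optical-model calculation as described above:

```python
import numpy as np

def weisskopf_ewing_cross_section(sigma_cn, P_coinc):
    """Weisskopf-Ewing extraction sketch: sigma(E) = sigma_CN(E) * P_dchi(E).

    sigma_cn : calculated compound-nucleus formation cross sections (barns),
               e.g. from an optical-model code (assumed given here)
    P_coinc  : measured Surrogate coincidence probabilities P_dchi(E)
    Under the Weisskopf-Ewing approximation G_chi(E) ~ P_dchi(E), so the
    desired cross section is simply their product at each energy.
    """
    return np.asarray(sigma_cn) * np.asarray(P_coinc)

# Illustrative numbers only (not data from the experiments discussed here).
E_n = np.array([0.5, 1.0, 2.0, 3.0])             # neutron energy (MeV)
sigma_cn = np.array([3.2, 3.0, 2.7, 2.5])        # formation cross section (barns)
P_coinc = np.array([0.30, 0.18, 0.07, 0.04])     # gamma-coincidence probability
print(weisskopf_ewing_cross_section(sigma_cn, P_coinc))
```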
Obtaining (n,γ) cross sections from Surrogate measurements

For low energies (E_n < 1-2 MeV) the conditions for the Weisskopf-Ewing approximation are not expected to be well satisfied [3], and it becomes necessary to account for the "spin-parity mismatch," that is, differences between the spin-parity distributions of the compound nuclei produced in the desired and Surrogate reactions, respectively. Introducing such corrections in the analysis of Surrogate fission data has been shown to improve the agreement with direct measurements [4,5]. Since the low-energy regime is important for many (n,γ) cross sections needed for astrophysical and nuclear-energy applications, one would expect that the Weisskopf-Ewing approximation does not provide cross sections of sufficient accuracy.

Gamma decay probabilities and the Weisskopf-Ewing approximation

For the Zirconium region, the validity of the Weisskopf-Ewing approximation was studied theoretically by Forssén et al. [8]. The dependence of the γ-decay probabilities G_γ(E, J, π) on spin and parity was investigated, and (n,γ) cross sections were extracted from a Weisskopf-Ewing analysis of simulated Surrogate data. To obtain the branching ratios, a Hauser-Feshbach calculation for 91Zr(n,γ) was carried out, with parameters adjusted to reproduce known structural data (s-wave resonance spacing, average radiative widths), as well as directly measured cross sections. The fitting procedure defined a model for the decay of the CN 92Zr and made it possible to extract theoretical branching ratios for the γ channel. In Fig. 2 we show the resulting γ-decay probabilities, G_γ(E, J, π = −), for angular momenta J = 0, 3, 6, 9, 12 and excitation energies E_ex = 8.64-12.5 MeV, which correspond to neutron energies E_n = E_ex − S_n in the range of 0-4 MeV (in Ref. [8], J values 0-4 and both positive and negative parities were considered). Fig. 2 illustrates one of the main insights gained by the study of Ref. [8]: the branching ratios G_γ(E, J, π) depend very sensitively on the angular momentum and parity of the decaying nucleus. In the energy regime considered, the decay of the 92Zr CN proceeds primarily by γ or neutron emission, with negligible contributions from other channels. Due to the low level density in the neighboring nucleus 91Zr, very few neutron decay channels are available; the opening of each new channel corresponds to a discontinuity in one or more γ-branching ratios. This circumstance, and the fact that the neutron decay is dominated by low partial waves (mainly s and p wave), leads to γ-decay probabilities that are very sensitive to the Jπ population of the decaying compound state. It is clear that the Weisskopf-Ewing approximation is not valid in this region.

The situation is expected to improve as one moves away from closed shells. For example, while 91Zr has only one level below 1 MeV (the ground state), the well-deformed rare-earth nucleus 155Gd has over 60, and the actinide nucleus 235U has approximately 90.
Fig. 2. γ-decay probabilities G_γ(E, J, π = −) for 92Zr, 156Gd, and 236U. Shown is the probability that the compound nucleus, when produced with a specific Jπ combination, decays via the γ channel. The excitation energies shown correspond to incident-neutron energies of 0-4 MeV. Only negative-parity decay probabilities are given. Note that the scale for the y axis of panel (a) differs from that for panels (b) and (c).

Consequently, the decay probabilities for 156Gd and 236U depend more smoothly on energy and exhibit significantly less sensitivity to the Jπ values of the compound nucleus. This is shown in panels (b) and (c) of Fig. 2, where the same branching ratios are plotted for the γ-decays of 156Gd and 236U, respectively. Overall, the curves for the heavier nuclei show a much smoother behavior than those for the Zirconium case. Moreover, for large enough energies, the shape of the probabilities becomes almost independent of spin (for the range of spins considered, J = 0-12); for 156Gd this occurs at E_ex ≈ 10.0-10.5 MeV, i.e. E_n ≈ 1.5-2.0 MeV, while for 236U the G_γ(E, J, π = −) have similar shapes for excitation energies above about 7.0-7.5 MeV, i.e. neutron energies higher than E_n ≈ 0.5-1.0 MeV. For 156Gd, the curves for the highest J value, J = 12, are larger in magnitude than those for J = 0, 3 by a factor of about 2.5-3 in this higher-energy regime; for 236U, the difference is somewhat smaller, around 2.0-2.5. The results shown in Fig. 2 indicate that the Weisskopf-Ewing approximation is less likely to be valid in lower-mass nuclei, and particularly near closed shells, where the level densities are low.

Cross sections from Weisskopf-Ewing analyses

Whether it is reasonable to employ the Weisskopf-Ewing approximation for a particular reaction depends not only on the energy regime considered, but also on the range of angular momenta populated in both the desired and Surrogate reactions. It is possible that one reaction, for example the desired reaction, populates a narrow range of spins, while the other involves a wider range of angular-momentum values.

The effects of the spin-parity mismatch can be further explored by using schematic spin-parity distributions F_δ^CN(E, J, π) to simulate Surrogate coincidence data via Eq. 2. The calculated P^sim_δγ(E) = Σ_{J,π} F_δ^CN(E, J, π) G_γ^CN(E, J, π) can be used in a Weisskopf-Ewing 'analysis' to yield the desired (n,γ) cross section, σ^{WE,sim}_{n,γ}(E) = σ_{n+target}(E) P^sim_δγ(E), where σ_{n+target}(E) denotes the compound-nucleus formation cross section. The range of cross sections σ^{WE,sim}_{n,γ}(E) obtained by varying the simulated spin distributions within reasonable limits provides a measure of the uncertainty in the extracted cross section due to the use of the Weisskopf-Ewing approximation. For the Zirconium region such a sensitivity analysis was carried out by Forssén et al. [8]. An order-of-magnitude difference between the known reference cross section for 91Zr(n,γ) and that extracted from the simulation was found, indicating that using the Weisskopf-Ewing approximation for this region of the nuclear chart is not appropriate.

For the rare-earth and actinide cases, the discrepancies are expected to be smaller. Recent studies [9] show that this is indeed the case. Results for the 155Gd(n,γ) example are shown in Fig. 3. Plotted are the reference cross section (solid curve), obtained by fitting a Hauser-Feshbach calculation to direct measurements (filled circles with x and y error bars), and three cross sections extracted from simulated Surrogate data (dotted, dash-dotted, and dashed curves). The associated spin distributions are shown in Fig. 4.
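As a rough illustration of this sensitivity study, the sketch below combines schematic spin distributions F(J) with toy branching ratios G_γ(E, J) via Eq. 2. All numerical values are invented for illustration; they are not the calculated 156Gd branching ratios:

```python
import numpy as np

# Schematic sensitivity study (Eq. 2): combine assumed spin populations
# F(J) with branching ratios G_gamma(E, J) to simulate Surrogate
# coincidence probabilities P_dg(E). All numbers are illustrative only.
J = np.arange(0, 13)                       # spins 0..12 (one parity)

def spin_dist(J, mean):
    # Gaussian-shaped schematic spin-parity distribution, normalized to 1.
    w = np.exp(-0.5 * ((J - mean) / 2.0) ** 2)
    return w / w.sum()

# Toy branching ratios that fall with energy and grow with J, mimicking
# the qualitative behaviour described above for 156Gd.
E = np.linspace(8.5, 12.5, 5)              # excitation energy (MeV)
G = np.exp(-0.8 * (E[:, None] - 8.5)) * (0.2 + 0.05 * J[None, :])

for p, mean in ((1, 2.0), (2, 5.0), (3, 8.0)):   # three schematic distributions
    F = spin_dist(J, mean)
    P_sim = G @ F                           # P_dg(E) = sum_J F(J) G(E, J)
    print(f"p={p}:", np.round(P_sim, 3))
```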
Also shown in Fig. 3 are results from an actual Surrogate experiment, carried out by the STARS/LBACE collaboration at the 88-inch cyclotron at Lawrence Berkeley Laboratory: the diamond-shaped symbols with y error bars indicate the cross section obtained from a Weisskopf-Ewing analysis of a 156Gd(p,p′) measurement with 22-MeV protons. Details of the experiment and its analysis can be found in Refs. [10,11].

Fig. 3. Cross sections for 155Gd(n,γ). The cross section obtained from a Weisskopf-Ewing analysis of a Surrogate 156Gd(p,p′) experiment [11] (filled diamonds with y error bars) is compared to direct measurements (filled circles with x and y error bars) and a Hauser-Feshbach calculation (solid curve) for which the parameters were adjusted to reproduce the available data. Cross sections obtained from a Weisskopf-Ewing analysis of simulated Surrogate measurements with spin-parity distributions p = 1, 2, 3 (see Fig. 4) are shown as well.

For neutron energies below about 1.5 MeV, the cross sections extracted from the Weisskopf-Ewing analysis of the simulated Surrogate data are consistently too high, by up to a factor of four. The discrepancies between the directly measured cross sections and those extracted from the Weisskopf-Ewing analysis of the Surrogate data can be understood with the help of the γ-decay probabilities G_γ^CN(E, J, π) shown in Fig. 2b: if the Surrogate reaction populates the relevant compound nucleus, here 156Gd, with a spin-parity distribution that contains larger angular-momentum values than the population relevant to the neutron-induced reaction, then the measured (or calculated) decay probability P_δγ(E) of Eq. 2 contains larger contributions from those G_γ^CN(E_ex, J, π) associated with large J values than the cross-section expression for the desired (n,γ) reaction, Eq. 1, does. Consequently, the cross section extracted by using the Weisskopf-Ewing assumption and approximating G_γ^CN(E) ≈ P_δγ(E) is too large. The largest deviations from the expected results occur for distribution p = 3, the distribution that has the smallest overlap with the Jπ population of the compound nucleus in the neutron-induced reaction. The spin-parity distributions relevant to neutron-induced reactions on 155Gd are not shown here; calculations illustrate [9] that the spin distributions for the desired reaction shift from angular-momentum values J = 1-3 at 0.1 MeV to somewhat higher values with increasing energy, but for the energy range considered there is little to no contribution from angular momenta above 5-6. Distributions that peak at low spins, such as distribution p = 1, yield much closer agreement with the reference cross section (for E_n > 0.4 MeV, the 155Gd(n,γ) results are within ∼25% of the expected result). If an experiment can be identified and carried out such that the reaction mechanism and experimental conditions (projectile energy, angle of detection of the outgoing direct-reaction particle, etc.)
create Jπ distributions similar to the one produced in the desired reaction, one can expect the cross sections extracted from a Weisskopf-Ewing analysis of the data to be in reasonable agreement with the true (n,γ) cross section. Presently, the compound-nuclear spin-parity distributions are not known for Surrogate reactions. Efforts are underway to develop methods for calculating these distributions and to test the theoretical predictions. This will make it possible to select a reaction mechanism and conditions that approximately reproduce the Jπ distribution of the desired reaction and/or to correct for the mismatch. Fig. 3 also shows the results obtained from a Weisskopf-Ewing analysis of the Surrogate data of Ref. [11]. The extracted cross section falls, for the most part, between the calculated curves: it is somewhat larger than the curve corresponding to distribution p = 2, but smaller than the curve for p = 3. This indicates that the (p,p′) reaction on 156Gd produced a spin-parity distribution which contained J values above 5-6. The cross section extracted from the Surrogate measurement is a factor of 2-3 larger than the reference cross section. Clearly, it is important to correct for the spin-parity mismatch if one wants to improve on this result. Efforts to do so are underway.

The cross sections obtained from the simulations p = 1, 2, 3 are seen to converge for energies larger than about 1.5 MeV, i.e. the dependence on the spin-parity distribution decreases and the Weisskopf-Ewing assumption becomes a better approximation. In this energy region, the experimental results seem to be in rough agreement with the reference result (or possibly slightly too high), but the statistical uncertainties from the measurement become too large to draw more detailed conclusions.

Summary and conclusions

Motivated by the renewed interest in the Surrogate nuclear reactions approach, we have examined the prospects for determining (n,γ) cross sections for deformed rare-earth and actinide nuclei from Surrogate measurements. In particular, we investigated the validity of the Weisskopf-Ewing approximation, which is commonly employed when extracting (n,f) cross sections from Surrogate experiments. The Weisskopf-Ewing approach, which neglects the fact that the spin-parity population of the compound nucleus produced in the Surrogate reaction is different from that of the compound nucleus occurring in the desired reaction, was tested with calculations that simulated observables for typical Surrogate experiments. The approach used here is similar to the method employed in our earlier study of (n,f) reactions [3] and complements and extends the investigation of (n,γ) reactions for near-spherical nuclei in the mass 90-100 region [8]. The validity of the Surrogate Ratio Approach, which makes use of the Weisskopf-Ewing approximation, can be investigated analogously. This issue is studied elsewhere [9,11].
Overall, we found that the probability for a compound nucleus to decay via γ emission depends sensitively on the spin-parity population of the nucleus prior to decay. The dependence of the γ-branching ratios on the Jπ distribution is greater than that found previously for fission. Calculations for representative Zirconium, Gadolinium, and Uranium nuclei showed a strong dependence of the γ-branching ratios on the spins populated in the compound nucleus, in particular for the lightest system considered here, the 92Zr nucleus, which has a closed proton subshell (Z = 40) and a nearly closed neutron shell (N = 52, close to the N = 50 shell closure). A comparison with the results for Gadolinium and Uranium confirms the notion that the higher level densities present in the deformed rare-earth and actinide regions do indeed reduce the sensitivity of the γ-decay probabilities to compound-nuclear spin-parity distributions and nuclear-structure effects.

For Gadolinium, we also demonstrated that the (n,γ) cross sections obtained from a Weisskopf-Ewing analysis of Surrogate data can differ significantly from the expected 'true' cross section. The uncertainty seen in the cross sections extracted from simulated Surrogate measurements is clearly greater than that found previously for (n,f) cross sections. It illustrates the limitations of this approximation when considering applications of the method to mass regions and/or types of reactions for which the method has not been tested yet. We complemented our theoretical sensitivity studies with results from a recent Surrogate experiment. A Weisskopf-Ewing analysis of the 156Gd(p,p′γ) coincidence data measured by Scielzo et al. [10,11] yielded a 155Gd(n,γ) cross section that differs by up to a factor of three from the directly measured cross section. These results are in agreement with our theoretical predictions and further underscore the need to account for the spin-parity mismatch between the Surrogate and desired reactions.

Measurements that test the validity of the Surrogate method are important and valuable. Applications of the method to (n,f) reactions have been tested in numerous experiments over the years. For (n,γ) reactions, only a few experiments exist [11-16]. Still fewer have been designed to properly test the method. In order to provide useful information on the validity and limitations of the method, Surrogate benchmark experiments need to yield cross-section results that can be compared to direct measurements: the energy ranges covered by the direct and Surrogate measurements must have a sizable overlap, and the error bars associated with the two data sets have to be small enough to allow for a meaningful distinction between agreement and disagreement. Further, sufficient nuclear structure and reaction data should be available for the region to allow for calculations supporting the interpretation of the Surrogate measurement. It is also important to test the Weisskopf-Ewing approximation independently from the Ratio approach, as in the latter, effects might cancel that have to be understood for a proper application of the Surrogate approach across a range of nuclei.
To move beyond the approximate analysis methods currently employed, a comprehensive theoretical treatment of the Surrogate approach is required. This involves a description of direct reactions that populate highly excited, unbound states, and of the damping of these doorway states into more complicated configurations that lead to a compound nucleus. The possibility that the intermediate system produced in a given Surrogate reaction does not lead to the compound nucleus of interest, but decays via nonequilibrium particle emission prior to reaching the compound stage, has to be considered. The probability for this process needs to be calculated, along with its dependence on, and influence on, the angular momentum, parity, and energy of the decaying nuclear system [17]. Developing a reliable theoretical description of the formation of a compound nucleus following a direct reaction will be crucially important for improving the accuracy and reliability of the Surrogate method and for extending its applicability beyond (n,f) reactions on actinide targets to other reaction types and mass regions.

While the strong spin-parity dependence of the observables used to tag the exit channel makes extracting (n,γ) cross sections from Surrogate measurements very challenging, it also provides valuable information. In particular, simultaneously measuring the yields of several γ-ray transitions of a decaying compound nucleus can provide signatures for the spin-parity distribution of the compound nucleus prior to decay. Relative γ-ray yields for the decay of even-even gadolinium nuclei have recently been measured [10,11], and methods are being developed to use this information in order to improve the (n,γ) cross sections determined from Surrogate experiments.

Fig. 4. Spin-parity distributions of the compound nucleus 156Gd. Three schematic spin-parity distributions, p = 1, 2, and 3, were selected to simulate the compound nucleus prior to decay via γ and neutron emission.
Research on Prediction of Movable Fluid Percentage in Unconventional Reservoir Based on Deep Learning

In order to improve the measurement speed and prediction accuracy of unconventional reservoir parameters, a deep neural network (DNN) is used to predict the movable fluid percentage of unconventional reservoirs. The Adam optimizer is used in the DNN model to ensure the stability and accuracy of the model during gradient descent, and the prediction effect is compared with the back propagation neural network (BPNN), K-nearest neighbor (KNN), and support vector regression (SVR) models. During network training, L2 regularization is used to avoid over-fitting and improve the generalization ability of the model. Taking the nuclear magnetic resonance (NMR) T2 spectrum data of laboratory unconventional cores as input features, the influence of model hyperparameters on the prediction accuracy of reservoir movable fluid is also experimentally analyzed. Experimental results show that, compared with BPNN, KNN, and SVR, the deep neural network model has a better prediction effect on the movable fluid percentage of unconventional reservoirs; when the model depth is five layers, the prediction accuracy of the movable fluid percentage reaches its highest value, and the predictions of the DNN model are in close agreement with the laboratory-measured values. Therefore, the movable fluid percentage prediction model of unconventional oil reservoirs based on the deep neural network can provide guidance for the intelligent development of laboratory reservoir parameter measurement.

Introduction

The fluids in unconventional oil reservoirs can be divided into two categories according to their existence states: one is bound fluid (immovable fluid), and the other is free fluid (movable fluid) [1]. The bound fluid exists in the extremely tiny pores and on the walls of the larger pores. The fluid in the smaller pores is difficult to move due to the large capillary force, whereas the fluid in the middle of the larger pores is subject to a smaller capillary force and can flow under a certain driving pressure, so it is called movable fluid. The presence of bound fluid often leads to a reduction of the seepage space in the reservoir pores and an increase in seepage resistance. The more movable fluid in the reservoir, the stronger the seepage capacity of the corresponding reservoir, and the more oil and gas resources can be recovered [2]. In conventional reservoir evaluation, researchers generally use porosity and permeability as the characterization parameters of reservoir physical properties. However, the experimental evaluation of movable fluids and unconventional reservoir core experiments show that there is a good positive correlation between the oil displacement efficiency and the percentage of movable fluid [2], which also proves that the movable fluid percentage can reflect the development potential of unconventional reservoirs better than permeability. At present, the most reliable measurement of the movable fluid percentage of a reservoir is core nuclear magnetic resonance technology [3]. The current disadvantages of this technology are the long experimental period and the large amount of manpower required, which make it difficult to synchronize the evaluation of the reservoir with the development and deployment of the oil field.
In recent years, with the development of science and technology, artificial intelligence methods have been increasingly applied in the petroleum industry: Yushu and Qidi used the XGBoost algorithm to identify complex carbonate rock lithology with an accuracy rate of 88.18% [4,5]. Mohamed used machine learning methods to study lithology classification and concluded that the classification accuracy of supervised learning algorithms was better than that of unsupervised algorithms [6]. Liuqing predicted the porosity of sandstone reservoirs based on a deep neural network and logging data; the correlation between the predicted value of the model and the actual porosity was as high as 0.9725 [7]. Yuyang combined the NMR transverse relaxation time spectrum and mercury intrusion data to intelligently predict the permeability of sandstone reservoirs through a BP neural network and achieved good prediction accuracy [8]. Dongxiao realized the automatic generation of logging curves based on a recurrent neural network [9]. Ye predicted unconventional reservoir saturation based on NMR logging data and machine learning methods [10]. Deep learning was first proposed by Hinton in 2006; the ability to achieve complex nonlinear fitting through artificial neural networks with multiple hidden layers has greatly improved the prediction and classification accuracy of artificial intelligence models [11]. NMR logging data are affected by many factors in the reservoir, and the effective information contained in NMR logging often has large deviations. In order to improve the measuring speed and accuracy of the movable fluid percentage of unconventional reservoirs, this article uses the deep learning method and the unconventional reservoir core NMR T2 spectrum data measured in the laboratory to predict the movable fluid percentage of unconventional oil reservoirs.

Correlation Analysis between NMR T2 Spectrum and Percentage of Movable Fluid

By comparing and studying the NMR T2 spectra of oil-saturated cores with different degrees of tightness in Figure 1, it was found that: (1) with the increase of reservoir tightness, the left peak of the NMR T2 spectrum of oil-saturated cores gradually rose and shifted to the left, while the right peak gradually decreased or even disappeared; (2) the proportion of movable fluid in the reservoir gradually decreased, and the bound fluid gradually increased. These comparative studies show a strong correlation between the shape characteristics of the oil-saturated core NMR T2 spectrum of unconventional oil reservoirs and the movable fluid percentage of the reservoirs. Therefore, this article predicts the percentage of movable fluid in unconventional oil reservoirs based on the shape characteristics of the oil-saturated core NMR T2 spectrum.

Data Source and Preprocessing

In this paper, a total of 580 unconventional reservoir core NMR T2 spectra and the corresponding movable fluid percentages were collected, all of which were obtained through laboratory measurement. The movable fluid percentage of the reservoir was predicted based on the shape characteristics of the core NMR T2 spectrum. By discretizing the NMR T2 spectrum of each core, the T2 relaxation time (horizontal axis) of each discrete point was fixed; the T2 distribution values of all discrete points could then represent the shape characteristics of the core NMR T2 spectrum [11]. The result of the discretization processing of different cores' NMR T2 spectra is shown in Figure 2.
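The discretization step can be sketched as follows; the logarithmic grid limits (0.01-1000 ms) and function names are assumptions for illustration, not the paper's exact grid:

```python
import numpy as np

def discretize_t2(t2_times_ms, amplitudes, n_points=55, t2_max_ms=1000.0):
    """Resample a measured NMR T2 spectrum onto a fixed logarithmic grid
    so every core is described by the same n_points feature vector
    (a sketch of the discretization described above; grid limits assumed)."""
    grid = np.logspace(np.log10(0.01), np.log10(t2_max_ms), n_points)
    return np.interp(grid, t2_times_ms, amplitudes, left=0.0, right=0.0)

# Example: a two-peak synthetic spectrum reduced to 55 features.
t2 = np.logspace(-2, 4, 200)
amp = np.exp(-0.5 * ((np.log10(t2) - 0.0) / 0.3) ** 2) \
    + 0.4 * np.exp(-0.5 * ((np.log10(t2) - 2.0) / 0.3) ** 2)
features = discretize_t2(t2, amp)
print(features.shape)   # (55,)
```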
In the unconventional reservoir NMR T2 spectrum, the T2 distribution values of discrete points with a relaxation time greater than 1000 ms are basically 0. Therefore, for the NMR T2 spectrum of each core, collecting the T2 distribution values of the first 55 discrete points could completely extract the shape characteristics of the spectrum. Before model training, the data needed to be standardized, as follows:

$$a_i = \frac{x_i - \mu}{\sigma}$$

where a_i and x_i are the parameter value after standardization and the original parameter value, μ is the average value of the input parameters, and σ is the standard deviation of the input parameters.

The original data set was divided into a training set and a test set. The training set was mainly used for model learning, and the test set was used for evaluating the effect of model learning. The training set contained the NMR data of 500 cores, and the test set contained the NMR data of 80 cores.
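A minimal sketch of the standardization and of the 500/80 split, with simulated placeholder data, might look like this:

```python
import numpy as np

def standardize(X):
    """Column-wise z-score standardization: a_i = (x_i - mu) / sigma."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0) + 1e-12   # guard against constant features
    return (X - mu) / sigma, mu, sigma

# Split mirroring the paper: 500 training and 80 test cores (data simulated).
rng = np.random.default_rng(42)
X = rng.random((580, 55))            # 55 T2-spectrum features per core
y = rng.random(580) * 100            # movable fluid percentage labels
idx = rng.permutation(580)
train, test = idx[:500], idx[500:]
X_train, mu, sigma = standardize(X[train])
X_test = (X[test] - mu) / sigma      # reuse training statistics on the test set
```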
Feedforward Algorithm of Deep Neural Networks

In the deep neural network, the final output value of the model is obtained by complex nonlinear operations on the input vector, weight vectors, and bias vectors [12]. Assuming that the deep neural network has a total of six layers of neurons, take the i-th neuron in the L-th layer as an example:

$$z_i^{(L)} = \sum_j w_{ij}^{(L)} a_j^{(L-1)} + b_i^{(L)}, \qquad a_i^{(L)} = f_L\big(z_i^{(L)}\big)$$

where z_i^(L) represents the input value of the i-th neuron in the L-th layer, w_ij^(L) are the connection weights, a_i^(L) is the output value of the i-th neuron in the L-th layer, and f_L(·) represents the activation function of that layer; when L = 6, a_i^(L) represents the output of the network.
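A small NumPy sketch of this feedforward pass (ReLU hidden layers, linear output for regression; the 55-8-1 shapes are illustrative only):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(x, weights, biases):
    """Feedforward pass: z^L = W^L a^(L-1) + b^L, a^L = f_L(z^L).
    ReLU on hidden layers, identity on the output (regression).
    Returns the activations of every layer (needed later for backprop)."""
    activations = [x]
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = W @ activations[-1] + b
        a = z if i == len(weights) - 1 else relu(z)
        activations.append(a)
    return activations

# Tiny 55-8-1 network with random parameters, just to show the shapes.
rng = np.random.default_rng(0)
Ws = [rng.normal(0, 0.1, (8, 55)), rng.normal(0, 0.1, (1, 8))]
bs = [np.zeros(8), np.zeros(1)]
print(forward(rng.random(55), Ws, bs)[-1])   # scalar prediction
```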
Back Propagation Algorithm of Deep Neural Networks

The backpropagation algorithm is used to calculate the partial derivatives of the loss function ζ(y, ŷ) with respect to the model parameters, which are then used to update the parameters. Because this calculation involves the partial differentiation of a vector with respect to a matrix, the direct calculation is cumbersome and complicated; the backpropagation algorithm, obtained from the chain rule, greatly simplifies the process [13]. The meaning of the backpropagation algorithm is that the error term of the l-th layer is obtained by multiplying the weights and error terms of the neurons of the (l+1)-th layer by the gradient of the activation function of the neurons of the l-th layer [14]. Equation (3) gives the sensitivity error term of the l-th layer; after calculating it, the partial derivatives of the loss function with respect to the weights and biases of the l-th layer can be obtained to update the parameters. Equations (3)-(5) describe this process:

$$\delta^{(l)} = \big((W^{(l+1)})^T \delta^{(l+1)}\big) \odot f_l'\big(z^{(l)}\big) \qquad (3)$$
$$\frac{\partial \zeta}{\partial W^{(l)}} = \delta^{(l)} \big(a^{(l-1)}\big)^T \qquad (4)$$
$$\frac{\partial \zeta}{\partial b^{(l)}} = \delta^{(l)} \qquad (5)$$

where l represents the l-th neuron layer, δ^(l) is the sensitivity error term of the l-th neuron layer, a^(l−1) is the output value of the (l−1)-th neuron layer, f_l'(·) is the derivative of the activation function of the l-th neuron layer, ⊙ represents the element-wise (Hadamard) product, W^(l) represents the weights of the l-th neuron layer, and b^(l) represents the biases of the l-th neuron layer.

Adam Optimization Algorithm

In the training process of the deep neural network, Adam was selected as the optimizer for model parameter updates. Adam is a fusion of the momentum method [14] and the RMSprop algorithm [15]: it not only uses momentum as the direction of the parameter update, but also adjusts the learning rate adaptively to ensure the accuracy and stability of the gradient descent during training [16]. The Adam optimizer calculates the exponentially weighted moving average of the squared gradient g_t² on the one hand (similar to the RMSprop algorithm) and the exponentially weighted moving average of the gradient g_t on the other (similar to the momentum method):

$$M_t = \beta_1 M_{t-1} + (1-\beta_1)\, g_t, \qquad G_t = \beta_2 G_{t-1} + (1-\beta_2)\, g_t^2$$

where β1 and β2 are the attenuation rates of the two moving averages (usually β1 = 0.9, β2 = 0.99), M_t is the exponentially weighted average of the gradient, and G_t is that of the squared gradient. When M0 = 0 and G0 = 0, the values of M_t and G_t will be smaller than the true mean and variance at the beginning of the iteration, especially when β1 and β2 are both close to 1, so bias corrections are applied:

$$\hat{M}_t = \frac{M_t}{1-\beta_1^t}, \qquad \hat{G}_t = \frac{G_t}{1-\beta_2^t}$$

where M̂_t and Ĝ_t are the bias-corrected values and t is the time step. Finally, the corrected gradient values are used to update the parameters of the model:

$$\theta_t = \theta_{t-1} - \alpha\, \frac{\hat{M}_t}{\sqrt{\hat{G}_t} + \epsilon}$$

where α is the learning rate, ε is a small constant for numerical stability (ε = 1 × 10⁻⁸), and θ represents the parameters of the model.
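The update rule above can be sketched as a minimal NumPy optimizer; β2 = 0.99 follows the value quoted in this paper (0.999 is the more common default):

```python
import numpy as np

class Adam:
    """Minimal Adam optimizer sketch following the equations above:
    M_t, G_t are the biased first/second moment estimates, corrected by
    1/(1 - beta^t) before the parameter update."""
    def __init__(self, lr=0.01, beta1=0.9, beta2=0.99, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, beta1, beta2, eps
        self.M, self.G, self.t = None, None, 0

    def step(self, theta, grad):
        if self.M is None:
            self.M, self.G = np.zeros_like(theta), np.zeros_like(theta)
        self.t += 1
        self.M = self.b1 * self.M + (1 - self.b1) * grad
        self.G = self.b2 * self.G + (1 - self.b2) * grad**2
        M_hat = self.M / (1 - self.b1**self.t)     # bias correction
        G_hat = self.G / (1 - self.b2**self.t)
        return theta - self.lr * M_hat / (np.sqrt(G_hat) + self.eps)

# One-dimensional demo: minimize (theta - 3)^2.
opt, theta = Adam(lr=0.1), np.array([0.0])
for _ in range(200):
    theta = opt.step(theta, 2 * (theta - 3.0))
print(theta)   # close to 3
```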
BP Neural Network Model

The back propagation neural network (BPNN) is a forward-propagation neural network model trained with the backward error propagation algorithm. Through training, it learns the inherent feature relationship between the input vector and the output vector, and continuously updates the model weights through the gradient descent algorithm to achieve a nonlinear mapping between input features and output values [14]. The BPNN model in this experiment consisted of an input layer, a hidden layer, and an output layer. The input layer had 55 neuron nodes, the hidden layer had 200 neurons, the output layer had 1 node, and the learning rate was set to 0.005. The ReLU (Rectified Linear Unit) function was used as the activation function of the hidden layer, and the maximum number of training iterations was 1000.

K-Nearest Neighbor Regression Model

The K-nearest neighbor (KNN) model is a simple supervised learning algorithm. Its input is the feature vector of an instance, corresponding to a point in the feature space, and its output is the predicted value of the instance [17]. When the K-nearest neighbor model is used as a regression model, a training data set is given in which the label value of each data point has been calibrated; the KNN model then outputs the average of the label values of the K nearest training instances of a new instance. In this experiment, the K value of the KNN model was set to 10, and the distance between instances was the Euclidean distance.

Support Vector Regression Model

The support vector regression (SVR) model is one of the most widely used models in machine learning. It was proposed by the former Soviet Union scientists Vladimir Vapnik and Alexey Chervonenkis in 1963 and 1995, respectively [18]. For a sample (x, y), general regression models usually calculate the loss directly from the difference between the model's predicted value f(x) and the true label value y; only when f(x) is exactly equal to y is the loss 0. The big difference between the SVR model and general regression models is that SVR allows a maximum deviation of ε between the predicted value f(x) and y: only when the deviation between f(x) and y is greater than ε does SVR count an error. This is equivalent to taking f(x) as the center and establishing an interval band with a width of 2ε; when the predicted value of a sample falls within the band, the prediction is considered accurate [19]. SVR tries to find the optimal hyperplane that minimizes the deviation from all sample points to the hyperplane, which is equivalent to finding the maximum interval. In this experiment, the support vector regression model used the radial basis function as the kernel function, the regularization constant C was set to 5, and gamma was set to 0.02.

Model Evaluation Method

This article used the root mean square error (RMSE) and the R² coefficient to measure the prediction accuracy of the models. The R² coefficient measures the correlation between true values and predicted values:

$$R^2 = 1 - \frac{\sum_{i}\big(y_i - f(x_i)\big)^2}{\sum_{i}\big(y_i - \bar{y}\big)^2}$$

where f(x_i) represents the predicted movable fluid percentage of the i-th sample, y_i represents the true movable fluid percentage of the i-th sample, and ȳ represents the average of the true movable fluid percentages of all samples. RMSE reflects the error between the true and predicted movable fluid percentages of the reservoir:

$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\big(y_i - f(x_i)\big)^2}$$

where N is the total number of samples.
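These two metrics translate directly into NumPy:

```python
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)        # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return float(1.0 - ss_res / ss_tot)

print(rmse([10, 20, 30], [12, 18, 33]), r2([10, 20, 30], [12, 18, 33]))
```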
Optimization of Deep Neural Network's Hyperparameters

This experiment used TensorFlow, developed by Google, as the implementation platform. TensorFlow supports automatic differentiation, so there is no need to manually write derivative code, and the neural network's structure can be freely designed [20]. Hornik found that any function can be approximated by a neural network with more than three layers [21]. In the training process of the model, it is necessary to optimize the hyperparameters, otherwise the model is prone to high bias or high variance. In this experiment, L2 regularization was selected to prevent over-fitting, with the regularization coefficient set to 0.01, and ReLU was selected as the activation function to accelerate the parameter updates.

Optimization of Learning Rate

The learning rate is an important hyperparameter in the training of deep neural networks. In the gradient descent method, the value of the learning rate is critical: if it is too large, the model cannot converge, and if it is too small, the model converges too slowly. The experiment initially set the DNN model parameters based on experience: the number of hidden layers was n = 2, the network structure was 55-200-160-1, and the number of training iterations was 1000. Figure 3 shows the change of the training-set RMSE under different learning rates during training. When the learning rate was 0.01, the model's training error dropped rapidly in the early stage of training and converged by the end of training; compared with the other curves, the RMSE curve for a learning rate of 0.01 was smoother and fluctuated less, so 0.01 was selected as the optimal learning rate for this experiment.
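A sketch of such a learning-rate comparison using the Keras API (the paper's platform) is shown below; the network builder assumes the 55-input, ReLU, L2-regularized architecture described above, and the data are simulated placeholders:

```python
import numpy as np
import tensorflow as tf

def build_dnn(layer_sizes, lr):
    """DNN of the kind described above: 55 inputs, ReLU hidden layers with
    L2 regularization (coefficient 0.01), one linear output."""
    reg = tf.keras.regularizers.l2(0.01)
    model = tf.keras.Sequential([tf.keras.layers.InputLayer(input_shape=(55,))])
    for n in layer_sizes:
        model.add(tf.keras.layers.Dense(n, activation="relu",
                                        kernel_regularizer=reg))
    model.add(tf.keras.layers.Dense(1))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model

# Learning-rate sweep on simulated data (the paper's optimum was 0.01).
rng = np.random.default_rng(0)
X, y = rng.random((500, 55)), rng.random(500) * 100
for lr in (0.1, 0.01, 0.001):
    model = build_dnn([200, 160], lr)
    hist = model.fit(X, y, epochs=20, batch_size=32, verbose=0)
    print(lr, hist.history["root_mean_squared_error"][-1])
```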
Optimization of Hidden Layer Neuron Nodes

In order to determine the optimal number of neuron nodes in the hidden layers, this article used a grid search method. Grid search finds a suitable set of hyperparameter configurations by trying all combinations of the hyperparameters. Assuming there are K hyperparameters in total and the k-th hyperparameter can take m_k values, the total number of configuration combinations is m_1 × m_2 × ⋯ × m_K. When there are too many hyperparameters, or when a hyperparameter can take many values, the number of configuration combinations explodes, which significantly increases the time cost of finding the optimal number of hidden layer neurons. In order to reduce this search time, this paper adopted the following two methods: (1) the possible values of each hyperparameter were set at intervals of 20, with each hyperparameter ranging from 20 to 300; taking the neuron count of the first hidden layer as an example, its possible values were 20, 40, 60, ..., 300. (2) With the values of the hyperparameters already optimized by grid search held fixed, the values of the remaining hyperparameters were optimized further. Taking the optimization of the number of neurons in the third hidden layer as an example, suppose the optimal numbers of neurons in the first and second hidden layers found by grid search were 200 and 160; when optimizing the third hidden layer, the first hidden layer was set to 200 neurons and the second to 160, and the grid search method was then used to select the optimal number of neurons for the third hidden layer. The number of neurons in the other hidden layers was optimized in the same way. After grid search optimization, the optimal structures of DNN models with different numbers of hidden layers were obtained, as shown in Table 1.

Table 1. The number of optimal neurons in different hidden layers.
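The greedy, layer-by-layer search of method (2) can be sketched as follows; depths, data and epoch counts are illustrative only:

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

def build_dnn(layer_sizes, lr=0.01):
    # Same architecture as the previous sketch: 55 inputs, ReLU + L2, 1 output.
    reg = tf.keras.regularizers.l2(0.01)
    model = tf.keras.Sequential([tf.keras.layers.InputLayer(input_shape=(55,))])
    for n in layer_sizes:
        model.add(tf.keras.layers.Dense(n, activation="relu",
                                        kernel_regularizer=reg))
    model.add(tf.keras.layers.Dense(1))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model

# Simulated stand-in data: 580 cores with 55 features each, 500/80 split.
rng = np.random.default_rng(1)
X, y = rng.random((580, 55)), rng.random(580) * 100
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=80, random_state=0)

# Greedy search: widths already chosen stay fixed while the next layer's
# width is scanned over 20..300 in steps of 20 (method (2) above).
chosen = []
for depth in range(3):
    best_n, best_rmse = None, np.inf
    for n in range(20, 301, 20):
        model = build_dnn(chosen + [n])
        model.fit(X_tr, y_tr, epochs=10, batch_size=32, verbose=0)
        rmse = model.evaluate(X_te, y_te, verbose=0)[1]
        if rmse < best_rmse:
            best_n, best_rmse = n, rmse
    chosen.append(best_n)
    print(f"hidden layer {depth + 1}: {best_n} neurons (test RMSE {best_rmse:.3f})")
```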
Optimal Number of Hidden Layers

In order to explore the inherent relationship between the prediction accuracy of the movable fluid percentage of unconventional reservoirs based on the DNN model and the depth of the neural network, this experiment carried out a sensitivity analysis between prediction accuracy and network depth. The learning rates of the experimental models with different numbers of hidden layers were all set to 0.01, and the numbers of hidden layer neurons in the different experimental models are shown in Table 1. It can be seen from Figure 4 that as the number of hidden layers increased, the prediction error of the deep neural network model on the test dataset continued to decrease and the R² correlation coefficient continued to increase. When n = 5, the prediction result of the trained deep neural network model on the test dataset was the best (RMSE = 2.901, R² coefficient = 0.9753). This is because the complexity of the model increased with its depth, and the DNN model's ability to fit the mapping relationship between input features and output parameters increased accordingly. When the number of hidden layers exceeded 5, the prediction performance of the model began to deteriorate; when n = 7, the prediction accuracy was even lower than at n = 3. This is because the model's complexity was too high, so its ability to fit the training set became too strong. This overly strong fit to the training set degraded the model's robustness, which made the DNN model's prediction accuracy on the test set worse. Based on the prediction accuracy shown in Figure 4, a deep neural network with five hidden layers was selected as the best model to predict the percentage of movable fluid of unconventional reservoirs.

Training and Evaluation Results of Different Models

The deep neural network and three contrast regression models were used to predict the percentage of movable fluid in unconventional reservoirs; the prediction results are shown in Table 2. Because the K-nearest neighbor model does not have an explicit training process [22], the KNN model could not report RMSE and R² coefficients on the training set. The data in Table 2 show that, whether on the training set or the test set, the deep neural network achieved better prediction results than the other three comparison models. Compared to the BPNN, KNN, and SVR models, the prediction error of the deep neural network on the test dataset was reduced by 45.89%, 61.74% and 61.84%, respectively, and the predicted correlation coefficient R² was increased by 6.51%, 17.29% and 17.40%. In summary, the deep neural network model had a good ability to extract the shape features of the core NMR T₂ spectrum. After the models learned the training set data, the DNN model had the smallest prediction error (RMSE = 2.901) and the highest prediction correlation coefficient (R² = 0.9745) on the test dataset, which also shows that the DNN model had the best robustness. The two evaluation metrics used throughout, RMSE and R², can be computed as sketched below.
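As referenced above, the two evaluation metrics can be computed as in the following sketch; these are the standard definitions of RMSE and the R² coefficient, not code taken from the paper.

import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return float(1.0 - ss_res / ss_tot)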
Application Results of the Deep Neural Network Model

In order to further verify the prediction effect of the deep neural network model on the percentage of movable fluid in unconventional reservoirs, we predicted the percentage of movable fluid in 10 unconventional reservoir cores from Changqing Oilfield. First, each core was saturated with oil, and the nuclear magnetic resonance T₂ spectrum data of the saturated core were measured with the laboratory's nuclear magnetic resonance instrument; the laboratory method for measuring the core movable fluid percentage was then used to obtain the true movable fluid percentage of these 10 cores. Second, the trained deep neural network used the NMR T₂ spectrum data of the oil-saturated cores to predict the percentage of movable fluid. The predicted values of the models and the true values obtained in the laboratory are shown in Figure 5, from which the following can be drawn: (1) the prediction results of the deep neural network model (DNN) were the closest to the results measured by the laboratory method, followed by the BP neural network model, and the worst prediction model was the SVR; (2) on the whole, the predicted values of the machine learning models were greater than the movable fluid percentages measured in the laboratory; (3) compared with the BPNN, KNN, and SVR models, the prediction RMSE of the DNN model was reduced by 39.65%, 51.18%, and 54.35%, respectively.

Conclusions

1. The deep neural network model achieved the complex non-linear mapping from the core NMR T₂ spectrum to the movable fluid percentage, and the prediction effect of the DNN model was compared with those of the BPNN, KNN and SVR models. The experimental results demonstrated that, for the 10 core samples from Changqing Oilfield, the R² correlation coefficient between the predicted values of the DNN model and the real movable fluid percentages of the cores is as high as 0.9632, the prediction RMSE of the DNN model is reduced to 2.447, and a good prediction effect is achieved.
2. Compared with the method of predicting unconventional reservoir saturation based on logging data, the method proposed in this article, which predicts the percentage of movable fluid in unconventional reservoirs from laboratory NMR data, achieved better prediction results and a faster prediction speed. Therefore, this method can provide guidance for the intelligent development of laboratory reservoir parameter measurement.

3. The study found that the prediction accuracy of the DNN model gradually decreased when the number of hidden layers of the deep neural network model was greater than five. The likely reasons for this phenomenon are the limited amount of training data and the model's overfitting in the later stage of training. In future research, these two aspects will be addressed by: (1) increasing the core NMR data in the training set; (2) adopting a variety of methods to mitigate overfitting in the later stage of model training.
Handling controversial arguments by matrix

We introduce matrices and their blocks into Dung's theory of argumentation frameworks. It is shown that each argumentation framework has a matrix representation, and that the indirect attack relation and indirect defence relation can be characterized by computing the matrix. This provides a powerful mathematical way to determine the "controversial arguments" in an argumentation framework. We also introduce several kinds of blocks based on the matrix, and the various prudent semantics of argumentation frameworks can all be determined by computing and comparing the matrices and the blocks we have defined. In contrast with the traditional method of directed graphs, the matrix method has an excellent advantage: computability (it can even be realized on a computer easily). There is therefore a promising perspective in importing matrix theory into the research on argumentation frameworks and its related areas.

Introduction

In recent years, the area of argumentation has become increasingly central as a core study within Artificial Intelligence. A number of papers have investigated and compared the properties of the different semantics proposed for argumentation frameworks (AFs, for short) as introduced by Dung [10,4,3,11,9,19]. Early on, much of the analysis of arguments was expressed in natural language. Later, a tradition of using diagrams was developed to explicate the relations between the components of arguments. Nowadays, argumentation frameworks are usually represented as directed graphs, which play a significant role in modeling and analyzing the extension-based semantics of AFs. For further notation and techniques of argumentation, we refer the reader to [10,18,2,17,1]. This paper is a continuation of [20]. Our aim is to introduce the matrix as a new mathematical tool for the research of argumentation frameworks. First, we assign a matrix of order n to each argumentation framework with n arguments. Each element of the matrix has only two possible values, one and zero, where one represents the attack relation and zero represents the non-attack relation between two arguments (they can be the same one). Under this circumstance, the matrix can be regarded as a representation of the argumentation framework. Second, we give matrix methods to determine the indirect attack relation and indirect defence relation between two arguments. In this way, we can find the "controversial arguments" in an AF by computing its matrices. Finally, we analyze the internal structure of the matrix corresponding to the various prudent semantics of the argumentation framework, define several blocks corresponding to these prudent semantics, and give matrix approaches to determine the stable p-extensions, admissible p-extensions and complete p-extensions, which can easily be realized on a computer. As will be seen later, the matrix of an argumentation framework is not only as visual as the directed graph, but also has a significant advantage in terms of computation. We shall study the various prudent semantics of the argumentation framework by comparing and computing the matrix of the AF and its blocks.

Dung's theory of argumentation

Argumentation is a general approach to model defeasible reasoning and justification in Artificial Intelligence. So far, many theories of argumentation have been established [5,7,8]. Among them, Dung's theory of argumentation frameworks has been quite influential.
In fact, it is abstract enough to manage without any assumption on the nature of arguments or of the attack relation between arguments. Let us first recall some basic notions in Dung's theory of argumentation frameworks; we restrict them to finite argumentation frameworks. An argumentation framework is a pair F = (A, R), where A is a finite set of arguments and R ⊆ A × A represents the attack relation. For S ⊆ A, we say that (1) S is conflict-free in (A, R) if there are no a, b ∈ S such that (a, b) ∈ R; (2) a ∈ A is defeated by S in (A, R) if there is b ∈ S such that (b, a) ∈ R; (3) a ∈ A is defended by S in (A, R) if each b ∈ A with (b, a) ∈ R is defeated by S in (A, R); (4) a ∈ A is acceptable with respect to S if for each b ∈ A with (b, a) ∈ R there is some c ∈ S such that (c, b) ∈ R. Remark: For convenience, when S = {b} has only one element, we will also say that a is defended by b instead of by S. Conflict-freeness, as observed by Baroni and Giacomin [1] in their study of evaluative criteria for extension-based semantics, is viewed as a minimal requirement to be satisfied within any computationally sensible notion of a "collection of justified arguments". However, it is too weak a condition to be applied as a reasonable guarantor that a set of arguments is "collectively acceptable". Semantics for argumentation frameworks can be given by a function σ which assigns to each AF F = (A, R) a collection S ⊆ 2^A of extensions. Here, we mainly focus on the semantics σ ∈ {s, a, p, c, g, i, ss, e} for stable, admissible, preferred, complete, grounded, ideal, semi-stable and eager extensions, respectively.

Definition 1 [17] Let F = (A, R) be an argumentation framework and S ⊆ A. (1) S is a stable extension of F, i.e., S ∈ s(F), if S is conflict-free in F and each a ∈ A \ S is defeated by S in F. (2) S is an admissible extension of F, i.e., S ∈ a(F), if S is conflict-free in F and each a ∈ S is defended by S in F. (3) S is a preferred extension of F, i.e., S ∈ p(F), if S ∈ a(F) and there is no T ∈ a(F) such that S ⊊ T. (4) S is a complete extension of F, i.e., S ∈ c(F), if S ∈ a(F) and each a ∈ A defended by S in F belongs to S. (5) S is the grounded extension of F, i.e., S ∈ g(F), if S ∈ c(F) and for each T ∈ c(F) we have S ⊆ T. (6) S is the ideal extension of F, i.e., S ∈ i(F), if S ∈ a(F), S ⊆ ∩{T : T ∈ p(F)} and for each U ∈ a(F) such that U ⊆ ∩{T : T ∈ p(F)} we have U ⊆ S. (7) S is a semi-stable extension of F, i.e., S ∈ ss(F), if S ∈ a(F) and there is no T ∈ a(F) such that S ∪ S⁺ ⊊ T ∪ T⁺, where S⁺ denotes the set of arguments defeated by S. (8) S is the eager extension of F, i.e., S ∈ e(F), if S ∈ c(F), S ⊆ ∩{T : T ∈ ss(F)} and for each U ∈ a(F) such that U ⊆ ∩{T : T ∈ ss(F)} we have U ⊆ S.

Note that there are some elementary properties for any argumentation framework F = (A, R) and semantics σ. If σ ∈ {a, p, c, g}, then σ(F) ≠ ∅. If σ ∈ {g, i, e}, then σ(F) contains exactly one extension. Furthermore, the inclusions s(F) ⊆ ss(F) ⊆ p(F) ⊆ c(F) ⊆ a(F) hold for each argumentation framework F = (A, R). Since every extension of an AF under the standard semantics (stable, preferred, complete and grounded extensions) introduced by Dung is an admissible set, the concept of admissible extensions plays an important role in the study of argumentation frameworks.
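The basic notions of Definition 1 translate directly into code. The sketch below is illustrative only; the attack relation R is the one inferred for the running example used later in the paper, and is an assumption here.

A = {1, 2, 3, 4, 5}
R = {(2, 1), (3, 1), (4, 3), (5, 2), (5, 4)}   # attacks of the running example (assumed)

def conflict_free(S):
    return not any((a, b) in R for a in S for b in S)

def defeated_by(a, S):
    return any((b, a) in R for b in S)

def acceptable(a, S):
    # a is acceptable w.r.t. S: every attacker of a is defeated by S
    return all(defeated_by(b, S) for b in A if (b, a) in R)

def admissible(S):
    return conflict_free(S) and all(acceptable(a, S) for a in S)

print(admissible({1, 5}))  # False: 1's attacker 3 is not counter-attacked by {1, 5}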
Controversial arguments in frameworks

The notion of controversial arguments was first defined when Dung discussed the coherence of argumentation frameworks in [13]. Then, Coste-Marquis, Devred and Marquis considered a refinement of the concept of "conflict-free set" in order to exclude "controversial arguments", i.e., arguments x, y such that, although (x, y) ∉ R, there is an "indirect attack" by x on y; the resulting approach gives rise to the prudent semantics of [8]. Furthermore, Cayrol, Devred and Lagasquie-Schiex studied "controversial arguments" in bipolar argumentation frameworks [5].

Definition 2 [10] Let F = (A, R) be an AF and a, b ∈ A. (1) The argument a indirectly attacks the argument b iff there is an odd-length path from a to b in F, i.e., there is a finite sequence a_0, a_1, ..., a_{2n+1} such that 1) a = a_0 and b = a_{2n+1}, and 2) for each 0 ≤ i ≤ 2n, a_i attacks a_{i+1}. (2) The argument a indirectly defends the argument b iff there is an even-length path from a to b in F (length ≥ 2), i.e., there is a finite sequence a_0, a_1, ..., a_{2n} such that 1) a = a_0 and b = a_{2n}, and 2) for each 0 ≤ i < 2n, a_i attacks a_{i+1}. The argument a is said to be controversial w.r.t. b iff a indirectly attacks b and indirectly defends b.

[Figure: the directed graph of an example AF with five arguments, in which b and c attack a, d attacks c, and e attacks b and d.]

It is easy to see that a is attacked by b, and the defeater e of b is controversial w.r.t. a. Even though there is no direct conflict between a and e, it seems incautious to accept both arguments together in the same extension, for the coherence of the extension set. This problem has some practical background in AI, and it motivates the concept of prudent semantics and its study.

Definition 4 [8] Let F = (A, R) be an AF, and S ⊆ A. (1) S is p(rudent)-admissible iff every a ∈ S is acceptable w.r.t. S and S is without indirect conflicts, i.e., there is no pair of arguments a and b of S such that there is an odd-length path from a to b in F. (2) S is a stable p-extension iff S attacks every argument from A \ S and is without indirect conflicts. (3) S is a preferred p-extension iff it is maximal w.r.t. ⊆ among the p-admissible sets. (4) S is a complete p-extension iff it is p-admissible, and every argument which is acceptable w.r.t. S and without indirect conflicts with S belongs to S.

For prudent extensions of argumentation frameworks, there are several basic properties which can easily be deduced from the definitions. For the sake of self-completeness, we list them in the following. (1) If a is controversial w.r.t. b, then {a, b} cannot be included in any p-admissible set. (2) The set of all p-admissible subsets of A is a complete set of (2^A, ⊆). (3) For every p-admissible set S ⊆ A, there is at least one preferred p-extension E ⊆ A such that S ⊆ E.

The matrix of argumentation frameworks

We know that the directed graph is a traditional tool in the research of argumentation frameworks and has the feature of visualization; it is widely used for modeling and analyzing argumentation frameworks. In this paper, we introduce the matrix representation of argumentation frameworks. Besides visibility, it has an excellent advantage in analyzing argumentation frameworks and computing the extensions of the various semantics. Let us first introduce some basic notation about matrices. An m × n matrix A is a rectangular array of numbers, consisting of m rows and n columns and denoted by A = (a_{i,j}). The m × n numbers a_{1,1}, a_{1,2}, ..., a_{m,n} are the elements of the matrix A; we often call a_{i,j} the (i, j)-th element. It is important to remember that the first suffix of a_{i,j} indicates the row and the second the column. A column matrix is an n × 1 matrix, and a row matrix is a 1 × n matrix; they are denoted by (a_1, a_2, ..., a_n)^T and (a_1, a_2, ..., a_n), respectively.
Matrices of both these types can be regarded as vectors, referred to respectively as column vectors and row vectors. Usually, the i-th row of a matrix A is denoted by A_{i,*}, and the j-th column of A is denoted by A_{*,j}.

Definition 6 In an m × n matrix A we specify any k (≤ min{m, n}) different rows i_1, i_2, ..., i_k and the same number of different columns. The elements appearing at the intersections of these rows and columns form a square matrix of order k. We call this matrix a principal block of order k of the original matrix A; it is denoted by A^{i_1,i_2,...,i_k}_{i_1,i_2,...,i_k}, where i_1, i_2, ..., i_k are the numbers of the selected rows and columns.

Definition 8 In an m × n matrix A we specify any k (≤ min{m, n}) different rows i_1, i_2, ..., i_k and h (≤ min{m, n}) different columns j_1, j_2, ..., j_h. The elements appearing at the intersections of these rows and columns form a k × h matrix. We call this matrix a block of order k × h of the original matrix A; it is denoted by A^{i_1,i_2,...,i_k}_{j_1,j_2,...,j_h}, where i_1, i_2, ..., i_k are the numbers of the selected rows and j_1, j_2, ..., j_h the numbers of the selected columns.

For the underlying set A of the arguments of an AF, we may enumerate the arguments by natural numbers. Compared with the form A = {a, b, ...}, it is more convenient to put A = {1, 2, ..., n} when the cardinality of A is large. In fact, this arrangement has an obvious advantage for computing, and we will follow it in the discussion below.

Definition 9 Let F = (A, R) be an AF in which the cardinality of A is n. The matrix of F, denoted M(F) = (a_{i,j}), is the n × n matrix whose entries are determined by the following rule: a_{i,j} = 1 if (i, j) ∈ R, and a_{i,j} = 0 if (i, j) ∉ R.

In comparison with the graph-theoretic way and the mathematical-logic way, the matrix of an argumentation framework has many excellent features. First, it possesses a concise mathematical format. Second, it contains all the information of the AF, combining the arguments with the attack relation in a specific manner in the matrix M(F). Also, it can be dealt with by a program on a computer. Most important is that we can import the knowledge of matrices into the research of argumentation frameworks.

Consider the example AF above under the enumeration a = 1, b = 2, c = 3, d = 4, e = 5, so that R = {(2,1), (3,1), (4,3), (5,2), (5,4)}. It is obvious that 1 is defended by 5. Next, we study the structure of the matrix of F = (A, R) to find the reflection of the defence relation between 5 and 1. First, let us write out the matrix of F:

M(F) =
0 0 0 0 0
1 0 0 0 0
1 0 0 0 0
0 0 1 0 0
0 1 0 1 0

In the column vector M(F)_{*,1} (column 1), a_{2,1} = 1 means that (2, 1) ∈ R, and thus the argument 2 attacks the argument 1. In the row vector M(F)_{5,*} (row 5), a_{5,2} = 1 means that (5, 2) ∈ R, and thus the argument 5 attacks the argument 2. In combination, a_{2,1} = 1 and a_{5,2} = 1 in the matrix M(F) ensure the fact that 1 is defended by 5; this comes down to the element b_{5,1} ≠ 0 in the matrix M²(F) = B = (b_{i,j}). By a similar discussion, we can see that a_{3,1} = 1 and a_{4,3} = 1 in the matrix M(F) play the similar role of guaranteeing that 1 is defended by 4. In the column vector M(F)_{*,3} (column 3), a_{4,3} = 1 means that (4, 3) ∈ R, and thus the argument 4 attacks the argument 3. In the row vector M(F)_{5,*} (row 5), a_{5,4} = 1 means that (5, 4) ∈ R, and thus the argument 5 attacks the argument 4. Therefore, in the matrix M(F), a_{4,3} = 1 and a_{5,4} = 1 ensure the fact that 3 is defended by 5; this comes down to the element b_{5,3} ≠ 0 in the matrix M²(F). Further analysis indicates that the converse is also true, and we can generalize this idea to obtain a matrix method by which the defence relation between two arguments can easily be determined.
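Before the theorem is stated formally, the computation just described can be sketched as follows; this is a minimal illustration using the example's matrix (indices are shifted to 0-based for the code).

import numpy as np

n = 5
R = {(2, 1), (3, 1), (4, 3), (5, 2), (5, 4)}
M = np.zeros((n, n), dtype=int)
for (i, j) in R:
    M[i - 1, j - 1] = 1        # a_{i,j} = 1 iff (i, j) in R

B = M @ M                      # B = M^2(F)
print(B[5 - 1, 1 - 1] != 0)    # True: 1 is defended by 5 (b_{5,1} != 0)
print(B[4 - 1, 1 - 1] != 0)    # True: 1 is defended by 4 (b_{4,1} != 0)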
Theorem 13 Given an argumentation framework F = (A, R) with A = {1, 2, ..., n} and i, j ∈ A (1 ≤ i, j ≤ n), the argument j is defended by the argument i in F iff b_{i,j} ≠ 0 in the matrix M²(F) = (b_{s,t}).

Proof Assume that j is defended by i in F. Then there is some 1 ≤ t ≤ n such that the argument i attacks the argument t, and the argument t attacks the argument j. This implies that (i, t) ∈ R and (t, j) ∈ R, and therefore a_{i,t} = 1 and a_{t,j} = 1. Since a_{i,t} is at the intersection of row i and column t, and a_{t,j} is at the intersection of row t and column j, we have that b_{i,j} = a_{i,1}a_{1,j} + ... + a_{i,t}a_{t,j} + ... + a_{i,n}a_{n,j} ≠ 0. Conversely, suppose that b_{i,j} = a_{i,1}a_{1,j} + ... + a_{i,n}a_{n,j} ≠ 0. Then there is some 1 ≤ t ≤ n such that a_{i,t}a_{t,j} = 1. This implies that a_{i,t} = 1 and a_{t,j} = 1, i.e., (i, t) ∈ R and (t, j) ∈ R. It follows that the argument i attacks the argument t and the argument t attacks the argument j, and hence the argument j is defended by the argument i.

Remark: From the proof of the above theorem, we can deduce that b_{i,j} = a_{i,1}a_{1,j} + ... + a_{i,t}a_{t,j} + ... + a_{i,n}a_{n,j} = k if and only if there are k different paths from i to j in F whose length is 2.

Example (cont.) In this example, we can also find that the argument 5 indirectly attacks the argument 1, while the argument 1 is defended by the argument 5: the argument 1 is defended by the argument 4, and the argument 5 attacks the argument 4. Next, we study the structure of the matrix M(F) of F to find the reflection of the indirect attack relation between 5 and 1. In the column vector M²(F)_{*,1} (column 1), b_{4,1} = 1 means that the argument 1 is defended by the argument 4. In the row vector M(F)_{5,*} (row 5), a_{5,4} = 1 means that the argument 5 attacks the argument 4. In combination, these ensure that 5 indirectly attacks 1, which comes down to a nonzero (5, 1) entry in the matrix M³(F). Further discussion tells us that the converse is also true. We generalize this idea and give the matrix method to determine the indirect attack relation between two arguments as follows.

Theorem 14 Given an argumentation framework F = (A, R) with A = {1, 2, ..., n} and i, j ∈ A, the argument i indirectly attacks the argument j in F iff there is some odd number k such that m(k)_{i,j} ≠ 0 in the matrix M^k(F) = (m(k)_{s,t}).

Proof Assume that i indirectly attacks j in F. Then there are 1 ≤ i_1, i_2, ..., i_{k-1} ≤ n such that the argument i attacks the argument i_1, the argument i_1 attacks the argument i_2, ..., and the argument i_{k-1} attacks the argument j, where k is an odd number. This implies that (i, i_1), (i_1, i_2), ..., (i_{k-1}, j) ∈ R, and therefore a_{i,i_1} = 1, a_{i_1,i_2} = 1, ..., a_{i_{k-1},j} = 1. Since a_{i,i_1} is at the intersection of row i and column i_1, a_{i_1,i_2} at the intersection of row i_1 and column i_2, ..., and a_{i_{k-1},j} at the intersection of row i_{k-1} and column j, we have that m(k)_{i,j} ≠ 0 in the matrix M^k(F). Conversely, suppose that m(k)_{i,j} ≠ 0 in the matrix M^k(F) = M^{k-1}(F)M(F), i.e., m(k-1)_{i,1}a_{1,j} + m(k-1)_{i,2}a_{2,j} + ... + m(k-1)_{i,n}a_{n,j} ≠ 0, where k is an odd number. Then there is some 1 ≤ i_1 ≤ n such that a_{i_1,j} = 1 (in the matrix M(F) = (a_{i,j})) and m(k-1)_{i,i_1} ≠ 0 (in the matrix M^{k-1}(F) = (m(k-1)_{s,t})). Since m(k-1)_{i,i_1} = m(k-2)_{i,1}a_{1,i_1} + m(k-2)_{i,2}a_{2,i_1} + ... + m(k-2)_{i,n}a_{n,i_1} ≠ 0 in the matrix M^{k-1}(F) = M^{k-2}(F)M(F), there is some 1 ≤ i_2 ≤ n such that a_{i_2,i_1} = 1 (in the matrix M(F)) and m(k-2)_{i,i_2} ≠ 0 (in the matrix M^{k-2}(F) = (m(k-2)_{s,t})). By a similar discussion, we can find 1 ≤ i_3, i_4, ..., i_{k-1} ≤ n such that a_{i_3,i_2} = 1, a_{i_4,i_3} = 1, ..., a_{i_{k-1},i_{k-2}} = 1, a_{i,i_{k-1}} = 1. Therefore, we have (i, i_{k-1}), (i_{k-1}, i_{k-2}), ..., (i_3, i_2), (i_2, i_1), (i_1, j) ∈ R, i.e., the argument i attacks the argument i_{k-1}, the argument i_{k-1} attacks the argument i_{k-2}, ..., and the argument i_1 attacks the argument j. Since k is an odd number, we conclude that the argument i indirectly attacks the argument j.
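Theorem 14's test can be realized by scanning the odd powers of M(F), as in the sketch below (M is the matrix built in the earlier sketch; the default bound on the scanned power is an assumption, chosen generously).

import numpy as np

def indirectly_attacks(M, i, j, max_len=None):
    n = M.shape[0]
    max_len = max_len or 2 * n            # assumed bound on useful path lengths
    P = np.eye(n, dtype=int)
    for k in range(1, max_len + 1):
        P = ((P @ M) > 0).astype(int)     # 0/1 pattern of M^k
        if k % 2 == 1 and P[i - 1, j - 1]:
            return True                   # an odd-length path from i to j exists
    return False

# indirectly_attacks(M, 5, 1) -> True, via the odd path 5 -> 4 -> 3 -> 1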
For the indirect defence relation between two arguments in an argumentation framework, we have a similar result, which can be proved by referring to the proofs of theorem 13 and theorem 14.

Theorem 15 Given an argumentation framework F = (A, R) with A = {1, 2, ..., n} and i, j ∈ A (1 ≤ i, j ≤ n), the argument i indirectly defends the argument j in F iff there is some even number k greater than 2 such that m(k)_{i,j} ≠ 0 in the matrix M^k(F) = (m(k)_{s,t}).

Remark: In theorem 15, the determination of the indirect defence relation between two arguments does not involve k = 2, but it is obvious that theorem 13 and theorem 15 have the same core feature. For the sake of coherence, we rewrite theorem 13 as follows.

Theorem 16 Given an argumentation framework F = (A, R) with A = {1, 2, ..., n} and i, j ∈ A, the argument j is defended by the argument i in F iff m(2)_{i,j} ≠ 0 in the matrix M²(F) = (m(2)_{s,t}).

From the above discussion, we summarize a matrix method for determining the "controversial arguments" in an AF. It is important in the theoretic sense, and we will refine it into a more practical form below.

Lemma Let F = (A, R) be an argumentation framework with A = {1, 2, ..., n}, and suppose there is a path (i_1, i_2), (i_2, i_3), ..., (i_k, i_{k+1}) ∈ R from the argument i_1 to the argument i_{k+1} in F. Then there is some r ≤ n such that m(r)_{i_1,i_{k+1}} ≠ 0 in the matrix M^r(F).

Proof Without loss of generality, we assume that i_1 ≤ i_{k+1} and i_t ≠ i_{t+1} for 1 ≤ t ≤ k. For the argument i_2 ∈ A, let t_1 be the last index in the sequence 2, 3, ..., k (corresponding to the sequence i_2, i_3, ..., i_k) such that i_{t_1} equals i_2. Then there is a path from i_1 to i_{k+1} satisfying (i_1, i_{t_1}), (i_{t_1}, i_{t_1+1}), ..., (i_k, i_{k+1}) ∈ R. For the argument i_{t_1+1}, let t_2 be the last index in the sequence t_1 + 1, t_1 + 2, ..., k such that i_{t_2} equals i_{t_1+1}. Then there is a path from i_1 to i_{k+1} satisfying (i_1, i_{t_1}), (i_{t_1}, i_{t_2}), (i_{t_2}, i_{t_2+1}), ..., (i_k, i_{k+1}) ∈ R. This process ends at some step, and we finally obtain a path from i_1 to i_{k+1} such that (i_1, i_{t_1}), (i_{t_1}, i_{t_2}), (i_{t_2}, i_{t_3}), ..., (i_{t_{r-1}}, i_{t_r}), (i_{t_r}, i_{k+1}) ∈ R. It follows that a_{i_1,i_{t_1}} = 1, a_{i_{t_1},i_{t_2}} = 1, ..., a_{i_{t_r},i_{k+1}} = 1, and thus m(r)_{i_1,i_{k+1}} ≠ 0 in the matrix M^r(F) (just as in the first paragraph of the proof of theorem 14). From the selection of i_{t_1}, i_{t_2}, ..., i_{t_r}, we conclude that they are different from each other, and thus r ≤ n.

With this preparation, we can improve the above matrix method into a powerful tool for finding all the "controversial arguments" in an AF as follows: compute the powers M^k(F), sum the odd powers into a matrix M(O) = (m(O)_{i,j}) and the even powers into a matrix M(E) = (m(E)_{i,j}). Then, by checking their elements, we will find all the "controversial arguments" in F: if both m(O)_{i,j} and m(E)_{i,j} are nonzero, then i is controversial w.r.t. j; otherwise, i is not controversial w.r.t. j.

Compared with checking a directed graph, computing matrices has a great advantage in determining the "controversial arguments": we only need to compute the matrices, without any comparing and reasoning on the directed graph. Especially when the AF has a large number of arguments, drawing a directed graph is not an easy thing. Furthermore, the computation of the matrices can be carried out on a computer easily.
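One plausible implementation of this refined method is sketched below: it accumulates the nonzero patterns of the odd and even powers of M(F) into M(O) and M(E) and tests both entries. The power bound 2n is an assumption made for safety (the lemma bounds simple paths by n, but discarding a cycle can change a path's parity).

import numpy as np

def odd_even_patterns(M):
    n = M.shape[0]
    MO = np.zeros_like(M)                 # accumulates odd powers of M
    ME = np.zeros_like(M)                 # accumulates even powers of M
    P = np.eye(n, dtype=int)
    for k in range(1, 2 * n + 1):
        P = ((P @ M) > 0).astype(int)     # 0/1 pattern of M^k
        if k % 2 == 1:
            MO |= P
        else:
            ME |= P
    return MO, ME

def controversial(M, i, j):               # 1-based argument indices
    MO, ME = odd_even_patterns(M)
    return bool(MO[i - 1, j - 1] and ME[i - 1, j - 1])

# controversial(M, 5, 1) -> True in the running example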
Determination of the stable p-extensions

For convenience, from this section on we assume that the sequences i_1, i_2, ..., i_k and j_1, j_2, ..., j_h are both increasing. For a subset S = {i_1, i_2, ..., i_k} ⊆ A, the principal block of order k in the matrix M(F) on the rows and columns i_1, i_2, ..., i_k is called the cf-block of S, denoted by M_cf for short. In other words, the elements appearing at the intersections of the rows i_1, i_2, ..., i_k and the same columns in the matrix M(F) form the cf-block M_cf of S.

Definition 21 [20] Let F = (A, R) be an argumentation framework with A = {1, 2, ..., n}, let M(F) = (a_{i,j}) be the matrix of F, and let S = {i_1, i_2, ..., i_k} ⊆ A be a stable extension of F with A \ S = {j_1, j_2, ..., j_h}. We say that the k × h block M^{i_1,...,i_k}_{j_1,...,j_h} of M(F) is the s-block of S; and when S = {i_1, ..., i_k} ⊆ A is a stable p-extension of F, we say that the principal block M_p(O)^{i_1,...,i_k}_{i_1,...,i_k} of the matrix M(O) is the p-block of S.

In fact, conditions (1) and (2) below hold by definition 4 and theorem 23. For any i_s, i_t ∈ S (1 ≤ s, t ≤ k), we know that the argument i_t does not indirectly attack the argument i_s, so we have that m(O)_{i_t,i_s} = 0 by corollary 19. It follows that the p-block M_p(O)^{i_1,i_2,...,i_k}_{i_1,i_2,...,i_k} of S is zero. Conversely, suppose that conditions (1), (2) and (3) hold. Then, by theorem 23, S is first of all a stable extension. For any i_t, i_s ∈ S (1 ≤ s, t ≤ k), we have m(O)_{i_t,i_s} = 0 by condition (3). It follows that the argument i_t does not indirectly attack the argument i_s. Therefore, we conclude that S is a stable p-extension.

Remark: In this theorem, condition (1) ensures that S is a conflict-free set, and condition (2) shows the feature of S that there is no indirect attack relation between any two arguments of S.

Determination of the admissible p-extensions

Definition 25 [20] Let F = (A, R) be an argumentation framework with A = {1, 2, ..., n}, and let S = {i_1, i_2, ..., i_k} ⊆ A be an admissible extension of F with A \ S = {j_1, ..., j_h}. Then the h × k block M^{j_1,...,j_h}_{i_1,...,i_k} of M(F) is associated with S. For the determination of the preferred p-extensions of an argumentation framework, we may compare all the p-admissible sets to find the maximal ones. By proposition 5, each maximal p-admissible set is a preferred p-extension of the AF.

Determination of the complete p-extensions

Definition 28 [20] Let F = (A, R) be an argumentation framework with A = {1, 2, ..., n}, and let S = {i_1, i_2, ..., i_k} ⊆ A be a complete extension of F with A \ S = {j_1, ..., j_h}. We say that the block of M²(F) on the rows i_1, ..., i_k and the columns j_1, ..., j_h is the pcd-block of S.

For any p-admissible set S, an argument i ∈ S is obviously acceptable w.r.t. S and without indirect conflict with any argument of S. By definition 4, the following lemma is an immediate result for S to be a complete p-extension.

Lemma 31 Let F = (A, R) be an argumentation framework with A = {1, 2, ..., n}, S = {i_1, i_2, ..., i_k} ⊆ A, and A \ S = {j_1, j_2, ..., j_h}. Then S is a complete p-extension of F iff S is a p-admissible set and, for each 1 ≤ t ≤ h, j_t ∈ A \ S is not defended by S or has an indirect conflict with some element of S. Namely, the elements appearing at the intersections of the rows i_1, i_2, ..., i_k and the columns j_1, j_2, ..., j_h in the matrix M²(F) form the pcd-block M_pcd(2)^{i_1,i_2,...,i_k}_{j_1,j_2,...,j_h} of S.

Proof Condition (1) ensures that j_t ∈ A \ S is not defended by S, i.e., j_t is not acceptable w.r.t. S; condition (2) means that j_t ∈ A \ S is attacked by some i_r ∈ S (1 ≤ r ≤ k); and condition (3) indicates that j_t ∈ A \ S attacks some i_r ∈ S (1 ≤ r ≤ k). Any one of these conditions implies that j_t does not belong to S, and thus the p-admissible set S is a complete p-extension. Conversely, suppose that S is a complete p-extension. Then, by lemma 31, S is certainly a p-admissible set. Furthermore, any j_t ∈ A \ S must not be defended by S, or must have an indirect conflict with some element of S, in light of definition 4. Therefore, one of the three conditions must hold.
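The block conditions of this and the preceding subsections lend themselves to direct computation. The sketch below checks the simplest case, a stable p-extension, by testing the cf-block, the attack of every outside argument, and the p-block of M(O); the three tests follow the text's conditions (1)-(3), while the helper names are ours.

import numpy as np

def is_stable_p_extension(M, MO, S):
    # M: attack matrix of F; MO: odd-power pattern from the earlier sketch;
    # S: candidate set of 1-based argument indices.
    n = M.shape[0]
    rows = [i - 1 for i in sorted(S)]
    cols = [j for j in range(n) if j + 1 not in S]       # A \ S
    cf_zero = not M[np.ix_(rows, rows)].any()            # (1) cf-block of M(F) is zero
    defeats_rest = all(M[rows, j].any() for j in cols)   # (2) S attacks every j in A \ S
    p_zero = not MO[np.ix_(rows, rows)].any()            # (3) p-block of M(O) is zero
    return cf_zero and defeats_rest and p_zero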
Conclusions and perspectives

In this paper, we introduced the matrix M(F) of an argumentation framework and several theorems to decide the various prudent extensions (stable p-extension, p-admissible set, preferred p-extension, complete p-extension) of the AF by means of blocks of the matrices M²(F) and M(O) of the AF and relations between these blocks. Compared with traditional ways of dealing with argumentation frameworks, such as reasoning and directed graphs, our matrix method has the advantage of computability: if we want to find all the "controversial arguments" or prudent extensions, we only need to compute the related matrices and blocks. More importantly, we can apply matrix theory to the research of argumentation frameworks, and this may bring a mathematical period to argumentation frameworks. Interestingly, the p-block M(O)^{i_1,...,i_k}_{i_1,...,i_k} of S corresponds to the determination for S to be a stable p-extension (p-admissible set, complete p-extension, respectively). And the pca1-block M(O)^{i_1,...,i_k}_{j_1,...,j_h} of S is exactly the complementary block of the pca2-block M(O)^{j_1,...,j_h}_{i_1,...,i_k} of S. Also, we can determine the complete p-extensions of an AF by computing and checking the pcd-block M_pcd(2), the pca1-block M_pca1(O) and the pca2-block M_pca2(O). These facts indicate that there is indeed a correspondence between an argumentation framework and its matrix. So, we can investigate the structure and properties of an argumentation framework by using the theory and methods of matrices. The prospect is that we can find out the internal patterns of AFs and the relations between the different objects of concern in AFs by studying the related matrices and blocks of AFs. Our future goal is to develop the matrix method in areas related to argumentation frameworks, such as argument acceptability, dialogue games, algorithms and complexity, and so on [14,19,13,12,15,16].
Practitioners' perspective on user experience and design of cycle highways

Cycle highways, also known as "fast cycle routes", are an emerging concept in urban planning that describes long distance, high quality bicycle routes built for commuter use. In Northern European countries, large sums of money are invested into cycle highways promising to induce a mode shift, with little critical assessment as to how cyclists experience these infrastructures. Through eleven interviews of practitioners from five European countries (the Netherlands, Belgium, Germany, the United Kingdom and Denmark), this paper explores how practitioners define cycle highways and how their conceptualizations of cycling experience shape the physical design of cycle highways. Results show that while practitioners are guided by infrastructural standards for cycle highways such as width, design speed, and intersection treatments, it is less clear how these infrastructure elements fit within the surrounding environment to create desirable cycling experiences. In addition to commuters, cycle highways are also used by recreational and sport cyclists, so policy makers and designers should consider a wide variety of user groups and their aesthetic and social experiences in the planning and design of cycle highways. Future research should investigate cycle highway experiences from the perspective of various user types.

Introduction

Cities around the world are building cycle highways to encourage sustainable inter-urban transport using bicycles, e-bikes, and other forms of small wheeled vehicles (Pucher and Buehler, 2017). To further reduce automobile use and to promote physical activity, environmental sustainability, economic growth, and accessibility, cities in Europe have invested in a variety of infrastructure and policies to improve the attractiveness of cycling (Buekers et al., 2015). Cycle highways are often framed within a package of interventions, along with improvements to public transport, with the intent of changing commuting behavior by substituting investments in road infrastructure to cope with expected commuter traffic growth (Skov-Petersen et al., 2017). From general cycling research, we know that cycling becomes relatively less attractive compared to other modes as trip distances increase (Heinen et al., 2010; Scheepers et al., 2013). Cycle highways seek to encourage cycling for longer distance commuting trips, and survey data from governments seem to suggest that users of cycle highways do indeed tend to take longer trips ("Cycling Report for the Capital Region", 2016; "FIETS-GEN studie Eindrapport", 2012; Faber). On a policy level, Rayaprolu et al.
(2018) attribute cycle highways to a Dutch concept responding to "rising environmental and health consciousness, and the growing popularity of electric bicycles". At the time of writing this paper, major cycle highway routes and networks are being planned and constructed in northern and western Europe (Rayaprolu et al., 2018). The Netherlands was the first to experiment with the cycle highway concept, with demonstration routes in Tilburg and The Hague in the 1970s, yet modern designs have only been implemented since 2004 (ter Avest, 2015; Kristjánsdóttir and Sjöö, 2017). More recently, the concept of "Cycle Superhighways" has been popularized in the English media, with London opening its first routes in 2010 and having eight completed as of 2018 ("Cycle Superhighways", 2018). Copenhagen opened its first cycle highway in 2012, with fifteen planned for 2021 ("Cycle Superhighways Capital Region of Denmark", 2018). More recently, Germany began executing its first plans for cycle highways with three pilot projects in 2012, following examples of cycle highways from the Netherlands, Copenhagen, Belgium, and London (Thiemann-Linden and Van Boeckhout, 2012). Similarly, the Netherlands is planning a nation-wide network of bicycle highways that connect urban cores.

As more attention, funding and projects utilize the language of cycle highways to improve cycling numbers, there does not appear to be a clear understanding among design and planning professionals and policymakers of what cycle highways are and what they should be, with evolving conceptualizations of their design and purpose. For example, the first generation of cycle superhighways in London, built in 2010, was little more than blue paint on high traffic roads. London's new cycle superhighways have since evolved towards a more "continental" design, incorporating elements such as traffic separation and protected intersections ("Evolution of cycle superhighways in London", 2018). The European Cyclists' Federation CHIPS project defines cycle highways as "...a mobility product that provides a high quality functional cycling connection. As backbone of a cycle network, it connects cities and or suburbs, residential areas and major (work)places and it satisfies its (potential) users" (Faber). However, there are multiple terms that could be used almost interchangeably to describe similar typologies, such as "cycle superhighways", "greenways", "high quality cycle paths", "through cycle routes", and "fast cycle routes", to name a few. Without a clear definition, and especially given the variety of languages used to describe the cycle highway concept, it is difficult to assess the performance of cycle highways as an intervention and to transfer knowledge about successes and failures, especially across countries. It also blinds us to underlying, and contested, assumptions of what cycling is, or ought to be.
Currently, using the terminology of "cycle highway" might be strengthening an underlying vehicular approach to bicycle infrastructure design. In relation to this, Dutch practitioners Sargentini and Valenta (2015) warn that bicycle paths should not be built with the same logic as automobile highways and should instead take cyclists' embodied experiences and a variety of individual motives into account. They urge practitioners to stay away from car-oriented thinking, moving beyond A-to-B logic, and proclaim: "do not make cycle highways into car highways!" (Sargentini and Valenta, 2015). This plea for unpacking the black box of travel by developing a more nuanced understanding of the journey is echoed by mobilities researchers who have conceptualized travel in terms of meanings and experiences (Sheller and Urry, 2006; Spinney, 2011; Jensen et al., 2016; te Brömmelstroet et al., 2017).

There is also tension within the concept "cycle highway" itself. On the one hand, cycling has an experiential element that scholars have attempted to conceptualize in relation to aesthetics, emotions, and spatial design (Stefánsdóttir, 2014; Forsyth and Krizek, 2011; Spinney, 2009; Krizek, 2019; Liu et al., 2018). Yet the term highway seems to place this type of infrastructure more in common with the logic of automobile highways, focused only on the fast and efficient transport of people and goods (Koglin and Rye, 2014). Hamilton-Baillie (2004) conceptualized traffic zones versus social zones as realms of competing logic, both physically and conceptually. Hamilton-Baillie defines the traffic zone as "single purpose, uniform, regulated, impersonal, and predictable", whereas the social zone is characterized as "multi-functional, diverse, culturally defined, personal, unpredictable". On a street, these zones are demarcated by the sidewalk for pedestrians and the roadway for motorized vehicles. Where do cycle highways belong on this scheme, and what design logic do cycle highways currently follow? To what extent do practitioners pay attention to each aspect of Hamilton-Baillie's logic, and do cycle highways seek to create a unique zone for the cyclist, taking into account Forsyth and Krizek's unique perspective of the cyclist (Forsyth and Krizek, 2011)?

In academic literature, cycle highways have been analyzed from a few perspectives. From bicycle counter data and three questionnaire campaigns, Skov-Petersen et al. (2017) analyze Copenhagen cycle highways in the framework of induced travel demand, cyclist satisfaction and competition for funding. From the public health perspective, Buekers et al.
(2015) estimate the health impact of the modal shift due to two cycle highways in Flanders, Belgium. From the physical design perspective, Kristjánsdóttir and Sjöö (2017) provide a technical review of European cycle highway standards in the Netherlands, Denmark, the United Kingdom, Germany, Norway, and Sweden, focusing on engineering criteria such as infrastructure type, intersections, markings, lighting, width, curve radii, etc. This paper seeks to develop an understanding of how practitioners define cycle highways and how they conceptualize users, experiences, and design in relation to cycle highways. Cycle highways incorporate many of the elements known to improve the attractiveness of cycling, such as priority crossings, rest areas, lighting and effective wayfinding (Thiemann-Linden and Van Boeckhout, 2012). While these measures have been shown to improve the attractiveness of cycling routes (Heinen et al., 2010), there is relatively little academic research on how these elements impact the experience of cycling, and none to date that explores practitioners' conceptualization of cycling experience. Thus, our research questions are:

1. What are the main concepts used to describe and define cycle highways by practitioners?
2. How do practitioners articulate cyclist types and cyclists' motives within the conceptualization of cycle highways?
3. How is cycling experience conceptualized by cycle highway practitioners?
4. How is the perspective of the cyclist reflected in the design of cycle highways?

Selection of practitioners

We interviewed practitioners from five European countries that are actively working on developing cycle highway networks: the Netherlands, Belgium, Germany, the United Kingdom, and Denmark. To select interview participants, an initial search was conducted of internet and media reports of cycle highway projects that are either recently constructed, under construction, or being planned in the near future. Particular attention was paid to northern and western European countries in which cycling is relatively mature (Pucher and Buehler, 2008; Vandenbulcke et al., 2009). London, although with lower cycling rates, has been actively building a cycle highway system.

From the list of projects based on geographic location, expert government practitioners were selected for interview based on their associated project, their position in the organization, and their work portfolio having contained cycle highways. Interviewees for this research hold, or have previously held, positions in regional or provincial governments working on cycle highway projects for at least two years; the time in their position is used as an indicator of their familiarity with the subject area. Given the relative novelty of cycle highways as a concept, none of the interviewees had a formal education in cycle highway planning and design, and perhaps due to this novelty, none had spent more than ten years working on cycle highways. All interviews were conducted in English (Table 1).
Interview structure

We followed a semi-structured interview format consisting of four sections. These sections ask practitioners about: 1) the general concept of cycle highways, including their typology, differentiation, and best practices; 2) the cycle highways they have worked on, including design priorities, good and bad aspects of design, and target users; 3) the ideal cycling experience, relating this ideal experience to any considerations of cycling experience in the design of the case study cycle highway; and 4) the professional role and knowledge sources of the interviewee, including the focus of their work, the extent and type of their professional network, their experience with cycle highways, and their use of professional and academic sources on cycle highway design.

Each interview lasted between 45 min and 1 h, and participants were encouraged to share personal anecdotes where relevant to the question. Interviews were recorded in person or through recorded telephone or internet voice calls. Interview data were transcribed and then coded inductively, focusing on the following themes: 1) definition of cycle highways, 2) design of cycle highways, 3) user types and trip purposes, and 4) experience of cycling (see Appendix A for the interview script). These themes then formed the basis for the findings of this paper.

Competing logics

The interviews began by establishing how cycle highways are defined. Participants were asked "what is a cycle highway?" and "what makes cycle highways distinct from other types of infrastructure?" Participants responded with reference to three general themes, representing competing logics that are implicit in the discourse surrounding cycle highways. These logics contextualize the extent to which cycling experience plays a role in current discourse among practitioners. Broadly, these categories are:

1. Political context, jurisdiction, and funding
2. Infrastructure and environmental quality
3. Directness, efficiency, and competition with other modes

Cycle highways are defined differently among the practitioners interviewed, with responses varying between the perspectives of policy makers, designers, and engineers. Some respondents feel there is no clear definition at this point. NL2 states, "I've got no clue. I've been working for 10 years in it, I've got no clue, but it really depends on who you ask. I think that's a proper answer." Policy makers have also framed the concept of cycle highways differently depending on the state of political priorities. In reference to the Netherlands: "Probably the answer in the coming four years is that it will help us reduce our carbon dioxide emissions, and maybe in the four years after that it might contribute to a healthier city... By strategic positioning of projects as a cycling highway you see that it gets us more attention and gets us more political attention and thus you can get more funding, and then suddenly you can also become more ambitious as a matter of fact, and you can invest more" (NL2).
Cycle highways should also distinguish themselves from other cycling infrastructure by having a distinct character achieved through signage, infrastructure design, and environmental quality. GR2 states, "at the first glimpse, you should see it's more than an ordinary bike path, meaning there should be a special design, a special color scheme, and unique signage of the cycle highway, so you see that it is not just an ordinary bike path, but that you have really a special way for cycling." When asked about taking cyclists' experiences into account, BE2 says there is a growing realization of the importance of the surrounding environment adjacent to the bike path, stating, "we are struggling with that question because our main goal, what our politicians asked from us, is that we build a clean, smooth, and wide infrastructure, and there is not really a real vision about how a cycle highway feels and what it has to offer alongside this infrastructure." Definitions of cycle highways tend to require high quality cycling infrastructure, yet quality is defined in terms of minimum physical design standards and lacks a vision for how physical design relates to improving the cycling experience.

Some practitioners choose to define cycle highways primarily through a political lens, in relation to jurisdiction and funding. UK1 emphasized the importance of allocating cyclists their own space on the street and distinctive branding, yet jurisdictional boundaries can limit the types of infrastructure that can be built. UK1 gives the example that Transport for London only has jurisdiction over major arterial roads, so London's Cycle Superhighway infrastructure is built on heavy traffic corridors. Given this limitation, London's Cycle Superhighways focus on creating an easy-to-follow route from the suburbs to central London. In the context of Copenhagen, cycle highways must pass through many municipalities with different objectives and political agendas, so compromises are made in the quality of routing and design elements where political boundaries are crossed. In practical terms, "it means some municipalities are not very ambitious. They must do what they need to do in order to get it approved" (DK1). Thus, cycle highways are also distinguished from other cycling infrastructure through their strategic relevance on a regional and national level, in many cases requiring cooperation from many municipalities in order to realize a continuous cycle highway route.
In addition to physical design and political context, a third logic is revealed through the language used to describe geographic connections and relative efficiencies over a larger scale. These descriptions place cycle highways in relation to traffic network and urban planning goals. Interviewees conceptualize cycle highways as providing the fastest, most direct, and most efficient route between two places over relatively long commuting distances, directly connecting suburbs to urban centers. "To bring them (cyclists) from A to B, without lots of interference with other traffic, and giving them their own space is crucial. But that's the dream. In reality, we do not always achieve the high level that we want" (BE2). Another goal of cycle highways is to encourage people to switch from cars to cycling, especially for commuting trips, where convenience is a key factor in accomplishing this goal. The German RS1 case reveals that the literal translation of the term "Radschnellweg", or "bicycle highway", is taken seriously in the marketing of the route: the RS1 logo is a bicycle imposed on the recognizable blue sign used to represent the German Autobahn network (Radschnellwege in NRW, 2014). UK2 also relates cycle highways to the design of motorways: "I would say it is a dedicated cycle facility. And one that is pretty fast and direct. If I was thinking what a highway is and then applying it to cycling, that's what I come up with."

These definitions of cycle highways by practitioners illustrate that the existing logic of cycle highways seeks to implement engineering-based criteria of cycle highway design that are limited by funding, ambitions, and cooperation among bordering political entities. It is clear that conceptualizations of cycling experiences are missing from the initial definitions given by practitioners, even though interviewees have an intuitive sense that experiential elements play a role in improving the attractiveness of cycling trips.

User differentiation by motives, demographics, and vehicle types

After defining cycle highways, practitioners were asked about their conceptualization of the relationships between the various users of cycle highways and their cycling experiences. In general, practitioners prioritize commuter cyclists' needs and design cycle highways with home-to-work journeys in mind. "The question is, what do we design it for? We do it for the commuters etc., and they want to spend the least time on mobility and transportation, so that means they want to get from A to B in the shortest time" (GR2). There are other cyclist needs, but the primary target group of cycle highways is commuters who want to minimize their travel time. "If you are doing it via greenways etc., it may be the case that it takes much longer and that is okay if it is about leisure activities on the weekend, but I think most of the people just want to get to their destination quite quickly" (GR2). Cycle highways should also be inclusive for users of all ages and abilities. BE2 says, "when we design a cycle highway we try to design it for eight year olds so they can cycle independently from A to B." But problems may also arise from the mix of users on cycle highways and how they interact with each other. "We have a problem from certain cyclists... the more soft ones, kindergarten children, elderly. And when we used the word FAST as a term to define a cycle highway
...then you refer to what people see when they think about the highway, and they think SPEED. It's a real discussion. Some people are afraid because of the high speeds." BE1's response considers how the faster speeds of speed-pedelecs (fast e-bikes) and sport cyclists create potential conflict with the needs of more leisurely commuters: "There are also people who bike more at ease and they say, 'I don't want to hurry.' These people also want to use the cycle highways. Cycle highways are also for them." BE2 then mentions the problem of understanding and accommodating the cycling experiences of different people: "We have some colleagues who are older. They like something else compared to the younger ones. Men, women, and children may also like different things, so you try to make something one-size-fits-all or, at least, appreciated by different target groups." As in more famous cycling contexts such as Copenhagen and Amsterdam, urban tourists on bikes are a category that is being recognized in London as well: "...there's now at least three, probably four companies who do cycle tours around central London, and they all use the superhighways more or less to get round the tourist sites, and obviously with the London cycle hire, you see a lot more people cycling along the inner superhighways, whereas before they would have kept themselves to the parks instead of the road" (UK1). Hence, UK1 sees different users for each part of the cycle highway network: "[We want] to get commuters in from the outside to central London and then get them out of the cars. I would say the central part of the network is much more designed with recreational use in mind as well, so we don't just design something for the morning rush and the evening rush."

While it is clear that cycle highways are primarily designed for commuters, practitioners are well aware of the different experiences perceived by different people. In addition to commuters, users are differentiated by their trip purpose (sport cyclists, leisurely recreation cyclists, commuters, etc.), their vehicle (e-bike, normal bike, etc.), their age (children, elderly, etc.), and gender. Although the primary target audience of "commuters" is clear, cycle highways should also be designed with different users in mind.
A variety of experiences along a route also seems to be important. There may not be one ideal cycling environment; a combination of environments, with transitions that give variety to the cycling journey, may be more ideal. GR2 states, "you are also passing through greenbelts and then you have the rural experience of just being in the countryside, so it is a mixture of both urban areas and rural parts. So that makes it quite attractive because you have both experiences being on the cycle highway." Design considerations change when designing for long-distance versus short-distance journeys, and DK1 emphasizes both the social and sensory aspects of cycling, and how these relate to a sense of time: "Longer distance, especially commuting, and in that sense if time is important for you, but also the experience as a cyclist: you just like dealing with pedestrians, you like to have something to look [at]. You like to have other people around you, so I think to that extent it's possible, you should definitely try to have the cycle highways away from car traffic with the noise. And have it in places where it's either really beautiful or there's other people around that you can look at, because it'll make time fly by. And also, that's what you can do on a bike. You interact with your surroundings." Practitioners from Flemish Belgium reflect on the similarity of their cycling culture to that of the Netherlands, in that cycling is seen as a social experience, highlighting the importance of being able to cycle side by side, especially over long distances on cycle highways.

GR1 gives a vivid account of the journey experience from a spatial perspective, alluding to many of Kevin Lynch's (1960) ideas about navigating and experiencing the city. GR1 describes, "For example, when you go on the cycle highway, you see the biggest inner-city tower or something that you want to reach. Like when I go... I live in Heidelberg, it's 20 km from Mannheim, when I go cycling to the office, I always see the Television Tower of Mannheim, so you see it getting closer and closer and you think, 'I'll get there.' It's not hard stuff, but the soft topics should not be ignored and there should be no feeling like 'How much longer will it still take?'... You should say, 'Ah, how fast, my ride is over now!' so when you reach your office, it should be like 'Ah, I want to continue cycling... the weather was so nice, etc.'" UK2 mentions wayfinding as an important aspect of experience: "I think having that certainty of where you're going or what's close to you is a big deal. There's nothing like going out on a bike and kind of embarking on a journey through a network, and then you get lost and your confidence will just drop and you need to use your phone." DK2 remarks that cyclists should feel like they are part of the traffic picture: "People should have a good time while using cycle highways... and feel like they are contributing by taking the bicycle instead of the car."
Overall, visual aspects of experience were mentioned, including greenery, nature, and landscape. Landmarks are an interesting case in that they represent both an element of aesthetic pleasure and a wayfinding reference point. Participants also made the distinction between urban and rural environments and mentioned the importance of these transitions and variations in creating an interesting cycling experience. Non-visual experience includes noise, weather, and comfort in relation to the quality of the infrastructure. In terms of comfort, surfacing was deemed an important factor, with overall quality determined by materials, construction quality, and maintenance. There are also differing views on cycling together with other people. Some pictured a solitary cyclist on the highway in the countryside, while others talked about the pleasure of being able to interact with others. Others mentioned the ideal cycling experience as one that provides opportunities for "serendipity" or "being able to ride hands free", and perhaps good design is design that enables these experiences as well.

Design considerations

Width, quality standards, and intersections are the main concepts mentioned in relation to design. Practitioners say they refer to design standards to guide their work, but many cite difficulties when the ideal physical requirements of cycle highway design conflict with other uses of space in urban settings. For example, GR2 refers to the design standard for cycle highways in Germany, which is ideally a 4 m bi-directional cycle path with a 2 m path for pedestrians ("Feasibility Study Radschnellweg Ruhr RS1", 2014). However, participants recognize that segregated cycling infrastructure is not possible on streets where space is limited in the central city, so mixing or separating bicycle traffic and motorized traffic is a recurring design consideration in urban environments. Even though high quality is frequently mentioned in describing the design of cycle highways, it is unclear what exactly high quality entails. Where cycling infrastructure is relatively new, for example in the context of London, cycle highway designers have started recognizing cyclists as road users with their own needs, distinct from the needs of pedestrians or automobiles. UK1 states, "Instead of either treating them as pedestrians and putting them on the footway, or treating them as traffic and putting them in with general traffic... you design specifically for the cyclists at the start of your scheme, instead of trying to put a cycle facility in almost as an afterthought to your designs. Yeah, I would say that's probably the biggest change: cyclists are now thought of right at the start of a project instead of as a, 'Oh yeah, we just need to do something. Let's put a little bit of wide lane in or a bit of paint for them.'"
Some practitioners also emphasize the perspective of cyclists in the design process. BE2 explains that cycling infrastructure is best understood by those who have experience using it: "In cycle infrastructure it is the Flemish road agency that designed a lot of cycle paths, but they are engineers who don't cycle, and then you see the difference" (BE2). DK2 uses traffic lights to illustrate a counterintuitive point that highlights the behavior of people in response to unreasonable infrastructure. DK2 says, "the worst thing is always, of course, when you have a good speed on the bicycle and then you have to stop for a red light." DK2 continues, "we must be aware that if they feel annoyed by stopping, they will actually try to run the red lights, and that could lead to a situation where they actually have some accidents which you could perhaps have avoided, because they get impatient." So it seems that not losing momentum, especially on a human-powered vehicle, is an important part of the cycling experience, and designing around this experience can also help cyclists negotiate traffic safely. Cycling experience also depends not only on design, but on the behavior of others. BE2 remarks, "we have to be respectful to each other. It's a soft mode of transport."

Practitioners agree that the design of cycle highways cannot be wholly copied from automobile infrastructure: "It's not my aim to make a copy of highways now for cycle highways, because it's different. Cyclists are not motorists. They have other needs. You can't just copy-paste. It's not possible. It's not a good idea." (BE2). Yet BE3 suggests that the aesthetic considerations of scenic parkways in the United States can serve as inspiration for some aspects of cycle highway design: "even motorways are sometimes designed from the point of view of pleasure in a way. You could find some interesting examples where you add a slight bend where you look at the landscape and the scenery. I think in the United States, sometimes they have beautiful examples." This sentiment resonates with ideas from Appleyard, Lynch, and Myer's The View from the Road on how to design landscapes and environments to be enjoyed on the move (Appleyard et al., 1965). However, BE3 cautions, "of course you have to be careful with comparing with motorways, but I think for cycling, and that's a really important point,
one of the motivations to cycle is also the pleasure of cycling, and doing something healthy, and working on your condition, and enjoying the environment, and nature, and the weather, et cetera. And if we want people to commute more, we have to think about their motivation to commute." Traffic logic is also implied in wayfinding signage, which directs cyclists to the fastest route, not necessarily the most scenic: "Cycle highways are directed at commuters who go to work, and serve a wayfinding function to signal the most direct route to follow" (DK2). DK1 mentions the importance of providing alternatives to fit cyclists' desires for directness and experience, especially through built-up areas: "We have this route that runs along an old railway line, and it actually goes right through Copenhagen. But it will never be the fastest route, because it curves a lot. But it's just so much more fun to take it. The infrastructure's good, but you go through parks and squares and there's something happening along the entire route, so I think that would be a case of: if you want to go really direct, you would take one of the main roads along with the cars. Or, if you want to experience something, you would take the other route. It's also just a trade-off what can actually be done here, because there's already a city." The conceptualization of design varied in scale of analysis, from detailed design, such as smoothness of pavement and cycle path width, to network-level characteristics, such as route connectivity and directness. Experiential elements such as enjoyability, convenience, safety, and attractiveness are often mentioned in relation to physical design, along with concrete ideas such as design speed, traffic separation, curves, traffic volume, and other measurable variables. Although designing for good cycling experiences is not prescribed by design standards, practitioners try to incorporate their own intuition of good design with the goal of making journeys more pleasant for cyclists.

Defining cycle highways

Practitioners gave two types of cycle highway definitions, one relating to goals and another relating to execution. Policies set out visions and goals that cycle highways should fulfil, while design manuals attempt to translate these visions and goals into physical design. Bridging policy and design manuals are funding requirements that define what types of infrastructure qualify for regional and national funding schemes. A definition in terms of goals refers to matters of policy, such as sustainability, traffic congestion, and the desirability of a fast, efficient, and equitable transport system. A second type of definition focuses on the design of cycling infrastructure to meet these goals, in terms of speed, directness, width, quality standards, and signage. The two types of definitions can be linked by examining how good design can serve policy goals. Practitioners believe that good design of cycle highways can induce commuters to cycle instead of travelling by car, and that the main mechanisms for this mode shift are better comfort and travel time and cost savings. This logic of using cycle highways to induce mode shift was tested in the research of Skov-Petersen et al.
(Skov-Petersen et al., 2017) on a Copenhagen case study. They found that most of the increased cycling along the new cycle highways was the result of cyclists switching from alternative routes, with "only a modest share (4-6%) of the bicyclists on the renewed routes switched to cycling from other transport modes" (Skov-Petersen et al., 2017). At the same time, their surveys showed improved cycling experience along the new route in terms of surface quality, lighting conditions, traffic safety, and personal safety (Skov-Petersen et al., 2017). These research findings suggest that cycle highways may not be meeting their desired policy goals of shifting commuter traffic towards cycling, but higher-quality cycling infrastructure still imparts benefits to existing cycle commuters and recreational cyclists. Thus, defining cycle highways in relation to the policy goal of achieving mode shift may not fully capture the intrinsic benefits of higher-quality design that makes cycling a more comfortable mode of travel for existing users.

Non-commuting uses of cycle highways

Cycle highways are a challenge for practitioners because it is unclear how related concepts such as "high quality", "functional", and "attractive" should be interpreted and how these criteria can be translated into physical design. On a policy level, cycle highways are conceptualized as functional infrastructure to reduce automobile congestion by encouraging commuting by bicycle (CHIPS, 2016). Yet even with measures to improve directness and flow, the slower speed of cycling over longer distances cannot compete directly with motorized modes in terms of minimizing travel time. Attention to the quality of the surrounding environment can make cycle highways more attractive not just on the basis of time savings, but also by creating a pleasant experience for cyclists (Forsyth and Krizek, 2011). Practitioners are aware that the same cycle highways built to attract commuters also draw other uses, such as recreation, sport, and tourism. For urban designers, these uses are considered optional activities that highlight the intrinsic attractiveness of cycling in relation to the environment, and a high level of optional activity is indicative of a good-quality physical environment. In reference to pedestrians, Gehl (2011) defines optional activities as "...taking a walk to get a breath of fresh air, standing around enjoying life, or sitting and sunbathing. These activities take place only when exterior conditions are favorable, when weather and place invite them" (Gehl, 2011). For cycling, a high proportion of non-commuting activity is an indication of good spatial quality, which also benefits commuter cyclists through intrinsic benefits such as better familiarity with one's surroundings, connection with other people, freedom, and cognitive stimulation (Krizek, 2019). It is likely that commuter cyclists enjoy the same intrinsic benefits as those gained by non-commuting cyclists, plus the quantified health, cost, and travel time benefits of cycling (Buekers et al., 2015; Rayaprolu et al., 2018).
User experience from a cyclist's perspective

Practitioners recognize the importance of designing for a good cycling experience. When asked what makes for an ideal cycling experience, interviewees engaged with broader concepts such as greenery, noise, weather, landscape, and moving scenery. Practitioners benefit from being able to view a design in relation to the potential experiences of the people their infrastructure seeks to serve, and we found that practitioners draw extensively on their own experiences to talk about cycle highway design. A recent Dutch study by Goudappel Coffeng found differences between respondents large enough that there is no average cycling experience, and that it is more informative to understand cycle routes from the perspectives of different cyclists. They identified five different user types and found that many people cycle for both commuting and leisure, so there is not always a clear relationship between individual trip purpose and the characteristics of the cyclist (Kalter and Groenendijk, 2018). A diversity of speeds on the cycle path also leads to a social problem of interaction between the various users of the space (te Brömmelstroet et al., 2017).

It seems that the challenge with cycle highways, in the model of Hamilton-Baillie, is the quest to provide a uniform, regulated, and predictable environment for faster cyclists while also providing enough variety to satisfy the desire for diverse, personal, and serendipitous environments for more relaxed, leisure cycling. Public transport research shows that the subjective feeling of waiting for a bus feels twice as long as being underway, and that waiting time can be subjectively reduced by giving passengers an indication of expected arrival time (Fan et al., 2016). The same logic can be applied to traffic lights or to the design of wayfinding elements. Wayfinding is generally focused on quality signage and readability at higher speeds, but some practitioners also conceptualize wayfinding in terms of reference points and notable changes in physical environments. Lynch (1960) discusses a multisensorial, albeit primarily visual, approach to wayfinding, and ethnographic research by van Duppen and Spierings (2013) shows that journeys experienced on a bike are also composed of transitory experiences such as smells, traffic, sounds, and the weather. As cyclists experience each journey differently, these observations highlight the opportunity for a multisensory and inclusive approach to cycle highway design.

Flexibility in design

Practitioners tend to conceptualize high-quality standards in terms of wide paths, direct connections, quality of paving, and wayfinding, yet it is unclear to what degree positive experiences arise from well-designed infrastructure and traffic regulation devices versus aesthetic elements and social activity along a cycle highway. Some cycle highway designs include pedestrian paths and others do not. Some cycle highways include sections of shared streets with automobiles, while other routes are completely separated from motor traffic (Figs. 1 and 2).
Cycle highways in the Netherlands permit heavier vehicles, such as mopeds travelling up to 45 km/h, while cycle highways in Germany only permit lighter e-bikes with a maximum speed of 25 km/h. There are opportunities to take advantage of the mix of typologies seen on existing cycle highways like the RijnWaalpad in the Netherlands, and in plans for future cycle highways, as illustrated in a feasibility study for a Mannheim to Heidelberg connection (Albrecht et al., 2018). We know that design concepts carry different meanings when applied to automobile landscapes (Appleyard et al., 1965) versus pedestrian environments (Gehl, 2011), and the term "cycle highway" is taken more literally in some contexts than in others. For example, the German RS1 stands in clear relationship with automotive highways through both the design of its logo and an image of a bicycle in the middle of an empty motorway (Radschnellwege in NRW, 2014). As an alternative to "highway", the Dutch also use the term "fast bicycle routes" to describe their system of long-distance bicycle infrastructure, in order to move the discourse away from associations with automobile highways, but as revealed in the interviews, even the word "fast" is a point of contention (Appleyard et al., 1965).

In terms of design logic, cycle highway practitioners struggle with how the uniform, predictable, and regulated engineering of highway environments can be balanced with the diverse, vibrant, and human-scale design of pedestrian environments (Hamilton-Baillie, 2004). However, all participants recognize to varying degrees that the idea of a "highway" means something different for bicycles than for automobiles. "There needs to be a middle ground, but I do feel that in the current debate we sometimes tend to move too much to the engineering part," says NL2, recounting the construction of the RijnWaalpad between Arnhem and Nijmegen in the Netherlands. "At that point, we discussed it from a traffic engineering point of view, but during the process, we quickly discovered that this wasn't enough." As meeting minimum cycle highway standards is necessary for many projects to receive subsidies from the national and regional governments, these funding criteria determine the basic physical form of cycle highways in terms of width, intersection frequency, lighting, and grading in various street and spatial typologies. Whereas these design requirements form the building blocks of the cycle highway typology, practitioners are still left with flexibility in terms of route choice and designing cycle highways to fit their surrounding context.
Limitations and future research

There are four limitations to this study that provide opportunities for future research. First, as there is growing awareness of the cycle highway concept outside of Europe, the views of European practitioners may not translate directly to other contexts. It would be interesting to explore how the cycle highway concept can be adapted to contexts with different planning agendas and a wider diversity of land use patterns, and to work towards a framework for evaluation. Second, cycle highways have not been researched from the perspective of cyclists themselves. It is clear that practitioners draw extensively from their personal experiences of cycling, but the exact meaning of experiences should be properly explored and defined from the perspective of various user groups in the context of cycle highways. From Jensen's (2013) Staging Mobilities perspective, this paper explored staging from above: how planning, design, regulations, and institutions shape bicycle highways from the perspective of practitioners. In addition, a nuanced understanding of experiences should be obtained from users themselves, and of how cycle highways are staged from below by the activity of their users. Third, written knowledge, in the form of design manuals and policy documents, has not been extensively reviewed in this paper. Practitioners derive their knowledge and framework of discussion from policy documents and design guidelines, so research focusing extensively on those documents would add depth to understanding how the process of designing cycle highways and other cycling infrastructure takes place. Fourth, practitioners repeatedly mentioned that cycle highways can facilitate the use of e-bikes, and studies do show that e-bike users perform more trips and cycle longer distances than conventional cyclists (Fishman and Cherry, 2016; Fyhri and Fearnley, 2015). The discussion of user experience and behavior becomes increasingly important as we see an increasing heterogeneity of speeds and vehicle types, such as e-bikes, scooters, and other personal electric vehicles, sharing cycling infrastructure with human-powered transport.

Table 1. Interview participants, affiliations, and their cycle highway projects.

"We don't say what this high quality means in the definition. It's more a functional definition, but it means that you have higher quality than just normal cycle infrastructure... The problem with quality is, you could say we need, for instance, four meters wide and not too many pedestrians, or if there are a lot of pedestrians, you have space for the pedestrians, like in the RS1 in Germany. In practice, you could also sometimes have just a quiet road where you mix a little bit with cars."
MoDHX35, a DEAH-Box Protein, Is Required for Appressoria Formation and Full Virulence of the Rice Blast Fungus, Magnaporthe oryzae

The DExD/H-box protein family encompasses a large number of RNA helicases that are involved in RNA metabolism and a variety of physiological functions in different species. However, there is limited knowledge of whether DExD/H-box proteins play a role in the pathogenicity of plant fungal pathogens. In the present work, the DExD/H-box protein MoDHX35, which belongs to the DEAH subfamily, was shown to be crucial for appressoria formation and full virulence of the rice blast fungus, Magnaporthe oryzae. The predicted protein sequence of MoDHX35 has typical DEAH-box domains and shows 47% identity to DHX35 in Homo sapiens, but has no orthologs in Saccharomyces cerevisiae. Deletion of the MoDHX35 gene resulted in reduced tolerance of the mutants to doxorubicin, a nucleic acid synthesis-disturbing agent, suggesting the involvement of MoDHX35 in RNA metabolism. MoDHX35-deleted mutants exhibited normal vegetative growth, conidia generation and conidial germination, but showed a reduced appressorium formation rate and attenuated virulence. Our work demonstrates the involvement of DEAH-box protein functions in the pathogenicity of plant fungal pathogens.

Introduction

Magnaporthe oryzae, a well-known filamentous fungus, causes rice blast, the most devastating rice disease worldwide [1,2]. The fungus can cause systemic symptoms by infecting rice leaves, sheaths, necks, and even roots [3]. In addition to rice, the pathogen may infect a variety of domesticated grasses, including barley, wheat, pearl millet and turf-grass [4]. The infection cycle of this fungus is initiated from a three-cell conidium [5]. Abundant conidia are produced on the surface of a lesion and repeat the reinfection during the rice growing season, surviving the winter to start a new infection cycle in the following year. The conidium exudes mucilage that helps it stick to the surface of the host to aid germination within a few hours in the right environment [6]. Once germinated, the germ tube develops a specialized infection structure called the appressorium at its tip. The appressorium possesses a thick cell wall and accumulates highly concentrated glycerol to generate enormous turgor [7,8]. Relying on this turgor, a thin penetration peg emerges under the mature appressorium to puncture the host surface, enter a plant epidermal cell, and commence invasive development. The capability to generate appressoria is thus of key importance for the pathogenicity of the rice blast fungus. The signal pathways that control appressorial morphogenesis in M. oryzae have been extensively investigated in recent decades [9,10]. MPG1, a tiny hydrophobin-encoding gene, for example, is required for efficient appressorium formation and full pathogenicity.

MoDHX35 Is Up-Regulated during Appressoria Formation

To assess the expression profile of MoDHX35, a conidia suspension was allowed to germinate and form appressoria on plastic slices, and the relative transcription levels of the gene were tested using quantitative RT-PCR. MoDHX35 was found to be up-regulated gradually during germination and appressoria formation, reaching its peak value at 10 to 12 h post induction, the key period of appressorium formation (Figure 3).

Figure 3. Relative transcript abundance of MoDHX35 during conidial germination and appressorial formation. The transcript abundance, normalized to the β-tubulin gene (MGG_00604), was measured by quantitative RT-PCR at a series of time points and compared to that in non-incubated conidia.
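The legend above specifies only normalization to β-tubulin and comparison with non-incubated conidia; the quantification formula itself is not stated in the text. A minimal sketch of the standard 2^(-ΔΔCt) (Livak) calculation, assuming that method was used; all Ct values and time labels below are illustrative, not data from the paper:

    # Relative expression by the 2^-ddCt (Livak) method -- a sketch, not the
    # authors' pipeline. Ct values below are illustrative, not from the paper.
    target = {"0h": 28.1, "4h": 27.2, "8h": 26.0, "10h": 25.1, "12h": 25.3}  # MoDHX35 Ct
    ref    = {"0h": 22.0, "4h": 22.1, "8h": 21.9, "10h": 22.0, "12h": 22.2}  # beta-tubulin (MGG_00604) Ct

    # dCt at each time point: target Ct minus reference Ct
    dct = {t: target[t] - ref[t] for t in target}

    # ddCt relative to the calibrator (non-incubated conidia, 0 h)
    ddct = {t: dct[t] - dct["0h"] for t in dct}

    # Fold change relative to 0 h
    fold = {t: 2 ** (-ddct[t]) for t in ddct}
    for t, f in fold.items():
        print(f"{t}: {f:.2f}-fold")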
Gene Replacement of MoDHX35 and Mutant Recovery

The gene replacement vector P1300-HPH-DHX35KO was transferred into Guy11 via AtMT (Figure 4A). One hundred and sixty-three hygromycin B-resistant transformants were obtained. Twenty-two transformants, selected randomly, were single-spore isolated and then screened preliminarily by PCR (Figure 4B). Four transformants (DHX35-6, DHX35-7, DHX35-8, and DHX35-9) lacking the amplicon of the MoDHX35 locus were identified as potential mutants and were further confirmed by Southern blotting. Their genomic DNA was digested with EcoRI and hybridized with a 2127 bp probe downstream of MoDHX35. The null mutants in which gene replacement had occurred produced a 4.7 kb hybridization band, in contrast to the wild-type Guy11, which produced a 6.6 kb band. The random insertion transformants possessed a 6.6 kb band, representing the wild-type MoDHX35 locus, and another band of random size (Figure 4C). Thus, DHX35-6, DHX35-7, DHX35-8 and DHX35-9 are regarded as true MoDHX35 null mutants, and DHX35-5 is a random insertion transformant. Reverse transcription PCR was used for further confirmation and showed that MoDHX35 transcription was eliminated in the mutants. The four mutants were identical in colonial morphology, radial growth and conidiation; DHX35-9 was used as a representative in this study. The complementation plasmid p1300-BAR-HB-MoDHX35 was reintroduced into the DHX35-9 genome. The complemented transformants, DHX35-9-10 and DHX35-9-16, were confirmed by genomic PCR and RT-PCR. MoDHX35 in DHX35-9-10 and DHX35-9-16 was transcribed at a level comparable to that in the wild-type strain Guy11 (Figure 4D).
Figure 4. (C) Genomic DNA of the wild type, an ectopic transformant, and the potential ΔMoDHX35 mutants (D6, D7, D8 and D9) was digested with EcoRI and subjected to Southern blotting. The 6.6-kb hybridization band was detected in the wild type, whereas 4.7-kb bands were present in the four potential mutants, consistent with gene deletion events. The ectopic transformant generated two bands, one of which was equal in size to that of the wild type. (D) Genomic PCR was used to validate the complemented transformants of MoDHX35 by amplifying the MoDHX35 fragment and the Bar gene.

Loss of MoDHX35 Increases the Sensitivity of the Mutant to Doxorubicin

Doxorubicin, which is commonly used to treat cancer, can block the synthesis of nucleic acids. Here, the null mutant, the complemented transformants and the wild-type Guy11 were grown on CM plates with 75 µg/mL doxorubicin. The images clearly show that the null mutant grows more slowly than Guy11 (Figure 5A,B). However, the complemented mutants grow at almost the same rate as the wild type. These results indicate that MoDHX35 may play an important role in nucleic acid synthesis.

MoDHX35 Contributes to M. oryzae Appressorium Formation

To determine which infection steps caused the pathogenicity defects of the MoDHX35 mutant, germination and appressoria formation of the null mutant were compared to those of the wild type and the complemented strains. At 2 h post incubation, 97% of Guy11 conidia had germinated, as had 95.2% of DHX35-9 conidia, with no obvious difference between them, indicating that MoDHX35 has no effect on M. oryzae conidial germination. All tested strains were able to form appressoria (Figure 6A).
However, as shown in Figure 6B, only 53.82% of the null mutant conidia (n = 1000) produced appressoria, compared with 99.74% of Guy11 conidia. This comparison shows that the knockout of MoDHX35 truly affected the ability to form appressoria. Appressoria formation in the complemented mutants was restored, which means that the reduction in appressorium formation in the null mutant is due to the MoDHX35 knockout itself.

MoDHX35 Is Required for M. oryzae Pathogenicity

A pathogenicity assay was performed on the susceptible rice cultivar CO39. Conidial suspensions of Guy11, DHX35-9, DHX35-9-10 and DHX35-9-16, at equal concentrations, were spray-inoculated onto rice seedlings. At 7 d post inoculation, the rice leaves inoculated with the wild type developed typical symptoms, while the DHX35-9 mutant exhibited significantly reduced virulence compared with the wild type. An average of 24.0 ± 2.6 lesions was generated on the 5-cm leaves inoculated with DHX35-9, significantly lower than the 115.0 ± 15.7 caused by Guy11. On the other hand, the complemented strains DHX35-9-10 and DHX35-9-16 exhibited virulence comparable to that of the wild type on rice leaves (Figure 7A,B). The data indicate that MoDHX35 is required for the full virulence of the rice blast fungus on rice. We then examined the infection structures on barley leaves under a microscope. As observed on the artificial surface, the DHX35-9 mutant formed appressoria at a reduced rate on barley leaves. Nevertheless, the appressoria of the mutant retained infection capability, and the development of the infection hyphae did not exhibit a significant difference from that of the wild type (Figure 7C). The data indicate that the attenuated virulence of the MoDHX35 null mutant was due, predominantly, to the reduction in appressoria formation.
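The lesion-count comparison above can be checked from the reported summary statistics alone. A minimal sketch using scipy; n = 3 per group is an assumption taken from the "three replicates" stated in the Figure 7 legend that follows, and Welch's form is our choice given the very different spreads of the two groups:

    # Two-sample t-test from the reported lesion-count summaries -- a sketch.
    # n=3 per group is assumed from "three replicates"; means/SDs are from the text.
    from scipy.stats import ttest_ind_from_stats

    t, p = ttest_ind_from_stats(
        mean1=115.0, std1=15.7, nobs1=3,   # wild-type Guy11
        mean2=24.0,  std2=2.6,  nobs2=3,   # DHX35-9 null mutant
        equal_var=False,                   # Welch's t-test; variances differ widely
    )
    print(f"t = {t:.2f}, p = {p:.4f}")     # p comes out below 0.01 for these values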
Figure 7. (B) Error bars represent the deviation from three replicates, and double asterisks indicate significant differences at the p < 0.01 level. (C) The infection structures of the mutant and the wild type were compared on barley leaves. Detached barley leaves were drop-inoculated with the conidia suspension and examined at 36 hpi. The infection hyphae of the mutant developed at a level equivalent to those of the wild type. Bar = 10 µm.

Deletion of MoDHX35 Does Not Affect the Sexual Development of the Fungus

After cross-inoculation with the 2539 strain for 30 days, both the mutants and the wild-type strain generated typical perithecia on oatmeal agar medium (OMA) (Figure 8A,B). Calcofluor white and Nile red fluorescent staining were used to highlight the cell wall and the cellular lipids, respectively, in the asci and ascospores. With the aid of the Calcofluor white and Nile red staining, eight ascospores could be found clearly in a mature ascus of the wild type, as well as of the MoDHX35-deleted mutant (Figure 8C,D).

MoDHX35 Mutants Are Not Temperature Sensitive

A number of DExD/H-box gene mutants in S. cerevisiae are temperature-sensitive, so we tested the growth rates of the MoDHX35-deleted mutants at different temperatures. The results show that the growth rates of all tested strains decrease at temperatures above or below 28 °C, especially at high temperatures, but no significant difference in growth rate was found between the mutants and the wild type at the same temperature (Figure S1). This indicates that the MoDHX35 mutants are not temperature sensitive.

Deletion of MoDHX35 Does Not Alter the Nutritional Utilization, Osmotic Stress Tolerance or Chemical Resistance of the Fungus

We used culture media with different carbon and nitrogen sources, namely CM-C (without a carbon source), CM-N (without a nitrogen source), CM-C + 50 mM sucrose, CM-C + 1% Tween 80, CM-C + 50 mM sodium acetate, CM-C + 1% oleic acid, CM-C + 1% olive oil and MM, to test whether the deletion of MoDHX35 influenced the nutrient utilization of the fungus. On the above media, the wild-type Guy11 and the mutants DHX35-6, DHX35-8 and DHX35-9 exhibited no significant differences in growth rates or colonial morphology (Figure S2).
On CM containing 0.2 M, 0.4 M, 0.6 M or 0.8 M NaCl, the MoDHX35-deleted mutants and the wild type grew at equivalent rates and showed no significant difference in colonial morphology (Figure S3A). This indicates that MoDHX35 does not participate in carbon and nitrogen utilization, lipid degradation or tolerance of osmotic stress. To compare the cell-wall integrity of the mutants and the wild type, we cultured the strains on CM supplemented with Calcofluor white for 7 days and found no difference between the mutants and the wild type (Figure S3B). We also compared the tolerance of the mutants and the wild type to carbendazim, a commonly used fungicide, and to cycloheximide, an inhibitor of protein biosynthesis, and did not detect obvious differences either (Figure S3C,D).

Discussion

In this study, we characterized a DExD/H-box protein-coding gene, MoDHX35 (MGG_02518), which plays a role in the pathogenicity of the rice blast fungus. The MoDHX35 protein possesses the seven typical conserved domains of DExD/H-box proteins and shows 47% similarity to DHX35, a DEAH-box helicase in Homo sapiens. The MoDHX35-deleted mutants showed increased sensitivity to doxorubicin, a nucleic acid synthesis-disturbing agent, also suggesting a role for MoDHX35 as an RNA helicase. The deletion of MoDHX35 affected the formation of appressoria and significantly reduced virulence on rice leaves. In addition, the deficiency of the MoDHX35-deleted mutants could be rescued by reintroduction of a MoDHX35 cassette. The expression of MoDHX35, reflected by real-time RT-PCR, indicates that MoDHX35 is up-regulated at 10 to 12 h post induction of the conidia, a key stage for appressorium differentiation (Talbot, 1995; 2003), corresponding well with the phenotype of the mutants and supporting the involvement of MoDHX35 in appressorium formation. To date, only a very limited number of DExD/H-box proteins have been documented in M. oryzae, including two Dicer-like proteins (DCLs), MDL-1 and MDL-2. Gene deletion indicates that MDL-2 is responsible for the RNA silencing pathway in M. oryzae [37]. Meanwhile, the MDL-2 mutant was also found to have a slightly slower growth rate at 22 and 30 °C compared with the wild type. Nevertheless, whether the DCLs are involved in fungal pathogenicity has not been reported. Therefore, our data provide a case illuminating the involvement of DExD/H-box family proteins in the pathogenicity of plant fungal pathogens. DExD/H-box family proteins are widely accepted as RNA helicases that participate in various processes of RNA metabolism and are thus involved in several important metabolic processes of development [38]. We therefore also checked other phenotypes related to pathogenicity and fungal development, including nutritional utilization, osmotic stress and resistance to chemicals, but few changes were found compared to the wild type, except for the sensitivity to doxorubicin. Thus, MoDHX35 is likely a DEAH protein specifically mediating pathogenicity in M. oryzae, and the deficiency of the mutants in pathogenicity is mainly due to the failure in appressorium formation. According to the BlastP results, the hit most similar to MGG_02518 in S. cerevisiae is Prp22. S. cerevisiae Prp22 mediates mRNA release from the spliceosome and unwinds RNA duplexes [39]. In Homo sapiens, DHX8 is the homolog of S. cerevisiae Prp22.
Meanwhile, DHX35, as well as DHX15, DHX16, DHX37 and DHX38, are also regarded as paralogs of DHX8 on account of their sequence similarity and their possibly related functions in RNA metabolism [40,41]. The phylogenetic tree built from the closely related DHX and PRP proteins in M. oryzae, S. cerevisiae and Homo sapiens supports MGG_02518 as the homolog of DHX35, while MGG_08807 is the M. oryzae protein most similar to S. cerevisiae Prp22 and Homo sapiens DHX8. Nevertheless, a number of possible paralogs of MoDHX35 are present in the M. oryzae genome, such as MGG_08807, MGG_03893, MGG_04040, MGG_07501 and MGG_11351. Whether these proteins are truly paralogs or orthologs of the known DExD/H proteins, and what their exact functions in metabolism, development and pathogenicity in M. oryzae may be, are attractive topics worthy of further investigation. Moreover, we have, in fact, tried to delete all of the possible DExD/H protein-encoding genes using knockout strategies, but failed to obtain null mutants for most of the genes (data not shown). These facts may reflect the key roles of DExD/H proteins in fundamental life activities: once they are deleted, the mutants are lethal. In summary, in the present work we demonstrated that MoDHX35, a DEAH-box protein, is involved in the appressorium formation and pathogenicity of the rice blast fungus, M. oryzae.

Materials and Methods

Fungal Strains and Growth Conditions

The strains used in this study are listed in Table 1. The M. oryzae wild-type strain Guy11, the mutants and the complemented strains were routinely cultured on complete medium (CM) at 28 °C with a 16-h light/8-h dark cycle. For liquid cultivation, approximately 1 × 10⁵ conidia were placed in 100 mL liquid CM at 28 °C and shaken at 150 rpm for 2 days.

MoDHX35 Isolation and Sequence Analysis

The CDS fragment of MoDHX35 was amplified using the primers MoDHX35-CDS-F1 and MoDHX35-CDS-R1 from cDNA samples produced from RNA isolated from mycelia of the Guy11 strain. The product was cloned into the pEASY-T3 vector (TransGen, Beijing, China) and sequenced. Homologous sequences were retrieved by searching the NCBI database with the BlastP program. Sequence alignment was performed using the Clustal W module in MEGA 5.10, with a gap opening penalty of 10 and a gap extension penalty of 0.2. The neighbor-joining method with a bootstrap test of 2000 replicates was used to build the phylogenetic tree.

Vector Construction, Gene Deletion and Mutant Complementation

p1300-HPH, a binary plasmid for MoDHX35 deletion, was created by inserting a 1344-bp hygromycin phosphotransferase (HPH) cassette into pCAMBIA1300 as a backbone for gene substitution. Following this, using primers MoDHX35-Up-F1 and MoDHX35-Up-R1, a 1335-bp upstream flanking segment of MoDHX35 was amplified from Guy11 and inserted into p1300-HPH between the SacI and KpnI sites to generate p1300-HPH-MoDHX35UP. Similarly, a 2124-bp downstream flank of MoDHX35 was amplified with primers MoDHX35-Down-F1 and MoDHX35-Down-R1 and inserted into p1300-HPH-MoDHX35UP between the BamHI and HindIII sites to generate the disruption vector p1300-HPH-MoDHX35KO. The p1300-HPH-MoDHX35KO vector was introduced into Guy11. The resulting hygromycin B-resistant transformants were screened by PCR using the primer pairs MoDHX35-Genecheck-F1 and MoDHX35-Genecheck-R1, Hph-Check-F1 and Hph-Check-R1, MoDHX35-Upcheck-F1 and Sequence-Up, and MoDHX35-Downcheck-R1 and Sequence-Down.
Transformants negative for the 2.6 kb MoDHX35 gene amplicon and positive for the 1.3 kb 5′ flank and 2.1 kb 3′ flank amplicons were selected as potential knockout mutants. The mutants arising from gene replacement were confirmed by Southern blotting analysis and selected for phenotypic study. To construct the complementation plasmid of MoDHX35, a 4200 bp fragment containing the full length of the MoDHX35 gene and 1700 bp upstream of the start codon was amplified using the primers MoDHX35-Com-F1 and MoDHX35-Com-R1 and cloned into pEASY-T3 (TransGen, Beijing, China). After confirmation by sequencing, the fragment was inserted into p1300-BAR, a plasmid containing the BAR gene (glufosinate ammonium resistance) in the pCAMBIA1300 backbone, to generate the complementation vector p1300-BAR-HB-MoDHX35. The p1300-BAR-HB-MoDHX35 vector was introduced into one of the knockout mutants. The resulting complemented transformants were confirmed by PCR with primers MoDHX35-Genecheck-F1 and MoDHX35-Genecheck-R1. All vectors were integrated into M. oryzae strains via Agrobacterium tumefaciens-mediated transformation (AtMT). CM plates containing the corresponding antibiotics (250 µg/mL hygromycin B (Roche, Basel, Switzerland) or 200 µg/mL glufosinate-ammonium (Sigma, St. Louis, MO, USA)) were used to screen the transformants. The primers used in this work are listed in Table 2.

Table 2. The primers used in this study.

Vegetative Growth and Conidiation on Culture Media

The vegetative growth of all tested strains was examined by inoculating 5 mm discs of mycelia onto 9 cm diameter culture media for 9 d and then measuring the colony diameters. Sensitivity assays were performed in the same way on CM plates supplemented with the corresponding agents. To estimate conidiation, conidia were harvested by washing the 7-day-old colonies on CM with 10 mL ddH2O and filtering through three layers of lens paper. The conidia were concentrated by centrifugation at 5000 rpm and 4 °C for 10 min, resuspended in centrifuge tubes with 0.2 mL ddH2O, and counted using a hemocytometer. The experiment was repeated 3 times with at least 3 replicates each time.

Assay for Conidial Germination and Appressorial Formation

Conidial suspensions (200 µL at 1 × 10⁵/mL) were placed on plastic coverslips and incubated in a humid Petri dish at 28 °C to induce germination and appressorial formation. Samples at a series of time points up to 48 h after incubation were examined under an Olympus BX51 microscope (Olympus, Japan). The appressorium formation rate was determined from at least 300 conidia for each sample. The cell wall and hyphal septa were visualized by Calcofluor white staining, and cellular lipids were stained with Nile red, as previously described. Fluorescent samples were examined using a Leica SP2 laser confocal fluorescence system (Leica, Mannheim, Germany).

Pathogenicity Tests

The pathogenicity assay was performed using 14-day-old seedlings of the susceptible rice cultivar CO39. Rice seedlings were spray-inoculated with 1 × 10⁵/mL conidial suspensions of all tested strains containing 0.25% (w/v) gelatin and incubated in a growth chamber at 28 °C and 90% relative humidity for 7 to 10 days. Lesions were counted on randomly chosen 5-cm leaf tips, and the mean lesion densities were calculated and statistically compared.

Nucleic Acid Manipulations and Quantitative RT-PCR

Genomic DNA was extracted using the CTAB (hexadecyltrimethylammonium bromide) method [11].
Total RNA extraction followed the method described previously [11]. Electrophoresis, restriction digestion, ligation reactions and Southern blotting were all carried out following standard procedures [42]. For Southern blotting, the genomic DNA of all tested strains was digested with EcoRI, separated on 1% agarose, and hybridized using the 2124-bp downstream flanking fragment as a probe. Total RNA samples isolated from mycelia, conidia or appressoria were used to synthesize cDNA using reverse transcriptase. Quantitative RT-PCR was performed on an ABI 7500 Fast real-time PCR system (ABI, Raleigh, NC, USA) with SYBR Premix Ex Taq (Takara, Kusatsu, Japan).

Sexual Reproduction

M. oryzae strain 2539 was inoculated in a cross pattern with Guy11 and the mutants on an OMA plate and incubated at 20 °C under continuous light for 30 days; the morphology of the perithecia, asci and ascospores was then observed and photographed. The asci and ascospores were stained with 0.1% Calcofluor white and 0.1% Nile red, respectively, before observation.
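The sequence analysis described above was performed in MEGA 5.10 (Clustal W alignment, neighbor-joining, 2000 bootstrap replicates). For readers who script such analyses, a minimal Biopython sketch of the distance-based NJ step is given below; the file name dhx_proteins.aln is hypothetical, the alignment must already exist (Biopython does not run Clustal W itself), and bootstrapping is omitted for brevity:

    # Neighbor-joining tree from an existing protein alignment -- a sketch of the
    # kind of analysis done in MEGA, not the authors' actual pipeline.
    from Bio import AlignIO, Phylo
    from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

    # Clustal-format alignment of the DHX/PRP protein sequences (hypothetical file)
    alignment = AlignIO.read("dhx_proteins.aln", "clustal")

    # Pairwise distances from percent identity, then an NJ tree
    calculator = DistanceCalculator("identity")
    constructor = DistanceTreeConstructor(calculator, method="nj")
    tree = constructor.build_tree(alignment)

    Phylo.draw_ascii(tree)  # quick text rendering of the topology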
The Structure of the Porcine Deltacoronavirus Main Protease Reveals a Conserved Target for the Design of Antivirals

The existing zoonotic coronaviruses (CoVs) and viral genetic variants are important microbiological pathogens that cause severe disease in humans and animals. Currently, no effective broad-spectrum antiviral drugs against existing and emerging CoVs are available. The CoV main protease (Mpro) plays an essential role in viral replication, making it an ideal target for drug development. However, the structure of a Deltacoronavirus Mpro has remained unavailable. Porcine deltacoronavirus (PDCoV) is a novel CoV that belongs to the genus Deltacoronavirus and causes atrophic enteritis, severe diarrhea, vomiting and dehydration in pigs. Here, we determined the structure of PDCoV Mpro complexed with a Michael acceptor inhibitor. Structural comparison showed that the backbone of PDCoV Mpro is similar to those of alpha-, beta- and gamma-CoV Mpros. The substrate-binding pocket of Mpro is well conserved in the subfamily Coronavirinae. In addition, we observed that Mpros from the same genus adopt a similar conformation. Furthermore, the structure of PDCoV Mpro in complex with a Michael acceptor inhibitor revealed the mechanism of its inhibition of PDCoV Mpro. Our results provide a basis for the development of broad-spectrum antivirals against PDCoV and other CoVs.

Introduction

Coronaviruses (CoVs) are round or oval enveloped viruses with a positive-sense RNA genome [1]. CoVs are among the most dangerous microbiological pathogens that infect mammals, such as humans, mice, cats, and pigs, as well as birds, such as sparrows, and they are responsible for a large number of gastric, enteric and respiratory syndromes [2-6]. In 2003, an outbreak of severe acute respiratory syndrome (SARS) led to an international epidemic, and severe acute respiratory syndrome coronavirus (SARS-CoV) was demonstrated to be the etiological agent [7-10]. In 2012, a novel CoV, Middle East respiratory syndrome coronavirus (MERS-CoV), was reported in Saudi Arabia [11]. MERS-CoV infection can cause patients to develop acute renal failure. In late December 2019, a novel coronavirus (SARS-CoV-2) was identified in Wuhan, Hubei Province. This infectious pneumonia has spread worldwide, and as of January 2022, more than 318 million people have been infected, and 5.5 million have died from the disease [12]. The ceaseless emergence of new pathogenic CoVs indicates that CoVs remain an enormous threat to public health security. However, peptidomimetic inhibitors carrying a Michael acceptor warhead are effective against the Mpros of all CoVs.

Gene Expression and Protein Purification

The PDCoV Mpro coding sequence was cloned into the BamHI and XhoI restriction sites of the pET-28b_SUMO vector and then transformed into Escherichia coli strain BL21 (DE3). The fusion protein SUMO-PDCoV Mpro was purified by Ni-affinity chromatography (GE Healthcare, Uppsala, Sweden) and then cleaved with ULP protease. Mpro was further purified using anion exchange chromatography (HiTrap Q, GE Healthcare, Uppsala, Sweden) with a linear gradient from 2.5 to 500 mM NaCl (20 mM Tris-HCl pH 8.0) and size exclusion chromatography (Superdex 75 10/300 GL, GE Healthcare, Uppsala, Sweden) in 10 mM HEPES pH 7.5 and 150 mM NaCl.
Crystallization, Data Collection, Structure Determination and Refinement

Crystals of the complex were obtained by cocrystallization following the incubation of 1 mg mL⁻¹ PDCoV Mpro and 10 mM N3, at a molar ratio of 1:5, in a buffer of 10 mM HEPES pH 7.5 and 150 mM NaCl at 4 °C for 12 h. The complex was concentrated to 9 mg mL⁻¹ and then crystallized by the microbatch-under-oil method at 291 K. The successful crystal growth condition was 0.1 M sodium citrate (pH 5.1) and 4% (w/v) polyethylene glycol 6000. Crystals were cryoprotected with 20% glycerol, 0.1 M sodium citrate (pH 5.1) and 4% (w/v) polyethylene glycol 6000 and flash-frozen in liquid nitrogen. Data were collected at the Shanghai Synchrotron Radiation Facility (SSRF) beamline BL19U1 at 100 K using an ADSC Q315r detector with a wavelength of 0.97923 Å. The crystal belonged to space group P6₁ with unit cell dimensions a = b = 122.3 Å and c = 289.8 Å. Diffraction data were processed with HKL3000 (version 721.3, HKL Research, Inc., Charlottesville, VA, USA) [44]. The complex structure was solved by molecular replacement, using the structure of PEDV Mpro (PDB ID 5GWZ) [27] as a search model, with the PHASER program [39] from the CCP4 package [40]. Model building and refinement were performed using PHENIX (version 1.14) [41] and COOT (version 0.8.9) [42]. The Rwork and Rfree of the final model were 19.21% and 24.14%, respectively.

Enzyme Activity and Inhibition Assays

Enzymatic assays were carried out as previously reported [15,28,29]. A fluorogenic substrate of PDCoV Mpro, MCA-AVLQ↓SGFR-Lys(Dnp)-Lys-NH₂ (>95% purity, GL Biochem Shanghai Ltd., Shanghai, China), was used to assess enzyme activity by measuring the fluorescence intensity with excitation and emission wavelengths of 320 nm and 405 nm, respectively. The assay was performed at 30 °C in a buffer consisting of 50 mM Tris-HCl (pH 7.3) and 1 mM EDTA. The Km and kcat of PDCoV Mpro and the Ki and k3 of N3 were determined according to the methods used in our previous work [15,29]. The values of Ki and k3 were obtained following the addition of PDCoV Mpro. The enzyme and substrate concentrations were set at 2 µM and 50 µM, respectively. The inhibitor concentration was varied across seven different concentrations (6-24 µM). Data were analyzed with the program GraphPad Prism (version 5.0, GraphPad, San Diego, CA, USA). The enzymatic assays used to test M14 and M25 were similar to that used to test N3.

Overall Structure

We cocrystallized PDCoV Mpro with a Michael acceptor inhibitor, named N3, and determined the structure of the complex at 2.60 Å resolution (Table 1). The crystal structure contained six Mpro molecules per asymmetric unit. In the crystals, two neighboring molecules, protomer A and protomer B, formed a typical homodimer. Each protomer contains three domains: domain I (residues 1-97), domain II (residues 98-186) and domain III (residues 200-304). Domains I and II each have a chymotrypsin-like fold, and domain III is composed of five α-helices and contributes to the formation of the homodimer (Figure 1A). The substrate-binding pocket, which contains a catalytic dyad (His-41 and Cys-144), is located in the cleft between domains I and II (Figure 1A). The superimposition of Mpros from four different CoV genera [15,21-29] shows that PDCoV Mpro shares a similar overall structure and backbone with the other CoV Mpros (Figure 1B).
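Superpositions such as that in Figure 1B are quantified by the Cα RMSD after an optimal least-squares fit, which is presumably how the deviations reported below were obtained. A minimal numpy sketch of the standard Kabsch superposition, assuming two equal-length arrays of matched Cα coordinates; the coordinates in the example are illustrative:

    import numpy as np

    def kabsch_rmsd(P, Q):
        """C-alpha RMSD after optimal superposition of matched coordinate sets P, Q (N x 3)."""
        # Center both sets on their centroids
        P = P - P.mean(axis=0)
        Q = Q - Q.mean(axis=0)
        # Optimal rotation via SVD of the covariance matrix (Kabsch algorithm)
        U, _, Vt = np.linalg.svd(P.T @ Q)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        diff = (P @ R.T) - Q
        return np.sqrt((diff ** 2).sum() / len(P))

    # Illustrative 4-atom example; real input would be matched C-alpha coordinates
    # extracted from two Mpro structures after sequence alignment.
    P = np.array([[0.0, 0, 0], [1.5, 0, 0], [3.0, 0, 0], [3.0, 1.5, 0]])
    Q = P + np.random.normal(scale=0.2, size=P.shape)  # perturbed copy of P
    print(f"RMSD = {kabsch_rmsd(P, Q):.2f} A")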
Domain I (residues 1-97) and domain II (residues 98-186) of M pro from PDCoV are well conserved, with Cα root-mean-square deviations (RMSDs) of 1.4-1.7 Å, 1.3-1.4 Å and 1.1 Å in comparison with those of alpha-, beta-, and gamma-CoV, respectively. Structural overlay of the M pro s from the four CoV genera shows that domain III (residues 200-304) of PDCoV has a similar orientation to that of the other CoV M pro s. The Cα RMSDs between different CoVs and PDCoV are summarized in Table 2. To provide more insight into the properties of PDCoV M pro , we analyzed the substrate-binding pocket between domain I and domain II. The S1 binding pocket of PDCoV M pro is composed of residues Phe-139, His-162, Glu-165, and His-171 and the backbones of other amino acids, such as Leu-140, Asn-141, His-163 and Ile-164. Sequence analysis showed that the M pro cleavage sites at the P1 position in the identified PDCoV were all glutamines, indicating that the S1 substrate-binding pocket of PDCoV M pro has an extremely strong preference for glutamine residues. In our previously reported structure of the complex between a SARS-CoV M pro H41A mutant and an 11-peptidyl substrate, the Nε2 atom of His-163 and the main chain carbonyl oxygen of Phe-140 in the S1 binding pocket form hydrogen bonds with the glutamine at the P1 position [28]. Structural superposition of the S1 binding pocket in M pro s from the CoVs of the four genera showed that the carbonyl oxygen atom of Phe-139, the imidazole ring NH of His-162 and the carbonyl oxygen atoms of the residues at position 163 in PDCoV M pro , PEDV M pro , SARS-CoV-2 M pro , SARS-CoV M pro and IBV M pro are extremely conserved. In addition, we found that the key residues that form the S1 binding pocket, His-171 and Glu-165, also share a similar structure (Figure 2A). For the above reasons, we concluded that the Deltacoronavirus PDCoV M pro shares a conserved S1 binding pocket with M pro s from the other three genera. The evolutionary conservation of amino acids plays a crucial role in drug design. The residues that form the S1′ pocket interact with the P1′ residue of the substrate via van der Waals interactions [28]. The backbone atoms of the residues that form the S1′ pocket of PDCoV M pro are similar to the corresponding sequences in the other three genera. Sequence alignment showed that the residues at position 25 of M pro from PDCoV, PEDV, SARS-CoV-2, SARS-CoV and IBV differ from the consensus of the CoV M pro s from the four genera; these residues are Thr, Met, Asn and Ser, respectively (Figure 2B,D). The Peptidomimetic Inhibitor N3 Efficiently Inhibits PDCoV M pro We determined the K m and k cat of PDCoV M pro to be 56.6 ± 1.9 µM and 0.030 ± 0.009 s−1 , respectively (Table 3). This K m value is close to that of HCoV-NL63 M pro (50.8 ± 3.4 µM) and TGEV M pro (61 ± 5 µM), lower than that of mouse hepatitis virus A59 (MHV-A59) M pro (77 ± 5 µM), HCoV-HKU1 M pro (83.2 ± 13.3 µM), SARS-CoV M pro (129 ± 7 µM) and IBV M pro (139 ± 15 µM), and higher than that of HCoV-229E M pro (29.8 ± 0.9 µM) and FIPV M pro (13.5 ± 1.8 µM) [15,26,29] (Table 3).
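For reference, the parameters quoted here and in the inhibition assays below follow the standard treatment of substrate turnover and two-step irreversible inhibition; a minimal sketch in LaTeX (this is the textbook scheme for Michael acceptor inhibitors, stated as an assumption rather than reproduced from the paper):

v = \frac{k_{\mathrm{cat}}\,[E]_0\,[S]}{K_m + [S]},
\qquad
E + I \;\underset{K_i}{\rightleftharpoons}\; E\!\cdot\!I \;\xrightarrow{\;k_3\;}\; E\text{--}I

Under this scheme, K_i measures reversible recognition of the inhibitor, k_3 the rate of the covalent inactivation step, and the ratio k_3/K_i the overall second-order inactivation efficiency used to compare inhibitors.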
Structural analysis showed that the substrate-binding pocket of PDCoV M pro shares several features in common with M pro s from CoVs in the other three genera, especially the S1, S2 and S4 subsites. The key residues at these sites are almost completely conserved (Figure 2). N3 is a peptidomimetic inhibitor designed against various M pro s, such as those from SARS-CoV, HCoV-229E and FIPV [15]. Therefore, we deduced that the Michael acceptor and peptidomimetic inhibitor N3 may inhibit PDCoV M pro . An enzymatic assay showed that N3 inactivated PDCoV M pro . The calculated K i and k 3 were 11.98 ± 0.13 µM and 72.91 ± 7.05 (10−3 s−1 ), respectively. The k 3 is approximately 23-fold larger than that against SARS-CoV M pro , which indicates that N3 inactivates PDCoV M pro faster than it does SARS-CoV M pro . Binding of N3 to PDCoV M pro In the crystal structures, N3 is bound to each protomer of the M pro dimer. We thus only discuss the binding mode in one of the protomers. The inhibitor is located in the substrate-binding pocket, which adopts the canonical conformation seen in other M pro -N3 complex structures. As an irreversible inhibitor, the Cβ atom of the vinyl group on N3 is bound to the Sγ atom of Cys-144 through a 1.8 Å covalent bond (Figure 3A,B). The lactam ring of the glutamine analog at the P1 position of N3 inserts into the S1 pocket and forms 3.2 Å, 2.5 Å and 2.9 Å hydrogen bonds with the carbonyl oxygen of Phe-139, the imidazole ring NH of His-162 and the Oε1 atom of Glu-165, respectively. The side chain of Leu at the P2 position extends into the S2 pocket and participates in hydrophobic interactions with the hydrophobic amino acids Trp, Ile and Phe. The valine side chain of N3 at the P3 position is exposed to solvent. The alanine residue at the P4 position inserts into a pocket composed of the residues Pro-183 and Tyr-184, leading to a hydrophobic interaction among these residues. The isoxazole at the P5 position, Gln-167 and Ile-190 form a "sandwich structure" through van der Waals interactions. The P2 and P4 sites insert well into the S2 and S4 subsites. Moreover, the backbone NH of Cys-144, the carbonyl oxygen atoms of His-163 and Glu-165, the Oε1 atom of Glu-188 and the NH group of Glu-165 form hydrogen bonds with the inhibitor N3, which ensures tight binding between the M pro and the inhibitor, as shown in Figure 3. We concluded that peptidomimetic inhibitors carrying the Michael acceptor warhead N3 are effective against the M pro of PDCoV. The P1′ Position May Play an Important Role in the Interaction between PDCoV M pro and Inhibitors Previously, we designed 16 N3 derivatives that target PEDV M pro (the detailed structures and their chemical synthesis were described in our previous paper) [27]. Next, we evaluated the inhibitory activity of these compounds against PDCoV M pro . Among these compounds, M14 and M25 exhibited stronger inhibition than N3 (Table 4).
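The k 3 /K i comparisons that follow can be reproduced directly from such fitted parameters; a quick check with the N3 values just quoted (the unit bookkeeping is an assumption on our part, since the text reports the ratios as bare numbers):

# Second-order inactivation efficiency k3/Ki for the two-step scheme above.
# N3 against PDCoV Mpro: k3 = 72.91 x 10^-3 s^-1, Ki = 11.98 uM (from the text).
k3 = 72.91e-3          # s^-1
Ki = 11.98e-6          # mol/L
print(k3 / Ki)         # ~6.1e3 M^-1 s^-1, i.e. ~6.1 mM^-1 s^-1, matching
                       # the k3/Ki of 6.1 quoted for N3 below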
The k 3 /K i values of M14 and M25 were 13.8 and 9.9, respectively, indicating that they have much better inhibitory activity than N3, which had a k 3 /K i of 6.1. The detailed inhibition parameters of N3, M14 and M25 are listed in Table 5. Interestingly, we found that the three compounds shared the same side groups at all positions except the P1′ position (Table 5). The benzyl group at the P1′ position of N3 interacts with Ser-25 and Leu-27 through van der Waals forces. Therefore, we suggest that rational design of the P1′ position could dramatically enhance the interaction between the substrate-binding pocket and the inhibitor. Future modification of peptidomimetic inhibitors at the P1′ position has the potential to control acute gastroenteritis in pigs infected with PDCoV. Table 4. Evaluation of the inhibitory activity of compounds targeting PDCoV M pro . The inhibition ratio (Ir) is defined as the percent inactivation of the initial enzymatic activity of PDCoV M pro . a Percentage inhibitory activity: + + + + +, >80%; + + + +, 70%−80%; + + +, 50%−70%; + +, 30%−50%; +, <30%. b No inhibition was observed. c The detailed structures and chemical synthesis of the compounds were described in reference [27].
Table 5 (partial, values for M25): K i = 8.80 ± 0.15 µM, k 3 = 86.8 ± 5.1 (10−3 s−1 ), k 3 /K i = 9.9. Discussion The M pro is an ideal target for drug design against CoVs. Since IBV, the first CoV to be described, was discovered in 1937, four genera of CoVs have been identified. Currently, we have a thorough understanding of the M pro structures of alpha-, beta- and gamma-CoVs; however, we know little about the Deltacoronavirus M pro . In this paper, we present the first structure of the M pro of a newly emerged Deltacoronavirus (PDCoV) in complex with a Michael acceptor inhibitor. As observed in the previously reported M pro structures, PDCoV M pro presented a functional homodimer and a conserved His-Cys dyad. Furthermore, a detailed comparison of the M pro structures showed that PDCoV M pro shares a similar overall structure and a relatively conserved substrate-binding pocket with the M pro s of the other three CoV genera, especially the key residues located at the S1, S4, and S2 subsites (Figure 4). Meanwhile, the irreversible inhibitor N3 in our structure, designed based on the structure of SARS-CoV M pro in complex with its substrate, could inactivate PDCoV and multiple other CoV M pro s. These results also prove the conservation of the overall structures and substrate-binding pockets of M pro s. As demonstrated, emerging zoonotic viruses such
as SARS-CoV-2, SARS-CoV and MERS-CoV are a potential threat to public health because of the existing viral variants. As an important pathogen of piglets, the nonhuman animal virus PDCoV poses the risk of cross-species transmission to humans as well [30,31]. Therefore, the conserved CoV M pro we identified could be considered a drug target in the event of genetic changes during human-to-human or animal-to-human transmission of CoVs. (Figure caption, coloring as in Figure 2B: the background is PDCoV M pro . Red: identical residues among all ten CoV M pro s; orange: substituted in two CoV M pro s. The S1, S2, S4, and S1′ pockets and the residues that form the substrate-binding pocket are labeled. N3 is shown in green.) Interestingly, the structure and conformation of M pro s present a stable characteristic evolution and an obvious species correlation. We superposed the determined structures of M pro s from CoVs in the four different genera and found that some loops, especially the region from residues 41-51, exhibit corresponding features (Figure 5). For example, this loop in Alphacoronavirus and Betacoronavirus forms a 3₁₀ helix, while in Gammacoronavirus IBV it forms a short loop. Surprisingly, the loop from residues 41-51 of PDCoV M pro adopts a conformation similar to that of IBV M pro , which supports the idea that Deltacoronavirus may be closely related to Gammacoronavirus [6]. Since the loop from residues 41-51 is associated with the outer wall of the S2 pocket, our structural information will support reasonable broad-spectrum peptidomimetic drug design based on the evolutionary conservation of M pro s from CoVs.
The peptidomimetic inhibitor N3, which carries a Michael acceptor warhead, was also effective against the M pro of PDCoV, the newly emerging Deltacoronavirus. Peptidomimetic compounds are attractive inhibitors for the development of novel antiviral therapies. These compounds target proteases that are essential for viral replication. For example, boceprevir, telaprevir and simeprevir are peptidomimetic drugs that act as viral NS3/4A serine protease inhibitors of hepatitis C virus (HCV) [43][44][45], while saquinavir, indinavir, nelfinavir, ritonavir, and amprenavir are clinically approved human immunodeficiency virus (HIV) protease inhibitors, which have a similar molecular structure to the protease substrate [46,47]. Furthermore, these peptidomimetic drugs were derived from lead compounds identified based on viral protease structures. For instance, boceprevir, which is a tripeptide derivative that forms a covalent bond with Ser-139 to inactivate the NS3/4A protease [45], was designed based on an undecapeptide alpha-ketoamide inhibitor identified from compound libraries. Hence, after multiple rounds of modification, the inhibitor N3 is a currently available compound for broad-spectrum drug design. It is worth noting that P1′ may be a key position for compound modification in broad-spectrum drug design because of the side-chain variability of the amino acid at position 25, which directly participates in the interaction with the inhibitor. In our study, both N3 derivatives (M14 and M25) with improved inhibitory activity against PDCoV M pro presented a unique group at P1′. Therefore, it is necessary to balance the relatively conserved substrate-binding pockets during rational drug design. Furthermore, we found that M25 exhibited potent inhibition of both the PDCoV and PEDV M pro proteins [27]. The two main emerging swine CoVs, PDCoV and PEDV, account for the majority of lethal watery diarrhea in neonatal pigs in the past decade. More recently, epidemiological evidence shows that the rate of PDCoV coinfection with PEDV has increased up to 51% in China [32,48]. Therefore, M25 could be further developed to combat both PDCoV and PEDV infection in the swine industry.
In summary, the structure of PDCoV M pro in complex with the Michael acceptor inhibitor N3 provides a basis for the inactivation of Deltacoronavirus viral proteases. The structural comparison of different viral enzymes identified a conserved substrate-binding pocket in all CoV M pro s; this pocket is a target for the development of broad-spectrum antivirals against all existing and emerging CoVs. Data Availability Statement: Atomic coordinates for the crystal structure of PDCoV M pro in complex with N3 can be accessed using PDB code 7WKU in the RCSB Protein Data Bank (https://doi.org/10.2210/pdb7WKU/pdb, accessed on 25 January 2022). Authors will release the atomic coordinates and experimental data upon article publication.
2022-03-02T16:15:46.039Z
2022-02-27T00:00:00.000
{ "year": 2022, "sha1": "dbbbf120feeeb1c145a7281ce01773302e9edd2d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4915/14/3/486/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e75d0602db116bfb077af4a0c6167f141f75067a", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
157807318
pes2o/s2orc
v3-fos-license
Total Factor Productivity and Efficiency in OECD Countries: Possibility of Convergence in 2000-2012 Period Assoc. Prof. Dr., Sakarya University, Faculty of Political Sciences, Department of Economics, Sakarya, Turkiye, kabasakal@sakarya.edu.tr; Assist. Prof. Dr., Sakarya University, Faculty of Political Sciences, Department of Economics, Sakarya, Turkiye, agulmez@sakarya.edu.tr Abstract: This study discusses efficiency, productivity, and the existence of convergence in 34 OECD countries between 2000 and 2012. Physical capital per worker and human capital per worker are used as inputs to determine total factor productivity and efficiency scores in data envelopment analysis, while GDP per worker is used as the output. Efficiency scores observed in CCR and BCC models indicate that Latin American and Eastern European countries are more efficient. Analysis of productivity using the Malmquist Index indicates that productivity growth was positive but less than 1 percent at the end of the period. Panel regression estimation is used for the standard deviations (sigma convergence) and to determine beta convergence. If GDP per worker is y_i, there is convergence between log(y_i/y_{i,t-T}) and the starting values log(y_{i,t-T}); however, the convergence effect of the TFP values could not be determined. Similar results are reported for sigma convergence. Introduction The Organization for Economic Co-operation and Development (OECD) was established in 1961 and has 34 members. The OECD budget is approximately $357 million. During the study period, total real gross domestic product (GDP) rose by 22 percent, reaching $39 trillion. Capital increased by 8 percent, reaching $7.8 trillion, while labor power increased 11 percent, reaching 610 million people within the same period (https://data.oecd.org/). GDP growth was greater than capital and labor power growth. There was significant total factor productivity growth in OECD countries. Income differentiation among OECD countries, which are the leading economies of the world, is increasing (http://oe.cd/idd). The ratio of the top 10 percent to the bottom 10 percent is estimated to be 9.5 in 2012; the relevant figure was 7 in 1980. The difference between the rich and the poor requires examining convergence and divergence among the relevant countries. The growth dynamics of countries, as well as convergence among those countries, have long been studied in various works grounded in neoclassical growth theories. A series of studies indicate convergence among OECD countries. The majority of such studies use productivity per worker and total factor productivity. The aim of this study is to analyze convergence in OECD countries over a reasonable period of time at the beginning of the new millennium. The period between 2000 and 2012 is chosen intentionally in order to examine the results of the financial crisis of 2009. Another reason for the short study period is our effort to include DEA, Malmquist, and convergence analyses together. Including more data sets would have expanded the DEA and TFP tables, which in turn would exceed the limits of this study. The study period has given us the chance to employ all three analyses. The set of values in the study are obtained from OECD StatExtracts. Real physical capital per worker, real human capital per worker, and real GDP per worker are calculated based on 2005 U.S. dollars.
The literature and methodology form the first part of the study, while the results of the DEA and Malmquist index (MI) analysis of TFP are given in the following chapter. The conclusion and assessment make up the last part of this study. Literature In recent years there have been many studies that examine convergence and economic growth. Barro and Sala-i-Martin (1991, 1992) and Färe et al. (2006) are the major contributors to the relevant literature. According to Krüger (2003), empirical growth investigations over the last ten years have been pursued in at least three different ways. First, the largest strand of literature applies linear regressions to explain the growth rate of real GDP per capita by a great number of different growth-driving factors, while at the same time attempting to estimate the rate of convergence of countries to their steady-state positions. Second, some studies provide new estimates of the rate of total factor productivity growth on an economywide scale as measures of technological progress. The third basic approach of empirical growth research studies the dynamics of the entire distribution of real GDP per capita or per worker. The second significant issue forms the focus of interest of a range of studies that aim to find whether the economies of two such country groups converge with one another (Barro and Sala-i-Martin 1991, 1992; Mankiw et al. 1992). In their study regarding growth and productivity in OECD countries, Dowrick and Nguyen (1989) analyzed the hypothesis that GDP levels and total factor productivity in OECD countries converged in the postwar period. Their findings suggest that convergence of income levels had been weak since 1973; however, they argue that income convergence is contingent upon sample selection and that TFP catch-up is the prevailing trend. In the study concerning the EU countries conducted by Färe et al. (2006), growth in productivity and the phenomenon of convergence were investigated for the period between 1965 and 1998. Efficiency and productivity are examined with human and physical capital as inputs. The existence of convergence among these countries is also investigated. Additionally, the relevant countries are divided into subgroups and the existence of a single "convergence club" is investigated. Margaritis et al. (2007) conducted a study similar to that of Färe et al. (2006) and developed it further. In the productivity and convergence analysis of OECD countries between 1979 and 2002 by Margaritis et al. (2007), which is one of the most comprehensive studies concerning productivity and convergence among OECD countries, various tests are applied using time series methods as well as σ and β calculation techniques. Afterwards, the analysis is extended back to 1960 by lengthening the series, confirming convergence for the OECD countries. In another study conducted among OECD countries using the β convergence test, by Maudos et al. (2000), labor productivity convergence was investigated. Following a range of analyses, the existence of convergence in labor productivity across the relevant countries is observed; however, the level of convergence was low. In another study, in which a different methodology is used to analyze growth and convergence in OECD countries (Yörük and Zaim 2005), factor productivities measured using the MI and the Malmquist-Luenberger index are examined.
Furthermore, taking into consideration Lee's (2009) study, in which he examines the time series properties of long-run productivity convergence in a sample of 25 countries between 1975 and 2004 by performing panel unit-root procedures, similar studies can be found that investigate the 34 OECD countries in terms of efficiency and convergence. Chen and Yu (2014) included not only OECD countries but some other country categories as well in order to increase the number of countries in the study. They studied the total factor productivity of 99 countries. Their study examines the capital-using/labor-saving, capital-using/energy-saving, and energy-using/labor-saving tendencies of the countries. Their findings suggest that most of the countries benefit from technological innovation. Madsen (2007) examines the imports of technology and total factor productivity in OECD countries between 1870 and 2004. The study focuses on the transmission of knowledge through trading among the relevant countries over a period of 135 years. His findings show that patent flows and knowledge spillovers resulting from trade among countries complicate convergence. Rivera-Batiz and Romer's (1991) study on the endogenous growth approach states that R&D knowledge obtained as a result of trade between developed nations spills over between those countries through the trade channel and consequently affects TFP and increases productivity. Some studies on OECD countries discuss sectoral growth and convergence analyses. The convergence analysis of India by Kumar and Managi (2012), the convergence analysis of the pulp and paper industries of OECD countries between 1991 and 2000 by Hseu and Shang (2005), Sondermann (2014), Margaritis et al. (2007), Kumar and Russell (2002), Maudos et al. (2000), and the productivity growth accounting approach to the ranking of developing and developed nations by Raab and Feroz (2007) are several studies covering the same subject. There are also studies that investigate convergence on an industrial basis. Shestalova's (2002) study examines the productivity of the manufacturing sectors of eleven OECD countries between 1970 and 1990; the findings of the study suggest that only the chemical industry showed strong convergence. The relationship between technology transfer and convergence in developed and developing economies is a widely discussed topic as well. If sufficient technology transfer is not provided, it might be difficult for developing countries to catch up with developed economies. With respect to technological catch-up and capital deepening, it is essential to emphasize the thesis of Kumar and Russell (2002) that technological progress is not unbiased; in other words, it can produce divergence rather than convergence. Debreu (1951), Koopmans (1951), and Farrell (1957) are prominent scholars who applied the analysis of efficiency in the economic literature, and a great deal of research concerning efficiency measurement has been conducted following their work. The parametric and non-parametric methods in these studies, in which performance is evaluated with regard to economic efficiency (EE), technical efficiency (TE), and allocative efficiency (AE), have common research applications. Methodology The parametric approach includes deterministic and stochastic models.
In non-parametric analyses, no explicit functional form is required to describe the efficient frontier or enveloping surface, as in the study by Charnes et al. (1978). Structure of DEA and Efficiency Efficiency can be defined as the effort to obtain the highest output by choosing the method that uses the input composition in the most productive way. The definition implies that the quantity of one output cannot be increased without reducing another output by reallocating the inputs. According to Koopmans's (1951) description, the production frontier is described by f(x^t, y^t) = 0, so f(x^t, y^t) < 0 corresponds to input-output combinations that are technically inefficient, while f(x^t, y^t) > 0 corresponds to input-output combinations that cannot be produced with the given production technology (Kumbhakar and Lovell 2000). Koopmans's description has two forms, one input oriented and the other output oriented. i. The input-oriented TE measure is TE_I(x, y) = min{θ : (θx, y) ∈ S}. ii. The output-oriented TE measure is TE_O(x, y) = [max{φ : (x, φy) ∈ S}]^{-1}. Given that a decision making unit (DMU) produces outputs y_i (i = 1, 2, …, t) from inputs x_k (k = 1, 2, …, m), the efficiency of DMU p can be written, with weights (v_i, i = 1, 2, …, t; w_k, k = 1, 2, …, m) applied to the variables, as E_p = (Σ_i v_i y_{ip}) / (Σ_k w_k x_{kp}). Here "v" and "w" account for the weights on the outputs and inputs in the equation. The model provides us with an efficiency value for the p-th DMU and the set of weights required to obtain that value. The fractional program utilizes the TFP ratio. In other words, DEA should be considered a conceptual model, and the linear model is a practical method for efficiency calculations. In DEA, weights are determined for each input and output of the DMUs. DEA includes the inputs (x_k) and outputs (y_i) in the equation as mentioned above and chooses the weights that maximize the performance of DMU p relative to the performances of the other units: max E_p = (Σ_i v_i y_{ip}) / (Σ_k w_k x_{kp}) subject to (Σ_i v_i y_{ij}) / (Σ_k w_k x_{kj}) ≤ 1 for every DMU j, with v_i, w_k ≥ 0. The solution of this non-parametric efficiency measurement model in fractional programming form is converted into a linear programming model, which is relatively easier to solve (Charnes et al. 1978, 1979; Banker et al. 1984). Malmquist Index The MI is one of the indices investigating change in production (Malmquist 1953). Adapted to productivity measurement by Caves et al. (1982), the index comprises distance functions that represent multi-output and multi-input technologies based on input and output quantities. In short, the MI, referred to as the CCD index after these authors' names, indexes quantities in terms of distance functions. Linear programming methods from DEA studies (Charnes et al. 1978) can be used to measure productivity performance. The solution of the relevant problem coincides with Farrell's (1957) TE measurement. A DEA estimation method for the Malmquist productivity index is presented in Cooper et al. (2011), which combines the studies of Färe et al. (1994a, 1994b), Farrell (1957), Charnes et al. (1978), and Caves et al. (1982). According to Färe et al. (1994b), with S^t denoting the production technology in each period t = 1, …, T, the output-based MI, which indicates productivity change, models the conversion of inputs into outputs. The output distance function defined in period t (Färe 1988) is D_o^t(x^t, y^t) = inf{θ : (x^t, y^t/θ) ∈ S^t}; for inputs x^t, it gives the reciprocal of the maximal proportional expansion of the output vector y^t. This function is homogeneous of degree one and its value is ≤ 1. If the technology is on the frontier, D_o^t(x^t, y^t) = 1.
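To make the linear programming form concrete, the following is a minimal sketch of the output-oriented, constant-returns envelopment program corresponding to the CCR model above (the function name and data layout are illustrative, not the authors' code; assumes numpy and scipy):

import numpy as np
from scipy.optimize import linprog

def ccr_output_score(X, Y, p):
    # X: (m, n) input matrix, Y: (s, n) output matrix, columns are DMUs.
    # Solve: max phi  s.t.  X @ lam <= X[:, p],  Y @ lam >= phi * Y[:, p],
    # lam >= 0. Returns phi >= 1; efficiency is often reported as 1/phi.
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[-1.0, np.zeros(n)]               # minimize -phi
    A_in = np.hstack([np.zeros((m, 1)), X])    # input constraints
    A_out = np.hstack([Y[:, [p]], -Y])         # phi*y_p - Y @ lam <= 0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[X[:, p], np.zeros(s)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

The BCC (variable returns to scale) variant adds the convexity constraint that the intensity weights lam sum to one.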
According to Farrell (1957), a distance function value of one indicates TE. A similar description can be made for period t+1 as well. The output-based, CCD-type Malmquist productivity change index is the geometric mean of the two period indices: M_o(x^{t+1}, y^{t+1}, x^t, y^t) = [(D_o^t(x^{t+1}, y^{t+1}) / D_o^t(x^t, y^t)) × (D_o^{t+1}(x^{t+1}, y^{t+1}) / D_o^{t+1}(x^t, y^t))]^{1/2}. This index can be rewritten as a product of two different parts: M_o = (D_o^{t+1}(x^{t+1}, y^{t+1}) / D_o^t(x^t, y^t)) × [(D_o^t(x^{t+1}, y^{t+1}) / D_o^{t+1}(x^{t+1}, y^{t+1})) × (D_o^t(x^t, y^t) / D_o^{t+1}(x^t, y^t))]^{1/2}. The part of the index outside the bracket indicates the proportional efficiency change between the two periods, and the part inside the bracket indicates technical change. This MI equation can thus be written briefly in two parts, as MI = efficiency change × technical change. In this study, for each period t (t = 2000, …, 2012) and for each country k (k = 1, …, 34; all OECD countries, including recently joined ones), two inputs and one output are used. Convergence The lack of convergence across countries is an unusual finding, as it suggests that cross-country income inequality tends to increase and that the countries projected to be richer a few decades from now are the same countries that are rich today. This finding is not compatible with widely accepted neoclassical growth theories (Solow 1956; Swan 1956; Cass 1965; Koopmans 1965). The idea behind the abovementioned conclusion is the following: the assumption of diminishing returns to capital implicit in the neoclassical production function predicts that the rate of return to capital is quite large when the stock of capital is small, and vice versa. If countries only differ in terms of their initial levels of capital, then, according to the neoclassical growth model, which indicates cross-country beta-convergence, countries with little capital will be poor and will grow faster than rich countries. This anticipation is not compatible with the endogenous growth model (Romer 1986). Such models depend on the existence of externalities, increasing returns, and the lack of inputs that cannot be accumulated. The fundamental point of such models is the lack of diminishing returns to capital, so these models do not exhibit similarities with the neoclassical model in terms of convergence. When analyzing convergence, the studies by Barro and Sala-i-Martin (1991), Mankiw et al. (1992), Barro and Sala-i-Martin (1995), and Sala-i-Martin (1996a) are usually referred to by economists. The concepts of β-convergence and σ-convergence are prevalent in the classical economic growth literature. If poor economies tend to grow faster than wealthy ones, then there is β-convergence. In other words, a negative relationship between the growth rate of income per capita and the initial level of income suggests β-convergence in a cross-section of economies. That concept of convergence is usually mistaken for σ-convergence, as the dispersion of real per capita income across groups of economies tends to fall over time. These two concepts examine conceptually different phenomena: σ-convergence studies the distribution of income over time, and β-convergence studies the mobility of income within the same distribution. Despite the differences, these two convergences are related. Suppose that β-convergence holds for a group of regions i, where i = 1, …, N. In a given time period, which likely corresponds to annual data, the real per capita income for economy i can be approximated by log(y_{i,t}/y_{i,t-1}) = α − β·log(y_{i,t-1}) + u_{i,t}, where α and β are constants, with 0 < β < 1, and u_{i,t} is a disturbance term. The condition β > 0 suggests β-convergence, since the annual growth rate log(y_{i,t}/y_{i,t-1}) is then inversely related to log(y_{i,t-1}). A higher coefficient β indicates a greater tendency toward convergence. The disturbance term captures temporary shocks to the production function, the saving rate, and so on.
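As a sketch of how this regression can be estimated on a cross-section of economies (the toy numbers and variable names are illustrative only, not the study's data; assumes numpy):

import numpy as np

# y0: initial GDP per worker; yT: terminal GDP per worker (toy values).
y0 = np.array([20_000.0, 35_000.0, 50_000.0, 65_000.0])
yT = np.array([31_000.0, 48_000.0, 63_000.0, 78_000.0])

growth = np.log(yT / y0)     # log growth over the period
x = np.log(y0)               # log initial income

# OLS of growth on log initial income: growth = a + b*log(y0) + u.
# A negative slope b (equivalently beta = -b > 0) signals beta-convergence.
X = np.c_[np.ones_like(x), x]
a, b = np.linalg.lstsq(X, growth, rcond=None)[0]
print(f"slope on log(y0): {b:.3f} -> implied beta: {-b:.3f}")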
We assume that u_{i,t} has a mean of zero and the same variance, σ_u², for all economies, and that it is independent over time and across economies. Taking cross-sectional variances in the equation above gives the dispersion dynamics σ_t² = (1 − β)²·σ_{t-1}² + σ_u². If β-convergence holds (β > 0), then σ_t² monotonically approaches its steady-state value, σ²*. The key point is that σ_t² can rise or decline towards the steady state depending on whether the initial value of σ² is above or below the steady state. In particular, σ could rise along the transition even if β > 0. In other words, β-convergence is a necessary but not a sufficient condition for σ-convergence. Departing from the works of Barro and Sala-i-Martin (1991), Mankiw et al. (1992), and Sala-i-Martin (1996b), we can distinguish conditional from absolute convergence. We say that a set of economies exhibits conditional β-convergence if the partial correlation between growth and initial income is negative. In other words, if, in a cross-sectional regression of growth on initial income and a set of conditioning variables, the coefficient on initial income is negative, the economies in the data set exhibit conditional β-convergence. If the coefficient on initial income is negative in a univariate regression, we conclude that the data set exhibits absolute convergence. Empirical Evidence In this study, the efficiency and productivity variables and the convergence of OECD countries between 2000 and 2012 are investigated. Although some countries joined the OECD later, they are still included in the calculations as if they were permanent members. While physical capital per worker and human capital per worker are used as inputs, GDP per worker is used as the output. All the values are calculated in real 2005 U.S. dollars. For DEA efficiency, the output-oriented Charnes, Cooper, and Rhodes (1978) (CCR) model with constant returns to scale and the output-oriented Banker, Charnes, and Cooper (1984) (BCC) model with variable returns to scale are applied. The output-oriented, constant returns to scale method is applied to the MI (cf. Kumar and Managi 2012). Technical Efficiency Scores Efficiency scores of OECD countries over thirteen years (2000-2012) according to the CCR model are shown in Table 1. Only the input-oriented, constant returns to scale DEA technique is used in the model. Chile is the only country among all OECD countries with full technical efficiency throughout the years. The next most efficient country is Poland. Poland has periodical full TE after 2003, except in 2009. Turkey has a full efficiency score between the years 2001-2003 and 2008-2009, while Mexico has a full efficiency score between 2000 and 2001 and in 2012. Greece, the economy of which has been experiencing a huge crisis lately, had full efficiency in 2012. However, Denmark, Iceland, and particularly Switzerland have the lowest performances over the years. Negative growth occurred in all countries, apart from Poland, at least once (especially in 2009) within the 13 years. When we examine the relationship between growth and efficiency, the annual mean growth of Switzerland, which has the lowest efficiency, is 1.84 percent, while Denmark's is 0.8 percent and Iceland's is 2.4 percent. Similarly, the annual growth rate of Chile, with the highest efficiency, is 4.46 percent, while Poland's is 3.83 percent and Turkey's is 4.44 percent. Upon examination of these samples, there exists a same-directional tendency between growth and efficiency. The growth rate of the majority of the countries in 2009 is negative.
However, the efficiency scores of those countries in 2009 are not the lowest. The mean efficiency score is the lowest in 2001, when the efficiency score falls below 50 percent (Table 2). Total factor productivity analysis with the Malmquist Index Productivity scores and parameter estimates for the countries are obtained individually from the TFP analysis with the output-oriented MI. Values for TFP change (tfpch), technical change (techch), efficiency change (efch), pure efficiency change (pech), and scale efficiency change (sech) of the given countries are estimated in this analysis. The decomposition of the MI for TFP is implemented as MI = efch × techch = pech × sech × techch. For instance, the MI value of Australia can be calculated as MI_Australia = TFP_Australia = 0.984 × 0.991 = 0.991 × 0.993 × 0.990 = 0.975. Changes in the efficiency values of each OECD country over the 13 years are given in Table 3. TFP of the OECD countries increased by 0.4 percent. Within the same period, the TFP values of 22 countries increased relative to the reference year, while those of 11 countries decreased. As seen in the table, this increase results mainly from sech. Surprisingly, none of the countries' techch values exceeded one. In other words, technical change in those countries over the thirteen years is negative. The average loss in technical change is 1.1 percent. The mean techch and pech values are below 1, whereas the others are above 1. Investigating the productivity increase in the Eurozone from a different point of view, Sondermann (2014) emphasizes that productivity results from regulations in the service sector (regulatory burden), R&D investments, and the employment of highly trained personnel. The TFP increase of around 0.4 percent in OECD countries might result from these factors. Australia is the only country whose scale efficiency is below one. In addition, all the efficiency values of Australia are below one. In the study carried out by Färe et al. (2006), Sweden and Denmark moved from well above average in 1965 to below average by 1998. Ireland showed the most dramatic productivity improvements in the sample. In Krüger's (2003) study, which included OECD countries as well, the universal effect of the productivity deceleration is indicated by the comparison of the sub-periods 1960-1973 and 1973-1990. After 1973, not only TFP but also labor productivity measures decline in the majority of country groups, apart from Asia. In particular, the fact that the deceleration affects the rates of technological progress indicates that all parts of the frontier function at least stagnate after 1973. On the positive side, there is remarkable efficiency catch-up in all country groups after 1973, whereas falling-behind movements were extensive before. The low-level increase, not even one percent, in the TFP of the OECD countries after 2000 coincides with the abovementioned study's findings. Table 4 presents the annual efficiency changes of the relevant countries. From this dynamic analysis, it is seen that tfpch is above one for five years but below one for seven years. In other words, TFP decreased for seven years within the 13-year period in comparison to the reference year. TFP increased by 10.8 percent in the period 2008/2009. In fact, the technical change value for the same period reached the highest value of all, with an increase of 14 percent. We can conclude that the rise of TFP results from technical change.
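A quick numerical check of the decomposition identity above, using the values quoted for Australia (numbers as printed in the text; the text prints 0.990 for the last factor of the triple product, and 0.991 is used here for techch in both products, so small discrepancies are rounding):

# Malmquist decomposition: MI = efch*techch = pech*sech*techch.
efch, techch, pech, sech = 0.984, 0.991, 0.991, 0.993

print(round(efch * techch, 3))          # 0.975
print(round(pech * sech, 3))            # 0.984 -> equals efch
print(round(pech * sech * techch, 3))   # 0.975 via the finer decomposition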
When technical change is considered, it is positive for only three years, and the period 2008/2009 stands out at a remarkably high level. In the table, sech is above one for eight years. It can be seen that the strongest of the efficiency changes is sech. Furthermore, when the mean values are observed, it presents the highest growth, with an average increase of 2.3 percent. In a study supporting the increase in TFP, Danquah et al. (2014) listed the variables increasing TFP for the OECD countries. According to this study, the investment price, consumption share, trade openness, and the labor force are robustly correlated with TFP growth. Figure 2 shows the changes in the TFP variables, and it is observed that fluctuations exist, especially between 2008 and 2011. In the study by Färe et al. (2006), labor and multi-factor productivity improved for most of the countries in their sample. Table 5 presents certain different results regarding the OECD countries. The percentage change in real physical capital per worker (grpc) and its mean percentage change over thirteen years (grpc13) are given in the table. Similarly, grpk indicates the percentage change in real human capital per worker, and ggdp indicates the percentage change of real GDP per worker. The suffix 13 added to these variables indicates the mean percentage change over the thirteen years. The findings in the table indicate that 12 countries incurred losses of physical capital per worker within the thirteen years, while the others increased this ratio. Spain, Portugal, and Greece, the last of which has shown signs of serious economic contraction in recent years, incurred losses of physical capital. However, according to the table, some countries such as Iceland, Japan, and even Germany, England, and the Netherlands are in the same situation. At the same time, the increases in physical capital per worker are the highest in countries like Poland, Hungary, the Czech and Slovak Republics, and Estonia among the former Eastern Bloc countries. The increase in Estonia is particularly high. We can conclude that the Scandinavian countries and Turkey are among the countries that increased their physical capital significantly. Estonia, Chile, and Turkey are the top three countries to have increased their physical capital per worker within the given period. If we recall the table of TE scores, we can see that Chile has had the highest efficiency during the whole period; Turkey and Poland are also among the countries with the highest efficiency. Iceland is, according to the same data, among the countries with the lowest performances. We can conclude that the physical capital increase is reflected in efficiency. In the same table, different results are seen upon examination of human capital. According to the data, only four countries (Luxembourg, Canada, Israel, and Sweden) incurred losses of human capital per worker during the time period. Although this is a low ratio, it is noteworthy. The figure is negative only for Italy and Luxembourg when we examine ggdp among the countries. The relevant figure for Israel is below one (<1). A decrease exists in both grpk and ggdp for Luxembourg, although at a low level. As seen in Table 3, while the mean TFP values of Luxembourg and Canada are below one, TFP shows no change for Israel. Figure 3 indicates the growth (y) of the three countries that increased their physical capital the most and the three that lost human capital.
As a common feature, the 2009 growth rates of five of these six countries, all except Israel, are negative. However, growth in 2001 is negative only for Israel and Turkey. In general, the growth trends of the three countries incurring losses of human capital run below those of the other three countries. Low growth rates might have decreased the attractiveness of these countries. Convergence Convergence is the approach of two economies toward each other in terms of growth. According to technological innovation theory, diffusion is one of the ways that innovation is transformative. Innovation in a developed nation can spread to developing nations. Diffusion thus stimulates economic growth in developing countries, which enables them to converge with developed economies (Abramovitz 1986). Changes in the logarithm of real GDP per worker (dlgdp), TFP (tfpch), and efficiency (efch) are used in the β-convergence tests. The values in the first row of Table 6 indicate that the F statistic for the dlgdp estimation is significant, so the model is significant. On the other hand, the Durbin-Watson value, which tests for autocorrelation, is acceptable. The β-convergence value for dlgdp is significant at the 1 percent level, which indicates convergence. Nevertheless, it is not possible to arrive at a similar conclusion for the dependent variables tfpch (in the second row) and efch (in the last row). The β-convergence values in both estimations are statistically insignificant, and it is hard to say whether these variables exhibit convergence or divergence. The following Figure 4 provides information on σ-convergence. As stated before, σ-convergence concerns income dispersion; however, β-convergence is a necessary condition for such a decline in dispersion. Thus, the cross-section regression analysis above shows β-convergence, yet the same result could not be obtained for the tfpch and efch variables. The figure indicates that the standard deviation of log GDP per worker (SDgdp) follows a declining pattern. The same tendency can be seen in the regression equation and the slope of the graph. On the other hand, the standard deviation of log TFP (SDtfp) does not show a similar tendency. Even though it is not readily apparent, the curve deviates dramatically in 2009 and tends to decline following this period. Linear regression gives slightly positive parameter values. Figure 4. Standard Deviations of the GDP and TFP It is useful to remember Kumar and Russell (2002), who suggest a different thesis regarding convergence. The authors criticized empirical studies of convergence by stating that technological progress was not unbiased in their study, and they emphasized that, on the world-scale production frontier, convergence was found only among developed countries using capital-intensive technologies. That is out of the question for countries with lower incomes. Only developed nations benefit from technological innovations. The study analyzed 57 countries. Since not all OECD countries benefit from the same technological innovations, and they do not have similar income levels, we should not expect to find convergence in our study. Sondermann (2014) obtained some results by using time series in a convergence analysis of Eurozone countries. In his study, he investigated the agriculture, service, and manufacturing sectors and their subdivisions and tested convergence in the relevant sectors.
According to his results, convergence is observed in few sectors, and no average convergence is observed. For example, while convergence is observed in agriculture, transportation, and nonmarket services between 1970 and 1998, no convergence is observed for the total economies. The finding of Färe et al. (2006) on convergence in OECD countries is that cross-section results support convergence in principle, although, when they take advantage of the decomposition of productivity into technical change, capital deepening and catch-up, they find that technical change (especially input-biased technical change) is a source of divergence. Conclusion In our study, the TE scores, productivity, and convergence of the 34 OECD countries between 2000 and 2012 are examined. In determining the efficiency scores with DEA, an output-oriented CCR model is applied. Chile is found to be the only country with full efficiency throughout the period, followed by Poland, Turkey, and Mexico. Denmark, Iceland, and particularly Switzerland have had the lowest performance over the 13 years. The efficiency score is the lowest in 2001. The GDP of Chile increased 4.5 percent over the thirteen years, ranking second among the member countries. After ranking first among the member countries, Estonia experienced severe economic shrinkage in 2008 and 2009. The four countries with the lowest performance experienced fluctuations over the thirteen years and tended to exhibit stagnation towards the end of the study period. The average growth rate of the OECD countries over the thirteen years is 1.87 percent, whereas the average growth rate of Denmark is below one percent. Switzerland is a striking example because the growth rate of the country is around 4 percent at the beginning of the study period but drops to 1.1 percent at the end. TFP, which is calculated as a geometric mean, showed an increase of 0.4 percent. While the TFP values of 22 countries increased in comparison to the reference year, those of 12 countries decreased within the given period. A substantial part of the increase results from scale efficiency change (sech). Furthermore, it is observed that TFP is above one for five years but below one for seven years. In other words, TFP decreased for seven years in comparison to the reference year over the 13 years. In the period 2008/2009, TFP increased by 10.8 percent. The technical change value for the same period reached the highest value of all, with an increase of 14 percent. We can conclude that the high level of TFP results from technical change. In order to investigate the presence of convergence among the OECD countries, it is determined that the standard deviations of the logarithmic TFP values do not indicate a regular decline. In addition, the conditional β value is found to be above zero and significant in the panel regression analyses carried out using both the TFP values and the logarithmic values of GDP per worker. According to these results, no convergence occurs in the OECD countries in the model where capital per worker and human capital per worker are used as inputs. Changes in the logarithm of GDP per worker, TFP, and efficiency variables are used to test the β-convergence and σ-convergence in the OECD countries. A cross-sectional regression analysis of dlgdp indicates that the β value is negative and significant. However, we could not arrive at a similar conclusion for the tfpch and efch variables, the β values of which were not significant. We can conclude that these variables show divergence rather than convergence.
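To make the σ-convergence diagnostic discussed next concrete, a minimal sketch (hypothetical panel layout and toy numbers, not the study's data; assumes numpy):

import numpy as np

# log_gdp: (T, N) panel of log GDP per worker, one row per year.
# Toy numbers; in the study T = 13 years (2000-2012) and N = 34 countries.
rng = np.random.default_rng(0)
log_gdp = 10.5 + 0.3 * rng.standard_normal((13, 34))

sd = log_gdp.std(axis=1)       # cross-sectional dispersion, year by year
years = np.arange(2000, 2013)

# Sigma-convergence shows up as falling dispersion, i.e. a negative trend.
slope, intercept = np.polyfit(years, sd, 1)
print(f"trend in SD(log GDP): {slope:+.5f} per year")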
The graphic illustration of σ-convergence shows that the standard deviation of the dlgdp variable (SDgdp) trends downward over the study period. However, we cannot make similar observations for the standard deviation of log TFP (SDtfp). Total factor productivity does not seem to have any impact on σ-convergence or β-convergence. The average annual growth rate of some OECD countries is around 4.5 percent, while the relevant figure is reported to be 0.5 percent for Portugal and Greece. Such a difference is expected to have a negative impact on convergence. Likewise, the difference between the top 10 percent and the bottom 10 percent is 7 percent in the 1980s, while the figure is almost 10 percent in 2012 (http://oe.cd/idd), which indicates an unequal distribution of income. In general, the Latin American and Eastern European countries (including Turkey), which joined the OECD community later, show higher efficiency than the other member countries. A noteworthy detail is that losses of human capital per worker occur in countries such as Canada, Luxembourg, and Sweden, which have highly qualified labor and high incomes, and in Israel, where political unrest and fear of war are always on the agenda. In this static comparative analysis of convergence in the OECD countries, some complementary results have been found. However, it may be argued that when dynamic processes or DEA window analysis are used to investigate convergence, the same results may not be reached. We leave that option to a future study of the convergence phenomenon.
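To make the two convergence diagnostics used in this article concrete, the following is a minimal, self-contained Python sketch of β-convergence (a cross-section regression of average growth on the initial income level, where a negative slope indicates convergence) and σ-convergence (the year-by-year cross-country dispersion of log GDP per worker). The panel is synthetic and every variable name is illustrative; this is not the authors' code or data.

import numpy as np

rng = np.random.default_rng(0)

# Toy panel: log GDP per worker for 34 countries over 13 years (2000-2012).
# The data-generating process builds in mild convergence so the
# diagnostics below have something to detect.
n_countries, n_years = 34, 13
log_gdp = np.empty((n_countries, n_years))
log_gdp[:, 0] = rng.normal(10.5, 0.5, size=n_countries)
for t in range(1, n_years):
    growth = 0.02 + 0.015 * (10.5 - log_gdp[:, t - 1]) + rng.normal(0, 0.01, n_countries)
    log_gdp[:, t] = log_gdp[:, t - 1] + growth

# beta-convergence: regress average growth on the initial level.
avg_growth = (log_gdp[:, -1] - log_gdp[:, 0]) / (n_years - 1)
beta, intercept = np.polyfit(log_gdp[:, 0], avg_growth, 1)
print(f"beta = {beta:.4f} (negative => beta-convergence)")

# sigma-convergence: cross-country standard deviation of log GDP per
# worker in each year; a declining series indicates falling dispersion.
sigma = log_gdp.std(axis=0)
print("sigma, first and last year:", round(sigma[0], 3), round(sigma[-1], 3))
print("dispersion fell over the period:", bool(sigma[-1] < sigma[0]))

Run on a real panel, the sign and significance of beta would correspond to the β values reported in Table 6, and the sigma series to the SDgdp and SDtfp curves in Figure 4.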
Therapeutic Effects of Transcranial Magnetic Stimulation on Visuospatial Neglect Revealed With Event-Related Potentials

This study aimed to investigate changes in attention processing after low-frequency repetitive transcranial magnetic stimulation (rTMS) over the left posterior parietal cortex to better understand its role in visuospatial neglect (VSN) rehabilitation. The current study included 10 subacute stroke patients with VSN consecutively recruited from the inpatient stroke rehabilitation center at Xuanwu Hospital (the teaching hospital affiliated with Capital Medical University) between March and November 2019. All patients performed a battery of tasks (including line bisection, line cancellation, and star cancellation tests) two weeks before treatment and at the beginning and end of treatment; the attentive components of the test results were analyzed. In addition, low-frequency rTMS was used to stimulate the left posterior parietal cortex for 14 days, and event-related potential data were collected before and after the stimulation. Participants were evaluated using a target-cue paradigm and pencil-paper tests. No significant differences were detected on the battery of tasks before rTMS. However, we found that rTMS treatment significantly improved the response times and accuracy rates of patients with VSN. After rTMS, the amplitude of the P300 event-related potential on the treatment (left) side was higher than that before treatment (left target, p = 0.002; right target, p = 0.047). Thus, our findings suggest that rTMS may be an effective treatment for VSN. The observed increase in event-related potential amplitude supports the hypothesized compensational role of the contralesional hemisphere in terms of residual performance. Our results provide electrophysiological evidence that may help determine the mechanisms mediating the therapeutic effects of rTMS.

INTRODUCTION

Visuospatial neglect (VSN) is a neuropsychological disorder that impairs higher-level cognition, particularly spatial attention (SA). Deficits in SA not only impact the processing of sensory events but also affect global processing (1). Negative impacts on SA often occur after a stroke in the right hemisphere and manifest as a failure to respond to stimuli in the contralateral visual field (2). While spontaneous recovery from VSN can occur, nearly 40% of patients continue to have symptoms (3). Considering that VSN is a highly debilitating condition that seriously affects patient recovery and quality of life (4), the development of novel therapeutic methods is needed. According to the interhemispheric competition model, the parietal lobes compete with each other to direct attention toward the contralateral space, resulting in reciprocal interhemispheric inhibition. Damage to the right parietal cortex therefore leads to disinhibition of the intact left parietal cortex (5); however, reducing this imbalance is possible. Another hypothesis holds that the destruction of the functional connection between the attention networks of the two cerebral hemispheres is a key mechanism leading to neglect. Functional magnetic resonance imaging (fMRI) studies have shown that VSN recovery is related to the recovery and rebalancing of activity between damaged and undamaged hemispheres, especially in the parietal cortex (6). Transcranial magnetic stimulation (TMS) is considered to be a promising treatment for VSN.
Based on the interhemispheric rivalry model (5), repetitive TMS (rTMS) inhibits neural networks associated with attention in the intact hemisphere, which can normalize interhemispheric cortical excitability and ameliorate the symptoms of VSN. Emerging evidence suggests that rTMS might be effective for improving the behavioral deficits induced by VSN (7), and functional imaging provides some evidence for changes in the attentional network following TMS. Studies using low-frequency TMS have shown that visuospatial attention is impaired by disruption of the right posterior parietal cortex (8). However, this evidence does not consider interindividual variability or attentional processing speed. The effects of TMS on visuospatial attention processing and cognitive function in VSN patients are still poorly understood. To understand these roles, a high-temporal-resolution approach is required to capture the dynamics of corticocortical interaction and to identify the effects of TMS on the different stages of visual attention processing. An event-related potential (ERP) is an electrophysiological measure of the cortical networks involved in cognitive processes, such as attention and working memory (9). Multiple studies have shown that P300, a positive component of ERPs that peaks at ≥300 ms, reflects the summed activity of multiple generators located in a wide range of cortical and subcortical areas (10). The cognitive components of an ERP reflect changes in attentional resources as well as attention-regulated updating of information about the environment (11). In fact, many neurologic and psychiatric diseases, including schizophrenia, migraine, and depression, reduce P300 amplitude or increase its peak latency, indicating a deficit in cognitive processing. A previous study on healthy individuals showed that a TMS stimulus led to an increase in P300 amplitude on the stimulation side in an ERP (12). However, to our knowledge, few studies have assessed the effects of TMS treatment using a visual paradigm. Our previous assessment of patients with VSN revealed that changes in the visual paradigm involved a late (rather than early) component of ERPs (13). Based on this, we sought to evaluate ERPs in patients before and after TMS. We hypothesized that TMS would have an impact on VSN and that this would be reflected through P300. Therefore, we aimed to observe the electrophysiological changes of attention processing in patients with VSN before and after TMS treatment. Additionally, we expected improvements in clinical behavioral evaluation outcomes.

Study Design

The study duration was four weeks (Figure 1), comprising a waiting period of two weeks (i.e., continuing treatment as usual) followed by two weeks of rTMS therapy at four sessions per day. The first behavioral assessment was conducted at the beginning of the waiting period. All patients received physical therapy (PT). ERP and behavior were retested on the first day of treatment as well as after two weeks of TMS. A 30-min PT program was applied immediately after stimulation, mainly focusing on upper and lower limb rehabilitation. This study was approved by the Ethics Committee of Xuanwu Hospital (the teaching hospital affiliated with Capital Medical University; approval number [2019]016) and was conducted in accordance with the principles of the Declaration of Helsinki and its later amendments. All patients provided written informed consent before their participation.
Participants

Ten participants were consecutively recruited from the inpatient stroke rehabilitation clinic of the Department of Rehabilitation at Xuanwu Hospital (the teaching hospital affiliated with Capital Medical University) between March and November 2019. The inclusion criteria were as follows: (1) age 18-80 years; (2) the presence of a right-brain stroke (cerebral infarction or hemorrhage) confirmed by computed tomography (CT) or magnetic resonance imaging (MRI), with a clinical course of at least four weeks; (3) right-handedness; (4) VSN according to a line bisection test, a star cancellation test, or a clinical examination; and (5) the provision of informed consent by the patient and their family. The exclusion criteria were as follows: (1) the presence of new-onset infarction, hemorrhage lesions, or other worsening conditions; (2) the presence of severe uncorrectable visual impairment and/or visual field disturbance; (3) the presence of hemianopsia (diagnosed with perimetry); (4) a previous history of claustrophobia; (5) an epilepsy diagnosis; (6) the presence of metal implants; (7) a Mini-Mental State Examination score <17; (8) being uncooperative during examination; and (9) having used tricyclic antidepressant drugs at any time within the six months before enrollment.

Resting Motor Threshold

A Magstim Rapid2 device (Magstim, Sheffield, UK) with a 70-mm figure-eight coil was used for this measurement. In all participants, the left-hemisphere resting motor threshold was determined as the minimum intensity of single-pulse TMS over the primary motor cortex that evoked visible motor evoked potentials (MEPs; >50 µV) in the first dorsal interosseous muscle on at least 5 of 10 consecutive trials. The two electromyographic (EMG) recording electrodes were placed >2 cm apart. EMG responses were recorded with a Nicolet VikingQuest monitor (VIASYS Healthcare, Inc., Wisconsin, USA).

rTMS Protocol

The same magnetic stimulator was used for this component of the study. We chose an offline low-frequency rTMS protocol (1 Hz, 7.5 min, figure-eight coil) to suppress cortical excitability. The stimulation frequency was set at 1.0 Hz and the intensity at 90% of the resting motor threshold (RMT), with a total of 450 pulses per session (two trains of 225 pulses each) and an intertrain interval of <1 min. A locating cap was used to orient to the posterior parietal cortex (PPC), corresponding to P3 in the 10-20 system of electrode placement. The coil was placed tangentially to the scalp and positioned at 45° to the midsagittal axis over the left PPC; the coil was fixed with a metal clamp. Patients were asked to sit quietly, close their eyes, and keep their head still. The rTMS was administered twice daily for two weeks, with a 12-h interval between the two daily sessions.

Line Bisection Task

On a 295 × 210-mm A4 paper, five parallel line segments were equidistantly distributed, with lengths of 16, 14, 12, 10, and 8 cm. Patients were instructed to mark the midpoint of each line segment. The distance between the marked and actual midpoints was measured as R, and the length of the line segment was denoted by L. The neglect degree was expressed by the following formula: neglect degree (%) = R / (L/2) × 100.

Line Cancellation Task

Thirty randomly selected black line segments (15-20 mm in length and 1 mm in width) were placed on the left and right halves of a 295 × 210-mm A4 paper, with 15 lines on each side. The patient was required to mark all visible line segments.
VSN was indicated if the number of line segments left uncrossed on the left side exceeded that on the right side by more than three.

Star Cancellation Task

In this task, scattered stars, small stars, letters, and words were symmetrically displayed on a 295 × 210-mm A4 paper. The patient was requested to mark all the small stars (27 on the left, 27 on the right, and 2 in the middle) on the test paper. When five or more small stars were omitted on the left, the patient was considered to have VSN. The behavioral results were evaluated by two neurologists who were blinded to the treatment.

ERP Assessment and Procedure

Stimulation was presented with E-Prime 4.5 software (Psychology Software Tools, USA). The participants sat 50 cm away from the 14-inch screen, facing its center. Responses were given with the left and right mouse buttons of the laptop computer (cues: ">" and "<"; target: "*"). The cues were located at the center of the screen, and the target stimuli were presented in two 15-mm squares placed 60 mm to the left and 60 mm to the right of the center of the screen. The ERP task comprised 16 sessions, each of which had 40 trials. Each trial started with a fixation cross in the center. The background was presented for 800-1,000 ms, and the targets were preceded by a cue delivered 1,400-1,800 ms before target onset; the target appeared for 100 ms on either the left or right side of the screen (with equal probability). Participants were asked to press the left or right button as quickly as possible upon detecting the appearance of the target on the corresponding side. The maximum response time was 1,200 ms. After the button was pressed, the screen was cleared, and the next trial began in 1,000 ms. All participants completed 640 trials. Conditions in which the cue correctly indicated the location of the target were recorded as "valid," and conditions in which the cue pointed to the side contralateral to the target were recorded as "invalid." The valid-to-invalid ratio was 80:20. Before completing the test, participants were informed that accuracy and response time were equally important. During the testing period, participants were allowed to rest for 1-2 min between sessions, if desired.

ERP Recording

The ERP was recorded using a Neuroscan system (Compumedics USA Inc., Charlotte, NC, USA) with 64 electrodes placed on the scalp in an EEG cap according to the international 10-20 system. The reference electrodes were placed on the linked bilateral mastoids, and eye movements were monitored through electrodes placed on the outer canthi of the left and right eyes as well as above and below the left eye. EEG data were sampled at 250 Hz and filtered using a 0.05-80 Hz filter. Impedances were maintained at <5 kΩ. Electrooculogram correction was performed via blink filtering, and visually detectable artifacts were removed before signal averaging. Data exceeding ±100 µV were automatically rejected as artifacts. The data were initially segmented into 1,000-ms epochs (200 ms pre-stimulus, 800 ms post-stimulus). Only trials with correct responses were analyzed.

ERP Analysis

The analysis of the P300 components included the presence of waveforms, latency, and amplitude. The average P300 components were obtained at the F3, F4, C3, C4, P3, and P4 electrode sites. The P300 latency was identified manually in the time window of 300-700 ms, and the amplitude was defined as the maximum peak within the same time window.

Statistical Analyses

Data analysis was performed using SPSS version 22.0 (IBM, Armonk, NY).
Repeated-measures analysis of variance (ANOVA) was used to compare data from the pre- and post-treatment stages in all patient groups, and p values were corrected using the Greenhouse-Geisser correction. ERP data were examined using three-way repeated-measures ANOVA (target × hemisphere × recording site) to evaluate the main effects of sessions. The assumption of sphericity was tested using Mauchly's test, and adjustments were applied using the Greenhouse-Geisser correction. Pair-wise comparisons were performed for pre- and post-rTMS data and were subsequently Bonferroni-corrected. The statistical significance level was set at p < 0.05.

Patient Characteristics

A total of 12 patients were initially included in the current study; however, one patient failed to complete the ERP evaluation due to fatigue and one patient was excluded due to artifacts. Therefore, 10 patients were included in the final analysis (nine men and one woman). General patient demographics and data from the battery of tasks administered in the current study are summarized in Table 1. The average participant age was 57.90 ± 11.93 years, and the average disease course was 68.60 ± 43.95 days.

Adverse Events

All patients tolerated the intervention well without any adverse events, including mild events such as slight headache.

Behavioral Scores

VSN patients were assessed at three time points (two weeks before treatment, at the beginning of treatment, and at the end of treatment) using the paper-pencil tests. The results are shown in Figure 2. Using a repeated-measures ANOVA with Bonferroni correction, we found no significant difference in behavioral scores between baseline and the pre-TMS assessment in the line bisection test.

Behavioral Analyses of Response Time and Accuracy Rate

The behavioral analyses of the response time (RT) and accuracy rate under different contexts are summarized in Figure 3. The RT was comparable between the baseline and pre-rTMS assessments in the VSN patients, but there was a significant difference after treatment (Figure 4).

FIGURE 3 | Bar graphs depicting the response time before and after therapy in the context of a valid or an invalid target, as well as a left-cue or right-cue target. *p < 0.05.

Electrophysiological Analyses of P300 Components

P300 Amplitude

Table 2 displays the mean amplitude and latency of P300. For the P300 components, the signals were re-referenced to the mean of the signals from the bilateral mastoid processes. The amplitude of the maximum crest in the time window was defined as the amplitude of a specific ERP component, and the interval from stimulus onset to the maximum crest was defined as the latency. Figure 5 shows the visual ERP grand averages in each group. For P300, we observed a higher mean amplitude evoked over the hemisphere contralateral to the visual target. There was a significant effect on amplitude in the left hemisphere (the treatment hemisphere), both for the left target (F(1,18) = 13.434, p = 0.002) and for the right target (F(1,18) = 4.539, p = 0.047). When the left and right hemispheres were compared, we observed no significant difference.

P300 Latency

The patients enrolled in this study showed a single-peak P300 in the visual paradigm. There were no significant differences in latency before and after treatment (F(1,18) = 0.099, p = 0.757), with no evidence of interaction.
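As a concrete illustration of the P300 quantification described in the ERP Analysis section (maximum positive peak in the 300-700 ms window, with latency taken at the peak), the following is a minimal Python sketch. The function name, the synthetic waveform, and the array layout are illustrative assumptions; this is not the study's analysis code.

import numpy as np

def p300_peak(erp, times, window=(0.300, 0.700)):
    # erp: 1-D averaged ERP waveform for one electrode (µV), baseline-corrected
    # times: sample times in seconds, aligned with erp
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmax(erp[mask])          # maximum positive deflection in the window
    return erp[mask][idx], times[mask][idx]

# Example with a synthetic waveform sampled at 250 Hz (the study's rate),
# spanning the 1,000-ms epoch (200 ms pre-stimulus, 800 ms post-stimulus):
fs = 250
times = np.arange(-0.2, 0.8, 1 / fs)
erp = 8.0 * np.exp(-((times - 0.45) ** 2) / (2 * 0.05 ** 2))  # fake P300 near 450 ms
amp, lat = p300_peak(erp, times)
print(f"P300 amplitude {amp:.1f} µV at {lat * 1000:.0f} ms")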
DISCUSSION

The results of the current study showed that interference with rTMS over the unaffected hemisphere can induce an improvement in VSN accompanied by a higher visual P300 amplitude. Therefore, cognitive compensation in the unaffected hemisphere may play a key role in improving VSN. We found that the performance of VSN patients on the paper-pencil tests was significantly improved after rTMS as compared with spontaneous recovery. This electrophysiological evaluation provides direct evidence of attention processes via a measure of brain activity. The proportion of missed left targets in the ERP experiment was considerable, which is consistent with previously published reports (14). We found that participants also missed some of the right targets, but much less often than they missed the left targets. The paper-pencil tests were conducted two weeks before rTMS, immediately before rTMS, and at the end of the last rTMS session. Interestingly, performance on these tests was not significantly different before rTMS; however, after two weeks of rTMS treatment, there was a dramatic improvement in the paper-pencil test results of the VSN patients. The deficit in lateralized attention is strong in the early stages of stroke. While this deficit can spontaneously recover to a limited extent over time, a much greater degree of improvement was achieved through rTMS. At the behavioral level, rTMS improved the symptoms of VSN; because this effect could not be attributed to spontaneous recovery, we believe that rTMS itself has a positive effect on VSN. Selecting the PPC as the site for rTMS stimulation was a reasonable decision. The PPC is a critical component of attention networks; intact PPC function was essential during the encoding, consolidation, and retrieval of an associability memory enhanced by surprising omissions (15). In a previous study, direct electrical stimulation was performed in seven patients with hemispheric gliomas during surgery with asleep/awake anesthesia; stimulation of the superior parietal lobule caused a marked rightward deviation in all six patients with right-hemisphere lesions (16). Thus, the PPC has been proposed to be a crucial node among the cortical areas included in the attention network. Some fMRI studies have shown that favorable recovery from VSN was associated with increased activation in the left prefrontal and right parietal regions (17). Moreover, increasing numbers of fMRI studies have also reported that VSN might involve not only attention networks but also other brain functional networks. According to the interhemispheric rivalry model, a hemispheric imbalance in excitability is believed to severely affect functional recovery after stroke (4). Previous findings suggest that interhemispheric excitability is rebalanced by applying low-frequency rTMS to the contralateral hemisphere, with the effects transmitted to distant sites through synapses (18). Consistent with our study, low-frequency (1 Hz) rTMS over the left PPC has been shown to reduce the severity of left spatial neglect (19,20). Imaging evidence also suggests that TMS over the left PPC administered for two weeks in patients with left spatial neglect after stroke reduces the overexcitation of the frontoparietal loop (21). Furthermore, rTMS has been shown to increase activation in the fronto-parietal network and to induce a neuroplastic response leading to long-term potentiation (22).
Our study evaluated dynamic neurophysiological changes bilaterally by investigating the effect of inhibiting the left PPC using rTMS. To probe the underlying mechanisms mediating the improvement of VSN, each hemisphere was analyzed separately. As expected, the P300 of the left hemisphere was increased following left PPC rTMS. ERP was used to evaluate the effects of rTMS because it can identify the neural mechanisms underlying task-relevant SA at a finer temporal scale, thereby assessing instantaneous fluctuations. Furthermore, ERP analysis provides a more direct measure of attentional processing than behavioral data alone (23). A recent study showed not only that the P300 amplitude is reduced during early rehabilitation, but also that this reduction could serve as a predictor of negative outcomes in patients with stroke occurring in the territory of the middle cerebral artery (24). Previous studies on ERP after TMS have shown that the P300 amplitude significantly increased when a single pulse was applied over the prefrontal area in healthy participants (12). In this study, patients with VSN exhibited an increase in P300 amplitude on the side contralateral to the lesion rather than on the lesion side following rTMS. P300 is a late positive cognitive component of the ERP. It is considered a useful and sensitive tool for evaluating effects on cognition as well as for examining connections between improvements in cognitive function and activation of the cerebral cortex (25). The amplitude of P300 is proportionally related to the amount of attentional resources allocated during a given task, and the associated latency is related to the speed of cognitive processing of attentional resources (26). The change in P300 amplitude in this study may be related to the cue-target paradigm, which highlights cognitive deficits in visual SA. Further, the change in P300 amplitude observed in our experiment confirmed an improvement in cognition after the treatment of VSN. This paradigm comprises a series of interspersed trials, including a location cue of a given type followed by a target requiring visuospatial information processing (27); it involves executive function demands not only for attention but also for visuospatial processing (28), reflecting the cognitively regulated allocation of neural resources. Analysis of the target-induced P300 reflects the dynamic changes in brain activity related to visual SA with a resolution of milliseconds. Previous studies focusing on auditory paradigms have reported similar performance. A meta-analysis by Jeon et al. found that the P300 elicited by auditory paradigms is relatively more influenced by genetic factors (e.g., in the case of patients with schizophrenia), while the P300 elicited by visual paradigms is more suitable for assessing symptom severity (29). Our results suggest that inhibitory rTMS to the left hemisphere reduces RT by facilitating visuospatial processes. We found that both visual detection and shifts in attention within our paradigm increased the cognitive burden by increasing visuospatial information. This finding provides new insight into the role of visuospatial information processing in cognitive improvement, and many previous studies have reported positive effects of TMS in producing cognitive enhancement (30). For example, participants who received a single pulse of TMS over the frontal eye field just before the onset of a stimulus exhibited enhanced performance (31).
This suggests that a single pulse of TMS can increase cortical excitability for a brief period. In fact, short trains of high-frequency rTMS appear to directly facilitate cortical processing; for example, Sole-Padulles et al. administered 5 Hz rTMS over the prefrontal cortex and found that this substantially enhanced the performance of face-name memory tasks in 40 participants with memory impairment (32). Additionally, in a previous study, functional MRI showed increased activity in the right prefrontal cortex and bilateral posterior cortical regions, suggesting that rTMS could promote the recruitment of compensatory neural networks. As a further example, Snyder et al. applied TMS over tightly restricted areas in a group of cognitively impaired patients and found positive effects for literal and non-symbolic tasks (33). Furthermore, a study by Oliveri et al. found that 1 Hz rTMS applied over the parietal cortex increased participants' performance in a visual search task (34). Overall, the current literature suggests that TMS may enhance cognitive skills and might possibly accelerate the learning process. This work assessed the electrophysiological processes involved in visuospatial attention to evaluate the efficacy of TMS in VSN, although our aim was to identify trends in order to generate hypotheses for further study. This study should be interpreted in light of its limitations. First, this study compared the electrophysiological changes post-rTMS over time without a sham group; therefore, a further study including a sham group will be needed. Second, this was a single-center study with only 10 patients; thus, the small sample size may be a source of bias and may affect the results. Third, there was no follow-up of the patients who completed the rTMS treatment, making it impossible to determine the persistence of the intervention effect. Further, the P300 amplitude may have been confounded by other clinical variables such as dietary and circadian factors. However, P300 is still considered a useful tool for evaluating activation of the cerebral cortex associated with cognitive information processing. High-density EEG combined with sophisticated signal-processing algorithms and TMS-EEG can provide much more information about the neurophysiological characteristics and brain dynamics of cortical areas or networks than 64-channel ERP. Finally, we only evaluated participant responses after valid cues, though an unexpected target may generate more obvious responses. Given that patients did not adequately respond to invalid cues, we were unable to perform statistical analysis on them. Therefore, in the future, we plan to include long-term follow-up to evaluate the long-lasting benefits of rTMS and to use imaging techniques to provide theoretical support for the mechanisms underlying recovery after treatment. In conclusion, through the electrophysiological evaluation of patients with VSN before and after TMS treatment, we provided direct evidence of the role of low-frequency rTMS in SA. Specifically, contralateral low-frequency rTMS treatment resulted in an increase in P300 amplitude (a late component reflecting lateralized attention), indicating an improvement of cognition in VSN. It is possible that rTMS enhances cognitive ability by improving the balance between the hemispheres and the plasticity of brain processing, leading to an increase in task allocation on the treatment side.
This suggests that rTMS possibly acts through a compensation mechanism for tasks performed by the contralateral hemisphere, supporting the compensation theory of the healthy hemisphere. The parameters used in this study are a valuable reference for the selection of clinical VSN treatment strategies.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author(s).

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee of Xuanwu Hospital (the teaching hospital affiliated with Capital Medical University; approval number [2019]016) and were conducted in accordance with the principles of the Declaration of Helsinki and its later amendments. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
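To make the structure of the target-cue paradigm concrete, here is a minimal Python sketch that generates a trial schedule with the timing and 80:20 validity ratio given in the Methods. Validity is drawn probabilistically per trial rather than balanced exactly per session, which is a simplifying assumption; all names are illustrative, and this is not the E-Prime script used in the study.

import random

random.seed(1)

def make_session(n_trials=40, p_valid=0.8):
    # One session: each trial has a central cue, a lateral target,
    # a validity flag, and randomized timing in milliseconds.
    trials = []
    for _ in range(n_trials):
        cue = random.choice(["left", "right"])
        valid = random.random() < p_valid
        target = cue if valid else ("right" if cue == "left" else "left")
        trials.append({
            "fixation_ms": random.randint(800, 1000),
            "cue_to_target_ms": random.randint(1400, 1800),
            "cue": cue,
            "target": target,
            "valid": valid,
            "target_duration_ms": 100,
            "max_rt_ms": 1200,
            "iti_ms": 1000,
        })
    return trials

# 16 sessions x 40 trials = 640 trials, as in the study.
schedule = [make_session() for _ in range(16)]
n_valid = sum(t["valid"] for s in schedule for t in s)
print(f"{n_valid}/640 valid trials (~80% expected)")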
The Human IL-13 Locus in Neonatal CD4+ T Cells Is Refractory to the Acquisition of a Repressive Chromatin Architecture

The Th2 cytokine IL-13 is a major effector molecule in human allergic inflammation. Notably, IL-13 expression at birth correlates with subsequent susceptibility to atopic disease. In order to characterize the chromatin-based mechanisms that regulate IL-13 expression in human neonatal CD4+ T cells, we analyzed patterns of DNase I hypersensitivity and epigenetic modifications within the IL-13 locus in cord blood CD4+ T cells, naive or differentiated in vitro under Th1- or Th2-polarizing conditions. In naive CD4+ T cells, hypersensitivity associated with DNA hypomethylation was limited to the distal promoter. Unexpectedly, during both Th1 and Th2 differentiation, the locus was extensively remodeled, as revealed by the formation of numerous HS sites and decreased DNA methylation. Obvious differences in chromatin architecture were limited to the proximal promoter, where strong hypersensitivity, hypomethylation, and permissive histone modifications were found selectively in Th2 cells. In addition to revealing the locations of putative cis-regulatory elements that may be required to control IL-13 expression in neonatal CD4+ T cells, our results suggest that differential IL-13 expression may depend on the acquisition of a permissive chromatin architecture at the proximal promoter in Th2 cells rather than the formation of locus-wide repressive chromatin in Th1 cells.

The cytokine IL-13 has received considerable attention in recent years because of its critical role as an effector molecule of Th2-mediated disease. IL-13 appears to be necessary and sufficient to induce bronchial hyperresponsiveness, airway eosinophilia, epithelial cell damage, and goblet cell hyperplasia with mucus hyperproduction, key signatures of allergic inflammation in experimental asthma (1-3). Furthermore, IL-13 instigates chronic airway alterations through its ability to induce fibrosis, parenchymal and vascular remodeling, and accumulation of macrophages in the lung (4-7).
Mounting evidence from animal models also implicates IL-13 in the pathogenesis of other Th2-associated disorders, such as atopic dermatitis (8,9) and matrix metalloproteinase- and cathepsin-dependent emphysema (10). Of note, numerous studies in humans have documented increased expression of both IL-13 and its receptors at sites of allergic inflammation (11) and strong associations between genetic variation in IL-13 and increased risk of allergic disease (12). Analysis of chromatin structure has provided considerable insights into the role played by epigenetic mechanisms in the regulation of gene expression throughout the Th2 cytokine cluster that includes IL-5, IL-13, and IL-4 (13). Although focused mostly on IL-4, studies using adult murine CD4+ T cells showed that during Th2 differentiation, the IL-13 locus acquires an open chromatin configuration characterized by increased DNase I hypersensitivity (14-16), extended demethylation (17), and permissive histone modifications (18,19). In contrast, the chromatin within the IL-13 locus in Th1 cells was found to be in an inaccessible state similar to that found in naive CD4+ T cells, with the exception of a single constitutive DNase I hypersensitive (HS) site in the IL-13/IL-4 intergenic region. Information about IL-13 locus remodeling during differentiation of human adult T cells is comparatively limited (20,21) but essentially consistent with that obtained from mouse models (13). We chose to analyze the chromatin-based mechanisms regulating human IL-13 gene expression in neonatal CD4+ T cells. Indeed, immunological events in early life are essential determinants of Th2 allergic inflammation in adults (22,23), and robust correlations exist between IL-13 expression at birth or within the first year of life and subsequent susceptibility to allergic disease (24-27). In an initial effort to characterize the molecular mechanisms controlling IL-13 expression in the neonatal period, we analyzed the patterns of DNase I hypersensitivity and CpG methylation across the locus in human cord blood CD4+ T cells, naive or differentiated in vitro under Th1- or Th2-polarizing conditions. We show herein that the IL-13 chromatin in neonatal CD4+ T cells undergoes extensive remodeling during differentiation into a polarized T helper phenotype. Surprisingly, overall similar patterns of chromatin accessibility and methylation were observed in Th2 and Th1 populations, although IL-13 was not expressed by Th1 cells. Substantial differences in accessibility were limited to the proximal promoter, where a strong DNase I HS site and a permissive epigenetic state were generated selectively in Th2 cells. Our results suggest that the IL-13 locus in neonatal CD4+ T cells is refractory to the acquisition of a repressive chromatin structure. Control of proximal promoter accessibility may therefore be a critical determinant of IL-13 expression.

Real Time Reverse Transcriptase-PCR—After lysis of the cell pellets, total RNA was extracted using TRIzol Reagent (Gibco) and reverse transcribed using Omniscript (Qiagen) and oligo(dT) primers (Gibco). Real time PCR was performed using predeveloped primer and probe sets for IL-13, IFN-γ, and glyceraldehyde-3-phosphate dehydrogenase and universal PCR master mix (Applied Biosystems) in a model 7900HT real time PCR machine (Applied Biosystems). Cytokine cDNA copy number was normalized to glyceraldehyde-3-phosphate dehydrogenase cDNA copy number.
DNase I HS Site Mapping—DNase I hypersensitivity was assessed using methods adapted from Elder et al. (28) and Burch and Weintraub (29). All procedures were performed on ice unless otherwise indicated. Freshly isolated naive or in vitro differentiated CD4+ T cells were washed once in PBS and once in a 1:1 mix of PBS and RSB buffer (10 mM Tris, pH 7.4, 10 mM NaCl, 3 mM MgCl2). Cells were then resuspended in 100% RSB buffer at 20-40 × 10^6/ml. While slowly vortexing, a 10% solution of Nonidet P-40 was gradually added to the cell suspension until a final concentration of 0.23% was reached. The nuclei thus released were pelleted and washed once with RSB. DNA concentration was estimated (A260), and nuclei were resuspended in RSB buffer containing Ca2+ (100 µM) at a DNA concentration of 0.3 mg/ml. Aliquots of nuclei were incubated with increasing concentrations of DNase I (5-120 units/mg DNA; Gibco) for 10 min at 37°C. An equal amount of nuclei lysis buffer (20 mM Tris, pH 8.0, 1% SDS, 0.25 M EDTA, 20 µg/ml RNase) was added to stop the reaction. Following a 1-h incubation at 37°C, Proteinase K (Sigma) was added (100 ng/ml, final concentration), and samples were incubated at 50°C for 3 h with occasional mixing. Four serial phenol/chloroform extractions were then performed, followed by dialysis against two changes of TE buffer. DNA samples (15 µg each) were digested overnight at room temperature with restriction enzymes (100-200 units; New England Biolabs) and ethanol-precipitated. DNA was reconstituted in TE, electrophoresed in a 0.8-1.2% agarose gel in TBE buffer, and transferred to nylon. Target DNA was visualized by indirect end labeling with a radiolabeled probe annealing to one end of the restriction fragment. Probe templates (200-500 bp) were radiolabeled by random priming as described (30).

DNA Methylation Analysis—Genomic DNA was isolated from either freshly isolated or in vitro differentiated T cells as described for DNase I HS site mapping. Following digestion with EcoRI and ethanol precipitation, DNA (2 µg) was resuspended in water, bisulfite-treated as per the manufacturer's instructions (EZ DNA Methylation Kit; Zymo Research), and resuspended in nuclease-free water (Ambion). Primers were designed using Primer3 software (31) following conversion of sequence obtained from GenBank™ (accession number AC004039). PCRs for each primer set were optimized for Mg2+ concentration and annealing temperature. DNA template (50 ng) was amplified with primers complementary to bisulfite-converted DNA sequence (see supplemental material) and Taq polymerase (Invitrogen) for 40 cycles. Amplified DNA was gel-purified using spin columns (Corning-Costar) and cloned using the TOPO TA cloning kit (Invitrogen). For each PCR amplification, 48 colonies were picked, transferred to 96-well plates, and lysed using the Colony Fast Screen kit (PCR Screen; Epicenter). To rule out any bias related to bisulfite conversion and PCR, the DNA samples analyzed in Fig. 7 were subjected to two bisulfite conversions, followed by two PCR amplifications and cloning events per conversion. Small aliquots (3.0 µl of a 1:10 dilution) were removed from each lysate and PCR-amplified with the M13 forward and reverse primers (1.0 pmol) using 1.5 mM MgCl2, 1× Taq buffer, and 0.05 units of Platinum Taq at an annealing and extension temperature of 62.8°C for 40 cycles. PCR products were run on 1.5% agarose gels to assess insert size and yield. Products were then diluted 1:2 to 1:6 with water.
Cycle sequencing reactions consisted of diluted PCR product (2 µl), Big Dye 3.0 nucleotide mix (0.4 µl; ABI), diluted (1:20) 5× sequencing buffer (1.8 µl), M13 primers (7 pmol; 0.17 µl), and water (5.6 µl). Cycle sequencing products were purified using the Agencourt bead system and an FX robot (Biomek). Samples were run on a 3730XL DNA Analyzer (ABI) using a 36-cm capillary system. Data were assessed for initial quality using the SQPR program (32) and assembled using CONSED software (33). Results are presented as percentage methylation at each CpG position: ((number of protected cytosines)/(number of unprotected + number of protected cytosines)) × 100. Between 25 and 45 colonies (typically above 40) from each PCR amplification provided readable sequence. Supplemental Fig. 2 shows the CpG site-by-CpG site analysis of methylation at the whole IL-13 locus in Th2 cells and demonstrates the absence of PCR and clonal bias.

Chromatin Immunoprecipitation—The chromatin immunoprecipitation procedure was adapted from Litt et al. (34). Essentially, cells were fixed by adding 37% formaldehyde directly to the culture media and incubating for 10 min at room temperature. Fixation was stopped by adding one-tenth volume of 1.25 M glycine and incubating at room temperature for 5 min. Cells were collected, washed twice with ice-cold PBS, and resuspended at 20-40 × 10^6 cells/ml in SDS lysis buffer (1% SDS, 10 mM EDTA, 50 mM Tris, pH 8.0). Cells in a volume of 700 µl in an ice bath were sonicated (one-eighth inch tip; Misonix) at 14 watts for six 10-s pulses over a 6-min period. Cellular debris was pelleted, and chromatin preparations were frozen in liquid nitrogen. Aliquots of chromatin (2 × 10^6 cell equivalents) were diluted in chilled ChIP buffer (0.01% SDS, 1.1% Triton X-100, 1.2 mM EDTA, 17.7 mM Tris, pH 8.1, 167 mM NaCl) to a volume of 1 ml, precleared with Protein G-agarose (50 µl; Upstate Biotechnology, Inc.) for 2 h, and incubated overnight with 10 µg of anti-acetylated H3 or H4 antibodies, anti-histone H3, or normal rabbit IgG (Upstate Biotechnology). Following a 2-h incubation with Protein G-agarose beads, the beads were washed five times with cold ChIP buffer. Antibody-chromatin complexes were eluted first with 250 µl of ChIP buffer plus 3.0% SDS and then with 250 µl of ChIP buffer plus 1.0% SDS. Fixation was reversed by incubating overnight at 65°C. DNA was purified by treating sequentially with RNase (New England Biolabs) and proteinase K (New England Biolabs). Following a phenol/chloroform and a chloroform extraction, DNA was precipitated with 0.2 M NaCl, 30 µg of glycogen, and EtOH (70%). DNA yield was determined using Picogreen (Invitrogen). Enrichment for target templates was assessed using Sybr Green real time PCR (Applied Biosystems) with 2 ng of sample DNA and a 0.5 µM concentration of each primer (see supplemental material) in a volume of 20 µl. Reactions were run, and data were collected using a model 7900HT real time PCR machine (Applied Biosystems). Copy number was calculated using the following formula: copy number = 10^[(Ct − 40)/−3.33]. Results were normalized to the copy number of isotype-immunoprecipitated samples to correct for differences in primer efficiency and then normalized to a RAG2 negative-control copy number within each sample.
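The quantitative steps just described (percentage methylation per CpG, the standard-curve copy-number formula, and the two-step ChIP normalization) can be summarized in a short Python sketch. The function names and example Ct values are illustrative assumptions; the paper specifies only the formulas themselves, not code.

def copy_number(ct):
    # Standard-curve formula quoted in the text: 10^[(Ct - 40)/-3.33]
    return 10 ** ((ct - 40.0) / -3.33)

def percent_methylation(protected, unprotected):
    # Protected cytosines as a share of all cytosines read at a CpG position
    return 100.0 * protected / (protected + unprotected)

def chip_enrichment(ct_ab_target, ct_igg_target, ct_ab_rag2, ct_igg_rag2):
    # Step 1: normalize the antibody pulldown to the isotype (IgG) pulldown
    #         to correct for differences in primer efficiency.
    # Step 2: normalize to the RAG2 negative-control region within the sample.
    target = copy_number(ct_ab_target) / copy_number(ct_igg_target)
    rag2 = copy_number(ct_ab_rag2) / copy_number(ct_igg_rag2)
    return target / rag2

print(f"{copy_number(30.0):.0f} copies at Ct = 30")        # ~1000 copies
print(f"{percent_methylation(38, 4):.1f}% methylated")     # 38 of 42 clones protected
print(f"{chip_enrichment(27.0, 31.0, 29.0, 30.0):.1f}-fold enrichment")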
Comparative Sequence Analysis—Alignments between the sequence of the human and mouse IL-13 locus were performed, and the extent of DNA sequence homology was computed, using the Web-based program Genome VISTA (available at pipeline.lbl.gov/cgi-bin/GenomeVista). Regions with a length of at least 50 bp, which showed at least 75% sequence identity at each segment of the alignment between successive gaps, were identified as conserved noncoding sequences (CNS).

IL-13 Chromatin Is Accessible in Neonatal Naive CD4+ T Cells—Nuclease HS sites are believed to reflect the DNA binding activity of sequence-specific trans-acting factors that induce destabilization or displacement of local nucleosomes (35). This chromatin-modifying activity can result in the exposure of additional binding sites, potentially increasing the transcriptional competence of a promoter or the activity of a distal regulatory element (36,37). The mapping of HS sites therefore identifies the locations of putative cis-regulatory elements and determines their state of accessibility to trans-acting factors at specific developmental stages. As a first step toward characterizing IL-13 regulation in neonatal T cells, we mapped the locations of DNase I HS sites across the IL-13 locus in resting human naive (CD45RA+CD45RO−) CD4+ T cells freshly isolated from umbilical cord blood (Fig. 1A). In a 24.1-kb KpnI restriction fragment spanning the IL-13 transcription unit and RAD50/IL-13 intergenic region, two HS sites were detected within the IL-13 distal promoter (HS4 and HS5; Fig. 2A, NAIVE). HS4 is just proximal to a CpG island and a conserved CGRE element (18), whereas HS5 is located ~1.2 kb upstream of the ATG. In contrast, no HS sites were detected within the proximal promoter, transcription unit, or 6.5 kb of 3′-flanking chromatin (Fig. 2, A and B, NAIVE). The IL-13 locus chromatin is therefore largely inaccessible to DNase I in neonatal naive CD4+ T cells, except for the distal promoter. Covalent modifications of chromatin constituents can also have regulatory significance. Cytosine methylation at CpG dinucleotides can increase the repressive nature of chromatin by providing docking sites for methyl-binding proteins, which in turn recruit complexes with histone deacetylase and histone methyltransferase activity (38). In contrast, hypomethylated regions frequently correlate with an open or permissive chromatin conformation, typically thought to result from competitive inhibition of maintenance methylation activity due to transcription factor binding during S-phase of the cell cycle, a process called passive demethylation (39). We therefore examined the levels of CpG methylation within the IL-13 locus using genomic DNA purified from freshly isolated neonatal naive CD4+ T cells. Following bisulfite treatment, overlapping amplicons spanning ~2.0 kb of the 5′-flanking region were cloned and sequenced. Many of the CpG dinucleotides located within this region were close to 100% methylated. In contrast, CpG dinucleotides that co-localize with HS4 and HS5, and the proximal end of the CpG island, were found to be markedly hypomethylated (Fig. 3, Naive). The co-localization of nuclease hypersensitivity and hypomethylation at HS4 and HS5 provides strong independent support for the presence of occupied cis-regulatory elements in neonatal naive CD4+ T cells. We also analyzed ~3.9 kb of 3′-flanking sequence and found two discrete regions of hypomethylation (Fig. 4, Naive).
CpG dinucleotides located at the 3′ end of amplicon 08 (positions +3104, +3156, +3253, and +3302 relative to the IL-13 ATG; Fig. 4, asterisks) were partially unmethylated, and most CpG dinucleotides within a 340-bp region in amplicons 21 and 11 (+4775 to +5113) were hypomethylated. Notably, in contrast to the hypomethylated sites that co-localize with HS4 and HS5, those located in the 3′-flanking sequence did not correlate with DNase I hypersensitivity, suggesting that a permissive epigenetic state may coexist with chromatin otherwise resistant to nuclease activity.

Near Equivalent States of IL-13 Locus Accessibility in Neonatal Th1 and Th2 Cells—According to the paradigm proposed for adult murine and human CD4+ T cells, silencing of Th2 cytokine gene transcription in Th1 cells relies on the development of a repressive chromatin structure during differentiation and its maintenance in effector cells (14,15,40). Whether the same mechanism silences IL-13 transcription in human neonatal Th1 cells is not known. We therefore analyzed the patterns of DNase I hypersensitivity in neonatal naive CD4+ T cells differentiated in vitro under Th1- and Th2-polarizing conditions. After 1-2 weeks of culture under Th1 conditions, significant levels of IFN-γ, but little or no IL-4 and IL-13, were detected (Fig. 1, B and C, Th1). Conversely, after 2 weeks of culture under Th2 conditions, high IL-13 expression was observed, in the virtual absence of IFN-γ (Fig. 1, B and C, Th2). IL-4 expression in Th2 cells was typically modest, consistent with the low propensity of neonatal CD4+ T cells to express this cytokine (41). Differentiation therefore induced polarized Th1 and Th2 cytokine expression patterns in neonatal CD4+ T cells. An examination of nuclease accessibility in activated Th1 and Th2 cells revealed the persistent presence of HS4 and HS5 (Fig. 2A), suggesting that these sites are constitutively occupied by transcription factors. In contrast, the proximal promoter was found to contain an HS site (HS6) considerably stronger in Th2 cells, a result consistent with preferential IL-13 promoter activity in these cells. A single, weak HS site (HS7) was induced in the first intron, preferentially in Th2 cells. Immediately downstream of the 3′-untranslated region, a cluster of three novel HS sites was induced in both Th1 and Th2 cells (HS8 to -10; Fig. 2B). Remarkably, the position of HS8 was predicted by hypomethylation of a CpG site first observed in naive cells (Fig. 4, Naive), raising the possibility that trans-acting factors may reside at this location in naive cells in preparation for differentiation and/or activation (42). In addition, two closely spaced HS sites (HS11 and HS12; Fig. 2B) were detected in the distal 3′-flanking chromatin. Again, the pattern of CpG hypomethylation in naive cells predicted the location where HS11 formed. Further downstream, the CNS-1 enhancer, an element critical for high-level Th2 cytokine expression (43,44), became hypersensitive in both Th1 and Th2 cells (HS13; Fig. 2B). The differential expression of a subset of HS sites became more readily apparent when visualized at higher resolution. The most obvious difference was the preferential expression of HS6 in Th2 cells (Fig. 5A). HS9, which resolves into a doublet at this resolution (Fig. 5B), also appeared to be more intense in Th2 cells. Most of the CNS-1 enhancer element was accessible to nuclease (Fig. 5C), but obvious differences between Th1 and Th2 cells were not detectable.
A Permissive Epigenetic State Exists at the Proximal Promoter in both Resting and Activated Th2 but not Th1 Cells—We also analyzed CpG methylation profiles in Th1 and Th2 cells, resting or activated with immobilized anti-CD3 for 24 h. Figs. 3 and 4 show that, overall, the patterns of CpG methylation detected throughout the IL-13 locus remained surprisingly stable in these cells. The hypomethylated sites in the distal 5′ promoter were essentially invariant in all cell types (Fig. 3), supporting the conclusion that HS4 and HS5 are constitutively occupied by nucleoprotein complexes in the neonatal CD4+ T cell lineage. The hypomethylated site co-localizing with HS8 was present in all cell types, but the CpG located at +3253 relative to the IL-13 ATG (Fig. 4, asterisk) was further demethylated in differentiated cells regardless of activation state, possibly reflecting changes linked to Th cell differentiation. The hypomethylated region co-localizing with HS11 expanded in the 3′ direction in differentiated cells to include CpG dinucleotides co-localizing with HS12 (positions +5113/+5435). Further 3′, the CNS-1 enhancer, which contains a limited number of CpG dinucleotides in its distal half, became moderately demethylated in all differentiated cell types (HS13; Fig. 4). In contrast, analysis of proximal promoter CpG methylation revealed substantial differences between CD4+ T cells polarized under opposing conditions. Indeed, the CpG methylation levels at positions −67, −102, −192, and −280 were significantly reduced in Th2 cells compared with Th1 cells, both resting and activated (Fig. 6A). Immediately downstream of the transcription start site (positioned at −56), CpG dinucleotides located at −34, +6, and +21 were also hypomethylated in Th2 compared with Th1 cells, particularly upon activation. Moreover, anti-CD3-mediated cross-linking led to further hypomethylation of several proximal promoter CpG sites in Th2 cells but had negligible effects in Th1 cells (Fig. 6B). These data suggest that a permissive epigenetic state exists at the IL-13 proximal promoter selectively in Th2 cells. To further support this conclusion, we examined whether histones at that location bore post-translational modifications typically associated with active genes. Fig. 6D shows that histone H4 acetylation levels at the IL-13 proximal promoter were considerably higher in Th2 cells. An opposite pattern was observed for the IFN-γ promoter. These results suggest that covalent histone modifications that favor rapid nucleosome displacement and subsequent recruitment of transcription factors in response to activation signals are established at the proximal promoter in neonatal Th2 but not Th1 cells.

DISCUSSION

The Th2 cytokine IL-13 orchestrates multiple facets of human allergic inflammation through processes typically occurring in early life. Since a large amount of evidence supports a regulatory role for chromatin in controlling profiles of Th2 cytokine gene expression elicited by immune challenges (45), we analyzed the state of IL-13 DNase I hypersensitivity and CpG methylation during the in vitro differentiation of human neonatal naive CD4+ T cells under Th1- or Th2-polarizing conditions. Except for the distal 5′ promoter region, which was found to be constitutively accessible and hypomethylated, the locus was extensively remodeled during T helper differentiation, as revealed by the formation of multiple HS sites and decreased levels of CpG methylation.
Although IL-13 expression was restricted to the Th2 cell population, surprisingly, the locus acquired nearly equivalent patterns of hypersensitivity and CpG methylation during Th1 and Th2 differentiation throughout the locus but not at the proximal promoter, which became preferentially hypersensitive and hypomethylated in Th2 cells. These results suggest that regulation of IL-13 transcription in neonatal CD4+ T helper cells may rely less on the formation of locus-wide repressive chromatin than on mechanisms controlling proximal promoter accessibility.

A Permissive Epigenetic State Exists at the IL-13 Locus in Neonatal Naive CD4+ T Cells—The state of accessibility at the IL-13 locus in human neonatal naive CD4+ T cells contrasts with that observed in the mouse locus. Examination of freshly isolated murine naive CD4+ T cells revealed an IL-13 chromatin architecture devoid of HS sites except for a single constitutive HS site mapping within the IL-13/IL-4 intergenic region (14). The absence of HS sites was taken to imply the existence of a repressive chromatin architecture acting to suppress cytokine expression in naive CD4+ T cells (45). However, human neonatal naive CD4+ T cells express significant amounts of IL-13 in response to T cell receptor cross-linking (41), and the presence of HS4 and HS5 in these cells may represent a mechanism for rapid activation of IL-13 production in neonates.

Complex Relationships between Nuclease Hypersensitivity and DNA Methylation at the IL-13 Locus—Locus-wide high-resolution mapping of cytosine methylation in neonatal naive CD4+ T cells revealed two regions in the distal promoter that were constitutively hypersensitive and constitutively hypomethylated (HS4 and HS5) and two discrete hypomethylated sites in the 3′-flanking chromatin that lacked detectable DNase I hypersensitivity. The latter locations eventually acquired HS sites (HS8 and HS11) during differentiation. Pioneer transcription factors, which can bind to their cognate sites without disrupting local nucleosomes (42), could inhibit maintenance methylation (39) and create a pattern of localized hypomethylation. As differentiation proceeds, pioneer factors recruit chromatin-remodeling complexes to prepare the locus for gene activation, resulting in the generation of nuclease HS sites. The 3′-flanking chromatin in naive T cells could thus be maintained in a state poised for rapid remodeling by resident pioneer factors. At other locations, Th differentiation induced demethylation of CpG dinucleotides co-localizing with HS8, HS12, and HS13, whereas Th2, but not Th1, differentiation induced demethylation at the proximal promoter (HS6). Only three HS sites were not found to correlate with hypomethylation: HS9, which appears to be preferentially expressed in Th2 cells when examined at high resolution; HS10, which mapped to a region lacking CpG dinucleotides; and HS14. Further analysis will be required to elucidate the basis for the apparent dissociation between the state of nuclease accessibility and DNA methylation at these HS sites.

In the Absence of a Locus-wide Repressive Chromatin Configuration, Control of Proximal Promoter Accessibility May Be Rate-limiting—The results of our analysis of the state of IL-13 chromatin in differentiated neonatal Th cells contrast with those obtained from adult mice (14-19) and humans (21), where the Th1/Th2 paradigm was reflected in the chromatin architecture.
In those studies, IL-13 expression in Th2 cells correlated with an open chromatin structure, whereas silencing in Th1 cells correlated with an inaccessible state across the locus. Rather, our data are reminiscent of those recently reported for the murine IL-10 gene, where differences in chromatin architecture between Th1 and Th2 cell clones were most apparent at the proximal promoter (46). Our results likewise suggest the IL-13 locus is maintained in an accessible state in neonatal Th cells differentiated under Th1 conditions and may be a reflection of the propensity of these cells to produce IL-13 (41). Differential accessibility was limited to the proximal promoter in polarized neonatal Th cells, suggesting that control of IL-13 expression depends on regulating transcription factor access at this location. For example, activation-induced remodeling of the IL-2 and IL-12 promoters results in displacement of a local nucleosome, which renders additional essential transactivator sites accessible (47, 48). Apart from the global repression conferred by a higher order chromatin structure, gene repression can also result from an active process. A formal model of gene repression mediated by trans-regulators emerged from studies examining the mechanisms limiting the activity of the even-skipped stripe 2 enhancer involved in specifying the anteroposterior axis during Drosophila neurogenesis (49). In this model, short range repression was mediated by the binding of repressor complexes close to important activator sites, either in distal enhancers or near the transcription start site. Whereas binding of a short range repressor within a distal enhancer acted locally, thus permitting other activators and enhancers located at a distance to function (50, 51), binding of a repressor near the transcription start site had a dominant repressive effect, inhibiting the initiation of gene transcription by multiple distal positive elements (51, 52). Whether similar processes inhibit the activity of critical Th2 cytokine gene promoter trans-activators, such as GATA3 (53) and NFAT (54), needs to be determined through functional dissection of both distal elements and the proximal promoter.

[Displaced figure legend: samples as in Fig. 2 were digested with combinations of restriction enzymes to limit the size of the region analyzed. A, a 4.8-kb EcoRI/BamHI restriction fragment within the 5′-flanking sequence and transcription unit, encompassing the positions of HS4 to HS7, visualized with probe C. B, a 2.1-kb EcoRI/BamHI restriction fragment encompassing 3′-proximal flanking sequence and the positions of HS8 to HS10, visualized with probe D. C, a 2.2-kb SpeI/PvuII restriction fragment located in the distal 3′-flanking sequence and encompassing CNS-1 and part of the HS11 and HS12 positions, visualized with probe E; the Th2 blot was exposed longer than the Th1 blot. D, schematic representation of HS sites mapped in the IL-13 locus (downward arrows); restriction sites were EcoRI (E), KpnI (K), BamHI (B), PvuII (P), and SpeI (S) (PvuII and SpeI sites are not shown exhaustively), and the positions of probe templates are marked by letters and short horizontal arrows below the diagram.]

Much of the empirical data to date has focused on the role of distal elements in controlling gene activity. In mice, the IL-13/IL-4 intergenic constitutive HS site, HS3, has recently been implicated in mediating the spread of a repressive histone modification across the locus during Th1 differentiation (55).
Whether the syntenic region (HS11) in human CD4+ T cells likewise acts to repress IL-13 expression is unknown, but our data provided no evidence of a barrier to nucleases at HS11 or most of the locus. High level Th2 cytokine expression in murine T cells requires the CNS-1 enhancer (43, 44), but we failed to find examples in the literature suggesting that enhancers act to modify proximal promoter DNA methylation in a dominant fashion. Instead, an elegant study revealed that promoter and enhancer DNA methylation states can be regulated independently (56). CNS-1 may therefore be required for high level cytokine expression but not for inducing a stable permissive epigenetic state at the proximal promoter. Another class of distal elements, locus control regions (LCRs), act to inhibit position effects at sites of transgene integration. The recently characterized Th2 cytokine LCR is located within the 3′ end of the RAD50 gene, within a 25-kb region positioned 15–20 kb upstream of the murine IL-13 promoter (57, 58). Three of the LCR HS sites were required for copy number-dependent expression of Th2 cytokines in transgenic mice (59). One of these HS sites (RHS7) was required for Th2 cytokine expression and long range physical interactions between Th2 cytokine gene promoters and the LCR (60). Th2-specific interactions between the IL-13 promoter and LCR HS sites were limited to RHS4, whereas RHS7 maintained physical interactions with the IL-13 promoter in naive CD4+ T cells and Th1 cells (58). Stable long range interaction with the LCR could help to maintain a permissive epigenetic state at the proximal promoter. However, a more direct role of proximal promoter elements cannot be excluded, because the chromatin opening activity of the β-globin LCR was compromised when transgene promoters were deleted (61), suggesting that LCRs, enhancers, and promoters work in concert to control accessibility.

[Displaced figure legend (Fig. 6, continued): brackets with asterisks mark CpG positions at which statistically significant differences in methylation were found between resting Th1 and Th2 cells or stimulated Th1 and Th2 cells (Mann-Whitney U test, one-tailed, p < 0.05); the percentage of methylation is indicated by the scale to the left, and the position of CpG dinucleotides relative to the IL-13 ATG is indicated at the bottom of each graph. D, analysis of histone acetylation levels at the IL-13 and IFN-γ promoters in neonatal Th1 and Th2 cells; results shown are from two independent chromatin immunoprecipitations from each of two donors. In vitro differentiated neonatal Th1 and Th2 cells were fixed with formaldehyde after 4 days of rest, sonicated, and immunoprecipitated with anti-acetylated histone H3, anti-acetylated histone H4, or control antibodies; enrichment for target template was assessed using SYBR Green-mediated real time PCR, with values representing relative copy number after correcting for primer efficiency and normalizing to the negative control RAG2.]

[FIGURE 7 legend: Comparison between accessibility, sequence conservation, and DNA methylation profiles across the IL-13 locus. Top, schematic representation of the IL-13 locus, indicating the locations of constitutive (black ovals), major Th2-specific (gray ovals), and inducible (white ovals) HS sites; the right-pointing arrow indicates the translation start site. Middle, sequence comparison between the human and mouse IL-13 locus, generated by VISTA analysis; dark blue, protein coding sequence; light blue, the 3′-untranslated region; regions with a length of at least 50 bp, which have at least 75% sequence identity at each segment of the alignment between successive gaps, were identified as CNS and are shown in red. Bottom, profiles of DNA methylation detected in neonatal CD4+ Th2 cell populations.]

Evidence for a Phylogenetically Divergent IL-13 Cis-regulatory Architecture—The tools of comparative genomics and bioinformatics have provided insights into the IL-4/IL-13 regulatory architecture. In several cases, the results of sequence comparisons have been used successfully to predict the locations of functional cis-regulatory elements and guide their genetic manipulation (40, 43, 62). Of the 11 HS sites mapped within the IL-13 locus, five co-localize with peaks of human/mouse sequence conservation (Fig. 7): HS5, which may be homologous to murine HSI (14); HS6, which maps within the proximal promoter (53, 54, 63); HS7, which may be homologous to the intronic murine mast cell-specific HS site (64); HS8, which has no known murine counterpart; and HS13, which corresponds to the CNS-1 enhancer. The combination of phylogenetic sequence comparisons, analysis of the native chromatin structure in primary CD4+ T cells, and functional studies provides strong independent evidence that HS5, HS6, and HS13 represent conserved cis-regulatory elements controlling IL-13 expression in neonatal CD4+ T cells. The role of HS7 and HS8 remains to be determined. Among the other HS sites, only HS11 may have a murine counterpart, the constitutive HS3 (14). HS4, HS9, HS10, HS12, and HS14 do not co-localize with peaks of strong sequence conservation and may therefore represent evolutionarily more recent, species-specific cis-regulatory elements. Our results therefore suggest that additional elements not identifiable through phylogenetic sequence comparisons may be required for proper regulation of IL-13 expression in human neonatal CD4+ T cells.
2018-04-03T04:24:22.825Z
2007-01-05T00:00:00.000
{ "year": 2007, "sha1": "720c422b426f35953700538127da316cb29f7794", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/282/1/700.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "c3d0f1b81eafb830955f4163c1af43b41d376e93", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
34875431
pes2o/s2orc
v3-fos-license
Sharp bounds for the p-torsion of convex planar domains

We obtain some sharp estimates for the p-torsion of convex planar domains in terms of their area, perimeter, and inradius. The approach we adopt relies on the use of web functions (i.e. functions depending only on the distance from the boundary), and on the behaviour of the inner parallel sets of convex polygons. As an application of our isoperimetric inequalities, we consider the shape optimization problem which consists in maximizing the p-torsion among polygons having a given number of vertices and a given area. A long-standing conjecture by Pólya–Szegö states that the solution is the regular polygon. We show that such conjecture is true within the subclass of polygons for which a suitable notion of "asymmetry measure" exceeds a critical threshold.

Introduction

Let Ω ⊂ ℝ² be an open bounded domain and let p ∈ (1, +∞). Consider the boundary value problem

−∆_p u = 1 in Ω,  u = 0 on ∂Ω,  (1)

where ∆_p u = div(|∇u|^{p−2} ∇u) denotes the p-Laplacian. The p-torsion of Ω is defined by

τ_p(Ω) = ∫_Ω u_p dx = ∫_Ω |∇u_p|^p dx,  (2)

being u_p the unique solution to (1) in W^{1,p}_0(Ω). Notice that the second equality in (2) is obtained by testing (1) by u_p and integrating by parts. Since (1) is the Euler–Lagrange equation of the variational problem

min_{u ∈ W^{1,p}_0(Ω)} J_p(u),  J_p(u) := (1/p) ∫_Ω |∇u|^p dx − ∫_Ω u dx,  (3)

there holds

τ_p(Ω) = p/(1 − p) · min_{u ∈ W^{1,p}_0(Ω)} J_p(u).

A further characterization of the p-torsion is provided by the equality τ_p(Ω) = S(Ω)^{1/(p−1)}, where S(Ω) is the best constant for the Sobolev inequality ‖u‖^p_{L¹(Ω)} ≤ S(Ω) ‖∇u‖^p_{L^p(Ω)} on W^{1,p}_0(Ω). The purpose of this paper is to provide some sharp bounds for τ_p(Ω), holding for a convex planar domain Ω, in terms of its area, perimeter, and inradius (in the sequel denoted respectively by |Ω|, |∂Ω|, and R_Ω). The original motivation for studying this kind of shape optimization problem draws its origins in the following long-standing conjecture by Pólya and Szegö:

Among polygons with a given area and N vertices, the regular N-gon maximizes τ_p.  (4)

A similar conjecture is stated by the same Authors also for the principal frequency and for the logarithmic capacity, see [13]. For N = 3 and N = 4 these conjectures were proved by Pólya and Szegö themselves [13, p. 158]. For N ≥ 5, to the best of our knowledge, the unique solved case is the one of logarithmic capacity, see the beautiful paper [14] by Solynin and Zalgaller; the cases of torsion and principal frequency are currently open. In fact, let us recall that, for N ≥ 5, the classical tool of Steiner symmetrization fails because it may increase the number of sides, see [9, Section 3.3]. The approach we adopt in order to provide upper and lower bounds for the p-torsion in terms of geometric quantities is based on the idea of considering a proper subspace W_p(Ω) of W^{1,p}_0(Ω) and addressing the minimization problem for the functional J_p on W_p(Ω). More precisely, we consider the subspace of functions depending only on the distance d(x) = dist(x, ∂Ω) from the boundary:

W_p(Ω) := { u ∈ W^{1,p}_0(Ω) : u(x) = φ(d(x)) for some function φ }.

Functions in W_p(Ω) have the same level lines as d, namely the boundaries of the so-called inner parallel sets, Ω_t := {x ∈ Ω : d(x) > t}, which were first used in variational problems by Pólya and Szegö [13, Section 1.29]. Later, in [8], the elements of W_p(Ω) were called web functions, because in case of planar polygons the level lines of d recall the pattern of a spider web. We refer to [5,6] for some estimates on the minimizing properties of these functions, and to the subsequent papers [3,4] for their application in the study of the generalized torsion problem.
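The following short computation (an added check, using only the definitions above) spells out why testing (1) by u_p yields the second equality in (2), and why the energy identity relating τ_p and min J_p holds; in LaTeX notation:

% Multiply (1) by u_p and integrate by parts:
\int_\Omega |\nabla u_p|^{p-2}\,\nabla u_p\cdot\nabla u_p\,dx = \int_\Omega u_p\,dx
\quad\Longrightarrow\quad
\int_\Omega |\nabla u_p|^{p}\,dx = \int_\Omega u_p\,dx = \tau_p(\Omega).
% Evaluating J_p at the minimizer u_p:
J_p(u_p) = \frac1p\int_\Omega |\nabla u_p|^p\,dx - \int_\Omega u_p\,dx
         = \Big(\frac1p - 1\Big)\tau_p(\Omega) = \frac{1-p}{p}\,\tau_p(\Omega),
% which is exactly  \tau_p(\Omega) = \frac{p}{1-p}\,\min_{W^{1,p}_0(\Omega)} J_p.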
Actually, the papers [3,4] deal with the problem of estimating how efficiently τ_p(Ω) can be approximated by the web p-torsion, defined as

w_p(Ω) := p/(1 − p) · min_{u ∈ W_p(Ω)} J_p(u).

While the value of τ_p(Ω) is in general not known (because the solution to problem (1) cannot be determined except for some special geometries of Ω), the value of w_p(Ω) admits the following explicit expression in terms of the parallel sets Ω_t:

w_p(Ω) = ∫_0^{R_Ω} |Ω_t|^q / |∂Ω_t|^{q−1} dt,  (5)

where q = p/(p−1) is the conjugate exponent of p, and R_Ω is the inradius of Ω (see [4]). Clearly, since W_p(Ω) ⊂ W^{1,p}_0(Ω), w_p(Ω) bounds τ_p(Ω) from below. On the other hand, when Ω is convex, τ_p(Ω) can be bounded from above by a constant multiple of w_p(Ω), for some constant which tends to 1 as p → +∞. In fact, in [4] it is proved that, for any p ∈ (1, +∞), the following estimates hold and are sharp: where C denotes the class of planar bounded convex domains; moreover, the right inequality holds as an equality if and only if Ω is a disk. Note that, if p → +∞, then q → 1 and the constant in the left hand side of (6) tends to 1. In this paper, we prove some geometric estimates for τ_p(Ω) in the class C, which have some implications in the conjecture (4). More precisely, we consider the following shape functionals:

Ω ↦ τ_p(Ω) |∂Ω|^q / |Ω|^{q+1}  and  Ω ↦ τ_p(Ω) / (R_Ω^q |Ω|).  (7)

Let us remark that the above quotients are invariant under dilations and that convex sets which agree up to rigid motions (translations and rotations) are systematically identified throughout the paper. Our main results are Theorems 1 and 6, which give sharp bounds for the functionals (7) when Ω varies in C. We also exhibit minimizing and maximizing sequences. These bounds are obtained by combining sharp bounds for the web p-torsion (see Theorem 2 and the second part of Theorem 6) with (6). As a consequence of our results we obtain the validity of some weak forms of Pólya–Szegö conjecture (4). On the class P of convex polygons we introduce a sort of "asymmetry measure" such as

γ(Ω) := |∂Ω| / |∂Ω^⊛|,  (8)

where Ω^⊛ denotes the regular polygon with the same area and the same number of vertices as Ω. Then, if the p-torsion τ_p(Ω) is replaced by the web p-torsion w_p(Ω), (4) holds in the following refined form:

w_p(Ω^⊛) ≥ γ(Ω)^q w_p(Ω)  for every Ω ∈ P.  (9)

Consequently, on the class P_N of convex polygons with N vertices, conjecture (4) holds true for those Ω which are sufficiently "far" from Ω^⊛, meaning that γ(Ω) exceeds a threshold depending on N and p:

τ_p(Ω^⊛) ≥ τ_p(Ω)  whenever γ(Ω) ≥ Γ_{N,p}.

The value of the threshold Γ_{N,p} can be explicitly characterized (see Corollary 4) and tends to 1 as p → +∞. The paper is organized as follows. Section 2 contains the statement of our results, which are proved in Section 4 after giving in Section 3 some preliminary material of geometric nature. Section 5 is devoted to some related open questions and perspectives.

Results

We introduce the following classes of convex planar domains:
C = the class of bounded convex domains in ℝ²;
C_o = the subclass of C given by tangential bodies to a disk;
P = the class of convex polygons;
P_N = the class of convex polygons having N vertices (N ≥ 3).
Tangential bodies to a disk are domains Ω ∈ C such that, for some disk D, through each point of ∂Ω there exists a tangent line to Ω which is also tangent to D. Domains in P ∩ C_o are circumscribed polygons, whereas domains in C_o \ P can be obtained by removing from a circumscribed polygon some connected components of the complement (in the polygon itself) of the inscribed disk. In particular, the disk itself belongs to C_o. Our first results are the following sharp bounds for the p-torsion of convex planar domains.
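For the reader's convenience, here is a sketch (added; it is the standard web-function computation, consistent with the equality cases quoted in the sequel) of how (5) follows from minimizing J_p over W_p(Ω). For u = φ(d(x)) with φ(0) = 0, the coarea formula (|∇d| = 1 a.e.) reduces the problem to one dimension:

% Coarea reduction:
\int_\Omega |\nabla u|^p\,dx = \int_0^{R_\Omega} \varphi'(t)^p\,|\partial\Omega_t|\,dt,
\qquad
\int_\Omega u\,dx = \int_0^{R_\Omega} \varphi'(t)\,|\Omega_t|\,dt .
% Minimizing the integrand of J_p pointwise in s = \varphi'(t),
%   \min_{s\ge 0}\Big[\tfrac1p s^p\,|\partial\Omega_t| - s\,|\Omega_t|\Big],
% is attained at s = (|\Omega_t|/|\partial\Omega_t|)^{1/(p-1)}, which gives
\min_{W_p(\Omega)} J_p = \Big(\frac1p - 1\Big)\int_0^{R_\Omega}
   \frac{|\Omega_t|^{q}}{|\partial\Omega_t|^{q-1}}\,dt,
\qquad\text{hence}\qquad
w_p(\Omega) = \int_0^{R_\Omega}\frac{|\Omega_t|^{q}}{|\partial\Omega_t|^{q-1}}\,dt .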
We recall that, for any given p ∈ (1, +∞), q := p/(p−1) denotes its conjugate exponent.

Theorem 1. For any p ∈ (1, +∞), it holds

1/(q+1) < τ_p(Ω) |∂Ω|^q / |Ω|^{q+1}  for every Ω ∈ C,  (10)

together with a corresponding sharp upper bound on the same quotient, obtained by combining Theorem 2 below with (6). Moreover,
• the left inequality holds asymptotically with equality sign for any sequence of thinning rectangles;
• the right inequality holds asymptotically with equality sign for any sequence of thinning isosceles triangles.

By sequence of thinning rectangles or triangles, we mean that the ratio between their minimal width and diameter tends to 0. We point out that, in the particular case when p = 2, the statement of Theorem 1 is already known. Indeed, the left inequality in (10) holds true for any simply connected set in ℝ² as discovered by Pólya [12]; the right inequality in (10) for convex sets is due to Makai [11], though his method of proof, which is different from ours, does not allow one to obtain the strict inequality. Our approach to prove Theorem 1 employs as a major ingredient the following sharp estimates for the web p-torsion of convex domains, which may have their own interest.

Theorem 2. For any p ∈ (1, +∞), it holds

1/(q+1) < w_p(Ω) |∂Ω|^q / |Ω|^{q+1} ≤ 2/(q+2)  for every Ω ∈ C.  (11)

Moreover,
• the left inequality holds asymptotically with equality sign for any sequence of thinning rectangles;
• the right inequality holds with equality sign for Ω ∈ C_o.

Let us now discuss the implications of the above results in the shape optimization problem which consists in maximizing τ_p in the class of convex polygons with a given area and a given number of vertices:

max { τ_p(Ω) : Ω ∈ P_N, |Ω| = const }.  (12)

We recall that, for any Ω ∈ P, Ω^⊛ denotes the regular polygon with the same area and the same number of vertices as Ω. Moreover, we set γ(Ω) := |∂Ω|/|∂Ω^⊛|; notice that by the isoperimetric inequality for polygons (see Proposition 7), γ(Ω) ∈ [1, +∞) and γ(Ω) > 1 if Ω ≠ Ω^⊛. With this notation, it is straightforward to deduce from Theorem 2 the following

Corollary 3. The regular polygon is the unique maximizer of w_p over polygons in P with a given area and a given number of vertices. More precisely, the following refined isoperimetric inequality holds:

w_p(Ω^⊛) ≥ γ(Ω)^q w_p(Ω)  for every Ω ∈ P.  (13)

As a consequence, using (6), we obtain some information on the shape optimization problem (12) (Corollary 4). In particular, the p-torsion of the regular N-gon is larger than the p-torsion of any polygon in P_N having the same area and an asymmetry measure larger than the threshold Γ_{N,p}:

τ_p(Ω^⊛) ≥ τ_p(Ω)  for every Ω ∈ P_N with |Ω| = |Ω^⊛| and γ(Ω) ≥ Γ_{N,p}.  (14)

Some comments on Corollary 4 are gathered in the next remark.

Remark 5. (i) Using again (6), we infer that, asymptotically with respect to p, the condition γ(Ω) ≥ Γ_{N,p} appearing in (14) becomes not restrictive. Moreover, if p = 2, we have Γ_{N,2} ≤ 2/√3 ≈ 1.15, and the dependence on N of Γ_{N,2} can be enlightened by using the numerical values given in [6]. (ii) Though the validity of (4) is known for triangles, in order to give an idea of the efficiency of Corollary 4, consider the case N = 3 and p = 2, with T^⊛ the equilateral triangle. The solution to (1) on T^⊛ is explicitly known; by (27) below we find w_2(T^⊛) = √3/768 and, in turn, that Γ_{3,2} = √10/3 ≈ 1.054. Consider now the isosceles triangles T_k having the basis of length k > 0 and two equal sides; then γ(T_k) ≥ Γ_{3,2} if and only if 2√10 k³ − 10k² + 3 ≥ 0, which approximately corresponds to k ∈ (0.760, 1.301). We conclude this section with a variant of Theorems 1 and 2.

Theorem 6. For every p ∈ (1, +∞), it holds

1/(2^{q−1}(q+2)) ≤ τ_p(Ω) / (R_Ω^q |Ω|)  for every Ω ∈ C  (15)

(together with an upper bound obtained from (16) and (6)), and

1/(2^{q−1}(q+2)) ≤ w_p(Ω) / (R_Ω^q |Ω|) < 1/(q+1)  for every Ω ∈ C.  (16)

Moreover,
• the left inequality in (15) holds with equality sign for balls;
• the left inequality in (16) holds with equality sign for Ω ∈ C_o;
• the right inequality in (16) holds asymptotically with equality sign for a sequence of thinning rectangles.
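Both equality cases in Theorem 2 can be verified directly from (5); the following computation (added as an illustration) treats the disk B_R, which is a tangential body to itself, and a thin a × b rectangle with a/b → 0:

% Disk of radius R: \Omega_t is the disk of radius R - t, so by (5)
w_p(B_R) = \int_0^{R}\frac{\big(\pi(R-t)^2\big)^q}{\big(2\pi(R-t)\big)^{q-1}}\,dt
         = \frac{\pi}{2^{\,q-1}}\int_0^R (R-t)^{q+1}\,dt
         = \frac{\pi R^{\,q+2}}{2^{\,q-1}(q+2)},
% and, with |B_R| = \pi R^2 and |\partial B_R| = 2\pi R,
\frac{w_p(B_R)\,|\partial B_R|^q}{|B_R|^{q+1}} = \frac{2}{q+2}.
% Thin rectangle (0,a)\times(0,b), a \ll b: \Omega_t = (a-2t)\times(b-2t), so
w_p \approx \frac{b}{2^{\,q-1}}\int_0^{a/2}(a-2t)^q\,dt = \frac{a^{q+1}b}{2^{\,q}(q+1)},
\qquad
\frac{w_p\,|\partial\Omega|^q}{|\Omega|^{q+1}} \longrightarrow \frac{1}{q+1}.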
The right inequality in (15) is not sharp. In fact, for p = 2, one has sharp inequalities on both sides; see [13, p. 100] for the left one, and [11] for the right one. Using the isoperimetric inequalities (15) and (16), one can also derive statements similar to Corollaries 3 and 4, where γ(Ω) is replaced by another "asymmetry measure" built on the inradius.

Geometric preliminaries

In this section we present some useful geometric properties of convex polygons, which will be exploited to prove Theorem 2. First, we recall an improved form of the isoperimetric inequality in the class P, whose proof can be found for instance in [3, Theorem 2]. For any Ω ∈ P, with inner angles θ_1, ..., θ_N, we set

C_Ω := Σ_i cot(θ_i/2).  (17)

Proposition 7. For every Ω ∈ P, it holds

|∂Ω|² ≥ 4 C_Ω |Ω|,  (18)

with equality sign if and only if Ω ∈ P ∩ C_o, namely when Ω is a circumscribed polygon.

Next, we recall that, denoting by R_Ω the inradius of any Ω ∈ P, for every t ∈ [0, R_Ω], the inner parallel sets of Ω are defined by Ω_t := {x ∈ Ω : dist(x, ∂Ω) > t} (notice in particular that Ω_{R_Ω} = ∅). Then we focus our attention on the behaviour of the map t → C_{Ω_t} on the interval [0, R_Ω], and on the related expression of Steiner formulae. For every Ω ∈ P, we set

r_Ω := sup{ t ≥ 0 : Ω_t has the same number of vertices as Ω }.

Clearly, if r_Ω < R_Ω, the number of vertices of Ω_t is strictly less than the number of vertices of Ω for every t ∈ [r_Ω, R_Ω).

Proposition 8. For every Ω ∈ P and t ≥ 0, Ω_t ∈ P and the map t → C_{Ω_t} is piecewise constant on [0, R_Ω). Moreover, for every t ∈ [0, r_Ω], it holds

|∂Ω_t| = |∂Ω| − 2 C_Ω t,  |Ω_t| = |Ω| − |∂Ω| t + C_Ω t².  (19)

Finally, for every t ∈ [0, R_Ω),

|∂Ω_t| ≤ |∂Ω| − 2π t.  (20)

Proof. For t small enough, the sides of Ω_t are parallel and at distance t from the sides of Ω, and the corners of Ω_t are located on the bisectors of the angles of Ω. r_Ω is actually the first time when two of these bisectors intersect at a point having distance t from at least two sides, see Figure 1.

[Figure 1: Intersection of bisectors.]

Therefore, for t < r_Ω, Ω_t has the same angles as Ω (so C_{Ω_t} = C_Ω by (17)), and we notice that the perimeter of the grey areas in Figure 2 is 2t cot(θ_i/2), and their areas are t² cot(θ_i/2), which gives (19) (still valid for t = r_Ω by continuity). Let us now show that the map t → C_{Ω_t} is piecewise constant on [0, R_Ω), assuming that r_Ω < R_Ω. Once t = r_Ω, Ω_t still has sides parallel to the ones of Ω but loses at least one of them. Again, C_{Ω_t} is constant for t ≥ r_Ω until the next value of t such that another intersection of bisectors appears (we now consider bisectors of Ω_{r_Ω}). The number of discontinuities of t → C_{Ω_t} is finite since Ω has a finite number of sides, and therefore, iterating the previous argument, we get that t → C_{Ω_t} is piecewise constant. Finally, from (17) we infer that C_Ω ≥ π for any Ω ∈ P, so that (20) follows from the concavity of the map t → |∂Ω_t| on [0, R_Ω] (see [1, Sections 24 and 55]).

A special role is played by polygons Ω ∈ P such that r_Ω = R_Ω, namely polygons Ω whose inner parallel sets all have the same number of vertices as Ω itself. These are polygonal stadiums, characterized by the following

Definition 9. We call S the class of polygonal stadiums, namely polygons P_ℓ ∈ P such that there exist a circumscribed polygon P ∈ P ∩ C_o having two parallel sides, and a nonnegative number ℓ such that, by choosing a coordinate system with origin in the center of the disk inscribed in P and the x-axis directed as two parallel sides of P, P_ℓ can be written as

P_ℓ = (P⁻ − (ℓ/2, 0)) ∪ ((−ℓ/2, ℓ/2) × (−R_P, R_P)) ∪ (P⁺ + (ℓ/2, 0)),  (21)

where P⁻ (resp. P⁺) denotes the set of points (x, y) ∈ P with x < 0 (resp. x > 0), and R_P is the inradius of P, see Figure 3.

Proposition 10. For every Ω ∈ P, we have Ω ∈ S if and only if r_Ω = R_Ω.

Proof.
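As a quick concrete check of (19) (an added example): for a square of side a all inner angles equal π/2, hence C_Ω = 4 cot(π/4) = 4 and r_Ω = R_Ω = a/2, so

% Steiner formulae on the square of side a:
|\partial\Omega_t| = |\partial\Omega| - 2C_\Omega t = 4a - 8t,
\qquad
|\Omega_t| = |\Omega| - |\partial\Omega|\,t + C_\Omega t^2 = a^2 - 4at + 4t^2 = (a-2t)^2,
% in agreement with the fact that \Omega_t is a square of side a - 2t.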
We use the same notation as in Definition 9. Assume that Ω = P_ℓ ∈ S. Then the bisectors of the angles of Ω intersect either at (−ℓ/2, 0) or at (ℓ/2, 0), which are at distance R_Ω from the boundary, see Figure 4. In particular, if Ω is circumscribed to a disk, namely if ℓ = 0, then the bisectors of the angles of Ω all intersect at the center of the disk. Therefore, Ω_t has the same number of sides as Ω if t < R_Ω.

[Figure 4: Parallel sets of a polygonal stadium P_ℓ.]

Conversely, assume that R_Ω = r_Ω. The set {x ∈ Ω : d(x) = R_Ω} is convex with empty interior, so either it is a point, or a segment. If it is a point, then its distance to each side is the same, and therefore the disk having this point as a center and radius R_Ω is tangent to every side of Ω, so that Ω is circumscribed to a disk. If it is a segment, we choose coordinates such that this segment is [(−ℓ/2, 0); (ℓ/2, 0)] for some positive number ℓ. Every point of this segment is at distance R_Ω from the boundary, so Ω contains the rectangle (−ℓ/2, ℓ/2) × (−R_Ω, R_Ω). Considering the polygon P obtained from Ω by removing this rectangle and gluing together the two remaining halves, we have that P is circumscribed and Ω = P_ℓ.

Remark 11. Thanks to Proposition 10, for any polygonal stadium P_ℓ, the validity of the Steiner formulae (19) extends for t ranging over the whole interval [0, R_{P_ℓ}]. Moreover, the value of the coefficients |P_ℓ|, |∂P_ℓ| and C_{P_ℓ} appearing therein can be expressed only in terms of |P|, R_P, and ℓ (see Section 4). It is enough to use the following elementary equalities deriving from decomposition (21),

|P_ℓ| = |P| + 2ℓR_P,  |∂P_ℓ| = |∂P| + 2ℓ,  C_{P_ℓ} = C_P,  R_{P_ℓ} = R_P,

and the following identities holding for every P ∈ P ∩ C_o:

|∂P| = 2 C_P R_P,  |P| = C_P R_P².  (22)

Finally, we show that the parallel sets of any convex polygon Ω are polygonal stadiums for t sufficiently close to R_Ω:

Proposition 12. For every Ω ∈ P, there exists t̄ ∈ [0, R_Ω) such that the parallel sets Ω_t belong to S for every t ∈ [t̄, R_Ω).

Proof. We define t̄ as the last time t < R_Ω such that Ω_t loses a side (we may have t̄ = 0). Therefore, for every t ∈ [t̄, R_Ω), Ω_t has a constant number of sides, and so is in the class S by Proposition 10.

Proof of Theorem 2

We first prove Theorem 2 for Ω ∈ P, then we prove it for all Ω ∈ C.
• Step 1: comparison with inner parallel sets. For a given Ω ∈ P, we wish to compare the value of the energy with the one of its parallel set Ω_ε for small ε. To that aim, we use the representation formula (5) for w_p(Ω), and Steiner's formulae (19). In applying them we recall that, by Proposition 8, the map t → C_{Ω_t} is piecewise constant for t ∈ [0, R_Ω), and in particular it equals C_Ω on [0, r_Ω]. Taking also into account that (Ω_ε)_t = Ω_{ε+t}, as ε → 0 we obtain a first-order expansion in ε of the quotient w_p(Ω_ε)|∂Ω_ε|^q/|Ω_ε|^{q+1} (formula (24)). As we shall see in the next steps, formula (24) will enable us to reach a contradiction if (11) fails.
• Step 2: if (11) fails for some convex polygon then it also fails for a polygonal stadium. Let Ω ∈ P \ S, and assume that (11) fails. We have to distinguish two cases. First case: assume that (25) holds. Using the isoperimetric inequality (18) and (25), and inserting the resulting information into (24), one sees that the quotient increases when passing from Ω to Ω_ε, for sufficiently small ε. In fact, more can be said. By Proposition 8 we know that C_{Ω_t} = C_Ω for all t ∈ [0, r_Ω). By extending the above argument to all such t, we obtain that, if (25) holds, then the map t → w_p(Ω_t)|∂Ω_t|^q/|Ω_t|^{q+1} is strictly increasing for t ∈ [0, r_Ω). In particular, by (25), the same assumption holds for the parallel set Ω_{r_Ω}. So, if Ω_{r_Ω} ∈ S, we are done since it violates (11). At t = r_Ω the number of sides of Ω_t varies.
If Ω_{r_Ω} ∉ S, we repeat the previous argument on the next interval where C_{Ω_t} remains constant. Again, the map t → w_p(Ω_t)|∂Ω_t|^q/|Ω_t|^{q+1} is strictly increasing on such interval. In view of Proposition 12, this procedure enables us to obtain some polygonal stadium such that (25) holds. Second case: assume that (26) holds. Hence, inserting this information into (24) and arguing as in the previous case, we see that the map t → w_p(Ω_t)|∂Ω_t|^q/|Ω_t|^{q+1} is strictly decreasing for t ∈ [0, R_Ω). In view of Proposition 12, this proves that there exists some polygonal stadium such that (26) holds.
• Step 3: explicit computation for a polygonal stadium. Let Ω = P_ℓ ∈ S be a polygonal stadium. We are going to derive an explicit expression for w_p(P_ℓ). We point out that, in the special case ℓ = 0, Ω ∈ P ∩ C_o (namely Ω is a circumscribed polygon), and it is proven in [4, Proposition 2] that

w_p(Ω) = (2/(q+2)) · |Ω|^{q+1}/|∂Ω|^q  for every Ω ∈ P ∩ C_o.  (27)

In particular, formula (27) shows that the upper bound in (11) is achieved when Ω ∈ C_o. We now show that the above formula can be suitably extended also to the case ℓ > 0. Our starting point is the representation formula (5). Therein, we use the Steiner formulae (19); in particular, by Propositions 8 and 10, we know that C_{Ω_t} ≡ C_Ω for every t ∈ [0, R_Ω). Moreover, since P ∈ P ∩ C_o, we can exploit identities (22). Setting, for brevity, a normalized variable x proportional to ℓ, we obtain the explicit expression (28). Of course, taking x = 0 in (28) gives again (27); on the other hand, taking x → ∞ gives the asymptotic behaviour for thinning polygonal stadiums.
• Step 4: proof of (29). In order to prove the right inequality in (30), consider the corresponding auxiliary function Φ(y), involving the factor (1 + 2y)^q, for y ∈ (0, +∞); we need to prove that Φ(y) < 0 for all y > 0, and this is a consequence of two elementary facts about Φ. In order to prove the left inequality in (30), consider analogously the auxiliary function Ψ(y), y ∈ (0, +∞); we need to prove that Ψ(y) > 0 for all y > 0, which again is a consequence of two elementary facts about Ψ. Both inequalities in (30) are proved and (29) follows. We point out that, in the case q = 2, some explicit computations give the stronger result that the map is decreasing. We believe that this is true for any q, but we do not have a simple proof of this property.
• Step 5: conclusion. Let Ω ∈ P and assume for contradiction that Ω violates (11). Then by Step 2 we know that there exists a polygonal stadium which also violates (11). This contradicts Step 4, see (29). We have so far proved that (11) holds for all Ω ∈ P. By a density argument we then infer that

1/(q+1) ≤ w_p(Ω)|∂Ω|^q/|Ω|^{q+1} ≤ 2/(q+2)  for every Ω ∈ C.  (31)

Therefore, in order to complete the proof we need to show that the left inequality in (31) is strict. Assume for contradiction that there exists Ω ∈ C such that

w_p(Ω)|∂Ω|^q/|Ω|^{q+1} = 1/(q+1).  (32)

Take any sequence Ω_k ∈ P such that Ω_k ⊃ Ω and Ω_k → Ω in the Hausdorff topology. Similar computations as in (23), combined with (20), yield an estimate with some positive constant α, depending on Ω but not on k. Therefore, since (Ω_k)_t → Ω_t for all t ∈ [0, R_Ω], we can pass to the limit, with infinitesimals o(1) (independent of ε) as k → ∞. Hence, by letting k → ∞ and taking ε sufficiently small, we obtain w_p(Ω_ε)|∂Ω_ε|^q/|Ω_ε|^{q+1} < 1/(q+1), which contradicts (31).

Proof of Theorem 1

The inequalities (10) follow directly from (11) and (6), so we just need to show that they are sharp. For the right inequality, take a sequence of thinning isosceles triangles T_k. Then, by Theorem 2 (every triangle being circumscribed to its incircle) we have w_p(T_k)|∂T_k|^q/|T_k|^{q+1} = 2/(q+2). On the other hand, the asymptotic behaviour of the quotient τ_p(T_k)/w_p(T_k) is known by [4, Proposition 3] and (6).

Proof of Theorem 6

Since it follows closely the proof of Theorem 1, we just sketch it. We first prove the counterpart of Theorem 2 and we follow the same steps.
• Step 1.
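As an independent numerical sanity check of Theorem 2 (our addition, not part of the paper), the integral (5) can be evaluated by quadrature for an a × b rectangle, whose inner parallel sets are the rectangles (a − 2t) × (b − 2t); the computed scale-invariant quotient should lie in (1/(q+1), 2/(q+2)] and approach 1/(q+1) as the rectangle degenerates:

# Numerical check of Theorem 2 on rectangles, using formula (5):
#   w_p(Omega) = integral over (0, R) of |Omega_t|^q / |dOmega_t|^(q-1) dt.
import numpy as np

def web_torsion_rectangle(a, b, p, n=200000):
    """w_p of the rectangle (0,a) x (0,b), midpoint-rule quadrature of (5)."""
    q = p / (p - 1.0)
    R = min(a, b) / 2.0                       # inradius
    t = (np.arange(n) + 0.5) * (R / n)        # midpoint grid on (0, R)
    area = (a - 2 * t) * (b - 2 * t)          # |Omega_t|
    per = 2 * ((a - 2 * t) + (b - 2 * t))     # |dOmega_t|
    return (area**q / per**(q - 1.0)).sum() * (R / n)

p = 3.0
q = p / (p - 1.0)
for b in (1.0, 2.0, 10.0, 100.0):             # a = 1 fixed; growing b = thinning
    w = web_torsion_rectangle(1.0, b, p)
    ratio = w * (2 * (1.0 + b))**q / (1.0 * b)**(q + 1.0)
    print(f"b = {b:6.1f}   ratio = {ratio:.6f}")
print("bounds:", 1.0 / (q + 1.0), 2.0 / (q + 2.0))

For b = 1 (the square, a circumscribed polygon) the printed ratio equals 2/(q+2), while for large b it approaches 1/(q+1), in agreement with the two equality cases of Theorem 2.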
Given Ω ∈ P and using R_{Ω_ε} = R_Ω − ε, we prove the expansion (33).
• Step 2. We prove that, if (16) fails for some Ω ∈ P, then it also fails for a polygonal stadium. To that end, we estimate the sign in (33) with the help of classical geometric inequalities relating |Ω|, |∂Ω| and R_Ω (see [1]).
• Step 5. The previous steps lead to (16) for polygons and, by density, for convex domains. The strict right inequality in (16) can be obtained by reproducing carefully the computations in Step 1, similarly as done in Step 5 of Section 4.1. Now the counterpart of Theorem 2 is proved, and we may use (6) in order to get (15) from (16). Balls realize equality in the left inequality of (15) because they are at the same time circumscribed and maximal for the quotient w_p/τ_p.

Some open problems

We briefly suggest here some perspectives which might be considered, in the light of our results. Sharp bounds for the p-torsion in higher dimensions. In higher dimensions the shape functionals τ_p and w_p can be defined in the analogous way as for n = 2. In [2], Crasta proved the following sharp bounds: for every bounded convex set Ω ⊂ ℝⁿ,

(n+1)/(2n) < w_2(Ω)/τ_2(Ω) ≤ 1.
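Specializing Crasta's estimate to the plane (an added remark) gives, for n = 2:

% n = 2 in Crasta's bound:
\frac{n+1}{2n}\Big|_{n=2} = \frac34
\quad\Longrightarrow\quad
\frac34 < \frac{w_2(\Omega)}{\tau_2(\Omega)} \le 1
\qquad\text{for every bounded convex } \Omega \subset \mathbb R^2,

so that in the planar case p = 2 the web torsion recovers at least three quarters of the true torsion.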
2011-12-21T15:12:14.000Z
2011-12-21T00:00:00.000
{ "year": 2013, "sha1": "df91031021611d12e38900146950a191a6d149bd", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1112.5050", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "df91031021611d12e38900146950a191a6d149bd", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
233769854
pes2o/s2orc
v3-fos-license
Designing Tasks for Introducing Functions and Graphs within Dynamic Interactive Environments In this paper, we elaborate on theoretical and methodological considerations for designing a sequence of tasks for introducing middle and high school students to functions and their graphs. In particular, we present didactical activities with an artifact realized within a dynamic interactive environment and having the semiotic potential for embedding mathematical meanings of covariation of independent and dependent variables. After laying down the theoretical grounds, we formulate the design principles that emerged as the result of bringing the theory into a dialogue with the didactical aims. Finally, we present a teaching sequence, designed and implemented on the basis of the design principles and we show how students’ efforts in describing and manipulating the different graphs of functions can promote their production of specific signs that can progressively evolve towards mathematical meanings. Introduction The concept of function is central in modern mathematics and is one of the basic concepts in mathematics teaching and learning. The Indicazioni Nazionali of the Italian Ministry of Education establishes that at the end of high school (grade 13), students should have learned functions, differential calculus, integrals, and they should be able to interpret the graph of a function and to represent a real-valued function of a real variable through its graph, even by using technological artifacts for representing data [1]. Indeed, the competence related to reading a graph and representing a phenomenon through a simple function and its graph is recognized as one of the main mathematical competencies for citizenship. The Unione Matematica Italiana suggests that one of the main didactical aims in learning mathematics is the acquisition of functional thinking by students [2] (p. 206). This type of thinking can be fostered through a strong connection between the function and its graph, the interpretation and the analysis of its behavior, and the link to the analytical expression. In other words, in addition to the techniques of calculus, is the importance of qualitative analysis, which should become a habit of mind for students and teachers. However, from a didactical point of view, it should be considered that the conceptual area of functions and their representations involves significant cognitive complexities. There is a wide literature describing a variety of difficulties related to the learning of functions [3][4][5][6][7]. Moreover, many researchers reported on possible problems encountered by students when dealing with the graphical representation of a function [8,9]. In particular, it is well known that students often see a Cartesian graph of a function as an object, a static picture of a physical situation, without relating the Cartesian graph to the underlying functional relationship and without imaging the trajectory of a point moving on the plane according to the covariation of two quantities, one depending on the other. This is pointed out by Carlson [10] and Thompson [11], who stressed the importance of conceiving functions as asymmetric relations between two variables. In order to address this type of difficulty, some researchers implemented mathematical tasks involving dynamic representations of covarying quantities. 
This approach seems to have supported middle and high school students in considering functional dependency when dealing with graphical representations of functions [8,[12][13][14][15]. The aim of the study presented in this paper is to design a sequence of tasks to introduce middle and high school students to functions as covariation. Moreover, the paper is intended to theoretically motivate the choices underlying the construction of the tasks based on the identification of the cognitive processes to be promoted and the ways in which they can be promoted. After laying down our theoretical grounds, the first fundamental step towards the realization of this aim consists of formulating the design principles. They emerged as the result of an attempt to bring all the theoretical lenses adopted into a dialogue with our educational objective. Then we present a teaching sequence on real-valued functions of a real variable and their graphical representations, designed and implemented on the basis of the design principles. The paper ends with an a priori analysis of the students' cognitive processes that can be supported by the activities of the sequence. The a priori analysis is conducted in the light of the theoretical frameworks that will be reported in the next section, and it is also corroborated by some qualitative data collected in previous studies [16][17][18], where cognitive processes enacted by students, carrying out some activities that are part of our sequence, are analyzed from different theoretical perspectives. Theoretical Framework This study is situated within the framework of design-based research (DBR). Generally speaking, the aim of DBR is to promote specific teaching-learning processes through both the theoretical elaboration, that leads to the formulation of design principles and the development of teaching materials based on these principles. Theories play a central role in DBR and different levels of generality of theories are considered [19,20]. The following theoretical levels are taken into account in the literature [21] and we list them from the most general to the most specific: orienting frameworks or background theories; domain-specific instruction theories; local instruction theories. • Orienting frameworks [20] or background theories [22]: according to Prediger, Gravemeijer, and Confrey [21] (pp. 884-885), these theories are the foundation of the research that significantly influence both the design and the way the data are interpreted. In this study, we adopt a sociocultural perspective [23]. In this frame, learning arises through the collaboration between individuals that cooperate to accomplish a task with a common aim. The shared social experience promotes interpersonal (external) processes that can be internalized becoming an intrapersonal (internal) process. In the process of internalization, the semiotic production (i.e., the production of signs, in particular of language) assumes a fundamental role. Signs have a twofold function as mediating tools: as a communicative or cultural tool, used for the collaborative construction and for sharing of knowledge; and as a psychological tool, used for individual thought and reflection. • Domain-specific instruction theories: according to Prediger, Gravemeijer, and Confrey [21] (p. 885) they are theories that "are specific for the school subject, in our case mathematics education, and offer a general framework for action". 
This study is rooted in the Vygotskian perspective, with particular regard to the social construction of knowledge and semiotic mediation accomplished through cultural artifacts. The designed tasks involve the use of digital artifacts to promote students' construction of mathematical meanings through the production of signs, within the Theory of Semiotic Mediation [24]. In this theory, the activities with an artifact aim to promote "the evolution of signs expressing the relationship between the artifact and tasks into signs expressing the relationship between artifact and knowledge" [24] (p. 753). In this way, the artifact signs can evolve into mathematical signs, and then personal meanings can evolve into desired mathematical meanings, where the artifact assumes the role of the tool of semiotic mediation. This evolution of meaning is not necessarily spontaneous. Social interactions, in particular verbalization, and the specific teacher's interventions, as in Mathematical Discussion [25], can promote the production of collective signs, the awareness of the meanings of different signs, and their evolution towards mathematical meanings. • Local instruction theories, that address the learning of a specific topic, in this case, functions and graphs. They are "theories about a possible learning process, together with theories about possible means of supporting that learning process [ . . . ]. These means of support include the classroom social norms and the socio-mathematical norms that have to be in place" [21] (p. 885). In this paper, we are interested in real functions and their graphs and we discuss these concepts from an epistemological point of view. It is possible to think about functional dependency as a correspondence, which means functions as entities that accept an input and produce an output. However, functional dependency can be seen also as covariation, as a process involving two quantities varying together [11,26,27]. A representation of functions could highlight one of these points of view and hide the other one. In mathematics education, the covariational view of functions is essential for understanding more advanced concepts of calculus that are related to functions, such as limits and derivatives [12,28,29]. More specifically, in this study, we consider a qualitative description of covariation as a dynamic and asymmetric relationship between the variations of the two variables. Given these theoretical lenses, addressing different levels of generality, we first worked to make a bricolage. The term was suggested by Gravemeijer and Cobb [22] and it refers to the work of tinkering with these different theoretical resources in order to formulate the design principles. In particular, we observe that the asymmetric relation between variables can be mediated (in the sense of Theory of Semiotic Mediation) by a dynamic interactive environment (DIE), as explored by Falcade, Laborde, and Mariotti [13]. A DIE makes it possible to construct geometrical objects and to move them by dragging some elements of the construction and maintaining the mathematical relationships established during the construction process. Therefore, it is possible to distinguish two types of movement: direct, when the user acts directly on a base object, i.e., a point that gives rise to the construction; and indirect if the observed movement is obtained as a consequence of dragging another object [30]. 
Indeed, it is thanks to the use of dragging that the dependence relation between variables, which characterizes functional relationships, can be experienced in terms of these two different types of motion. In this context, dragging assumes the role of a psychological tool [31,32]. In the following sections, after the presentation of the design principles, we describe a sequence of activities that we design to introduce students to real-valued functions of a real variable and their graphs, focusing on the evolution of signs in different representations of covariation within a DIE.

The Design Principles

In the DBR methodology, the task design emerges as a result of a dialogue between theoretical perspectives and educational objectives. The first part of this study concerns the formulation of design principles as the result of such a dialogue. The role played by these different principles is to guide the design of the task sequence. As we did in the previous section with the theories involved in this study, we formulate the design principles addressing different levels of generality and different focuses, which we identify as methodological, epistemological, and related to the artifact.

Methodological Principles

Methodological principles emerge from the orienting frameworks and domain-specific instruction theories, the Vygotskian perspective, and the Theory of Semiotic Mediation.
• Minimize teacher's interventions during classroom activities in order to pose particular attention to students' interactions and to promote the production of individual and collective signs and meanings. The teacher orchestrates the discussion so that the development of students' meanings towards mathematical meanings is not forced but emerges in the construction of a semiotic chain.
• Make students work alone, in pairs, in small groups, and in the whole-class group.
• Foster students to discuss as well as ask students for written explanations to support their production of signs, to communicate and to become aware of the personal, collective, and mathematical meanings.
• Use (or not) some mathematical formal terms in the text of the task depending on the goal of the activity and concerning students' words used in previous lessons.
• Support the development of a suitable language, from a mathematical point of view, to communicate and describe the representations proposed.
• Create conflictual situations for students who experience a mismatch between what they see and what they expect to see.
• Do not give definitions a priori. The aim is not to explain certain properties of functions but to promote the production of a language that can evolve towards a mathematical language about functions and their graph.
• Use artifacts to support the development of meanings from personal to mathematical meanings.

Epistemological Principles

Epistemological principles concern different aspects of the mathematical meanings that are related to the educational objectives. Their formulation is influenced by our local instruction theories, and especially by the theories on covariational reasoning:
• Focus on students' exploration of covariation.
• Focus on qualitative aspects, to study the functions' behavior and the relationships between changes in the variables.
• Ask for a description of the possible changes of the two variables, instead of a description of what specific values they can assume.
• Represent the dependence relation of f(x) to x in terms of an asymmetric relation between the two variables.
• Use dynamic representations of functions to foster comparisons between the variations of the two variables in the domain and the codomain.

Artifact-Related Principles

The artifact-related principles guide us both in the choice of the artifact and in the design of the activities with the artifact. These principles arise from the domain-specific instruction theories and the local instruction theories:
• Use an artifact that effectively implements the above epistemological design principles. In particular, the artifact has to allow the construction of both static and dynamic representations of functions. Moreover, through the interaction with the artifact, students should explore covariation of variables.
• Represent the dependence relation of f(x) to x in terms of the asymmetric relationship between the movements of the two variables.
• Build and reinforce the relations between the different representations of functions, especially between dynamic and static ones.
• Ask for transitions between dynamic and static representations of functions and work on the differences and similarities between these different representations.
• Define ad hoc functions, with a specific behavior or property that can be embedded in the artifact.
• Give students previously constructed dynamic interactive files with dynamic graphs that they can manipulate and explore by dragging and by activating the trace mark.
• Use different dynamic graphs characterized by different reciprocal positions of the axes. In particular, use one-dimensional graphs where both variables move in the same direction and two-dimensional graphs where the two variables move along the Cartesian axes.
• Do not use numbered axes in the initial stages, in order to put the focus on the movement of variables instead of on their values.
• Disable the magnetism in the files. Magnetism is a property that DIEs can assign to a point, making it snap to the whole numbers as it moves along the line representing the real axis, as if attached by a magnet; disabling this tool makes the dragging of the point more uniform.
• Use ticks instead of points, which is the default construction offered by DIEs, to represent the variables, in order to highlight the distinction between the meanings of "one value" and "a pair of values".
In the next section, we discuss some epistemological and cognitive issues in order to design a didactical sequence of activities based on the design principles.

From the Design Principles to the Didactical Sequence

The usual representation of real-valued functions of a real variable is the Cartesian graph, which mathematically is the set of points (x, f(x)) where the independent variable x belongs to the domain of the function. The Cartesian graph is a powerful representation of functions since it offers an immediate and global view of the "behavior of the function". From a cognitive point of view, interpreting and manipulating a graph requires the reconstruction of the relationship between the two real variables x and f(x), starting from the set of points P of coordinates (x, f(x)). ONE point P of the graph is a point in the Cartesian plane that represents the functional relation between TWO numbers (x and f(x)). These TWO numbers belong to the same set of real numbers, but they correspond to TWO points belonging to TWO different straight lines, that are orthogonal to each other, respectively the x-axis and the y-axis.
The choice of representing the same set of real numbers as two different and orthogonal lines makes the construction of the graph possible, but it brings high complexity from a cognitive point of view, as highlighted in the literature, e.g., [4,5,7]. In order to "see" the covariation of variables in a graph, it is necessary to image the variation of the (independent) variable x and the consequent variation of the (dependent) variable f (x), to coordinate the two variations and to image the relationship between them. In other words, starting from a graph, that is a static curve in the Cartesian plane, the identification of the covariation of the two variables requires a dynamic reconstruction by the subject that has to reconstruct and coordinate the movements expressing the variations. As already mentioned in the paper, this reconstruction is not straightforward for many students, who make several mistakes in the interpretation and construction of Cartesian graphs. Appropriate use of specific artifacts makes it possible to perceive variation as movement and to act on the movement itself, consistently with the constraints imposed by the dependency between the two variables. In fact, a dynamic representation cannot be realized within a paper and pencil environment, and it is necessary to make use of appropriate supports, such as dynamic geometry software [33]. Moreover, the use of artifacts can also go far beyond the possibility of perceiving and acting on the variation of the variables. The Theory of Semiotic Mediation [24] considers an artifact to be an instrument of semiotic mediation, as it can have a dual relationship with personal meanings and mathematical meanings. It can, therefore, play a powerful role in the construction of mathematical meanings and, thus, in the learning process: " . . . on the one hand, personal meanings are related to the use of the artifact, in particular in relation to the aim of accomplishing the task; on the other hand, mathematical meanings may be related to the artifact and its use. This double semiotic relationship will be named the semiotic potential of an artifact. Because of this double relationship, the artifact may function as a semiotic mediator and not simply as a mediator, but such a function of semiotic mediation is not automatically activated; we assume that such a semiotic mediation function of an artifact can be exploited by the expert (in particular the teacher) who has the awareness of the semiotic potential of the artifact both in terms of mathematical meanings and in terms of personal meanings" [24] (p. 754, original emphasis). According to the Theory of Semiotic Mediation, we analyze the semiotic potential of DIEs regarding the construction of mathematical meanings related to the concept of function. In this paper, we focus on representations of functions realized through a DIE and known as DynaGraphs [34], which are dynamic representations in which the independent variable is dynamically draggable on a line and is presented separately from its image. Both the x-axis and y-axis are horizontal; originally, they were referred to as "the x-Line" and "the f (x)-Line" (see Table 1). Table 1. The graphs used in the activities. SGc is the well-known Cartesian graph. DGp, DGpp, and DGc are dynamic graphs where x and f (x) are represented by ticks. The tick representing x is draggable (as indicated by the mouse cursor in the screenshots) and its movement causes the movement of the tick representing f (x), according to the functional dependency. 
[Table 1; the screenshot column of the original table is not reproduced.]
Acronym | Description
DGp | Dynamic graph with one (horizontal) line. The two ticks representing x and f(x) are on the same line.
DGpp | Dynamic graph with two horizontal parallel lines; the ticks representing x and f(x) move on distinct parallel lines.
DGc | Dynamic graph with two perpendicular lines; the ticks representing x and f(x) move along the Cartesian axes.
SGc | Static Cartesian graph.

Results: The Didactical Activities

In this section, we present a didactical sequence on the introduction of real-valued functions of a real variable and their graphical representations. The sequence is designed based on the design principles presented above and it can be proposed to both middle and high school students. Similar activities have been experimented with and analyzed [16-18,38,39] through different theoretical frameworks. Before getting into the heart of some activities, we make some observations about the structure of our sequence that starts from a DGp and brings us to the construction of an SGc. The independence of the x variable is realized by the possibility of freely dragging a point, bound to a line (the x-Line); the resulting movement visually mediates the variation of the point within a specific domain. The dependence of the f(x) variable, instead, is realized by an indirect motion: the dragging of the independent variable along its axis causes the motion of the dependent variable, bound to another line (the f(x)-Line), that cannot be directly dragged. In other words, the experience of dragging and of the two types of motion is related to functional dependency. Moreover, this dependency is then interpreted as invariant under dragging, when exploring different dynamic representations of a function in a DIE [35,36]. This is used to lead students from the production of a language in the context of the artifact to a mathematical language, for example, in the transition from the description of the different motions to the description of the relationship in terms of logical dependency between variables. The relation between the two movements represents, in fact, the covariation between the two variables. For example, if the two variables in the DynaGraph move in the same direction, the function is increasing; if they move in opposite directions, the function is decreasing. A constant function is represented with the independent variable that can be dragged by a direct motion and the other variable standing still. The relations existing between the mathematical properties of a function and the properties of its dynamic representation are many and they are very deep from a mathematical point of view, see, for example, [37].
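To make the direct/indirect motion concrete, the following minimal sketch (our illustration, not taken from the teaching sequence; the actual activities use pre-designed files in a DIE) simulates a DGpp-like dynamic graph in Python with matplotlib, where a slider plays the role of the dragging tool and a trace mark is activated on the dependent variable; the function f is an arbitrary illustrative choice:

# Minimal DGpp-style "DynaGraph": two parallel lines, one draggable tick.
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

f = lambda x: x**2 - 2          # illustrative function, not prescribed by the paper

fig, ax = plt.subplots(figsize=(8, 3))
ax.set_xlim(-5, 5); ax.set_ylim(-0.5, 1.5); ax.set_yticks([])
ax.axhline(1.0, color='gray', lw=0.8)   # the x-Line (independent variable)
ax.axhline(0.0, color='gray', lw=0.8)   # the f(x)-Line (dependent variable)
x_tick, = ax.plot([0.0], [1.0], marker='|', ms=25, color='tab:blue')
fx_tick, = ax.plot([f(0.0)], [0.0], marker='|', ms=25, color='tab:red')
trace, = ax.plot([], [], '.', ms=3, color='tab:red')   # the "trace mark" on f(x)
trace_x = []

slider = Slider(plt.axes([0.15, 0.04, 0.7, 0.05]), 'drag x', -5.0, 5.0, valinit=0.0)

def update(x):
    # Direct motion: the x tick follows the dragging of the slider.
    # Indirect motion: the f(x) tick is computed and cannot be dragged.
    x_tick.set_xdata([x])
    fx_tick.set_xdata([f(x)])
    trace_x.append(f(x))
    trace.set_data(trace_x, [0.0] * len(trace_x))
    fig.canvas.draw_idle()

slider.on_changed(update)
plt.show()

Dragging the slider makes the monotonicity properties observable exactly as described above: the two ticks move in the same direction where f is increasing and in opposite directions where it is decreasing, while the trace marks the image of the explored interval.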
In terms of the Theory of Semiotic Mediation [24], the artifact, that is constituted by dynamic graphs, embeds a mathematical knowledge and it is linked to artifact signs (signs related to the movement of points on the screen) and mathematical signs (signs related to the functions' properties expressed within a mathematical theory). Another very important tool offered by many DIEs is the trace of a point. The trace mark allows displaying the trajectory of an object during the dragging action. For example, by activating the trace mark on the independent variable x, when x is dragged in an interval [a, b], the part of the x-line corresponding to [a, b] will be displayed. In a similar way, by activating the trace mark on the dependent variable f (x), when x is dragged in [a, b], all the positions assumed by f (x) will be displayed and then the subset of the f (x)-line representing the set f ([a, b]) will be marked. In summary, dynamic graphs embed mathematical meanings that play a central role in the construction of meanings related to the concept of function. The direct/indirect motion is strictly connected to the mathematical meaning of functional relationship and to the possibility of distinguishing the independent variable from the dependent one. The trace activated on one of the two variables is connected to mathematical meanings like subsets of the domain, image of subsets of the domain, and to the graph itself (see below). The relation between the movements of the ticks is related to monotonicity properties and the relation between the speed of these movements is connected to the derivative. The change of direction of the tick representing the dependent variable is related to the presence of a relative maximum/minimum, etc. Obviously, we could go much further by carrying out an accurate analysis that leads to identifying the semiotic potential of the artifact in question. An important application of the use of the trace mark in this study is the construction of the Cartesian graph as the trajectory of the point (x, f (x)), displayed thanks to the activation of the trace, for x varying in the domain of the function. In this way, it is possible to represent both the trajectory of the moving point, and then to visualize the (dynamic) behavior of the function, and the global (and static) graph. We observe that it is possible to design different dynamic representations by varying the position of the two lines representing the domain of variation of x and f (x) respectively. In particular, we play on this possibility to build a sequence of tasks with the aim of promoting students' production of signs, the construction, and the development of mathematical meanings related to functions and their Cartesian graph. The sequence thus unfolds in different dynamic representations of functions. To slim down the narration, we are going to use the following acronyms referring to the different graphs of functions: DGp, DGpp, DGc, and SGc. The first letter stands for the modality in which it has been designed (D: dynamic, S: static), while the lower case letters indicate the number and the position of the axes (p: one horizontal line, pp: two horizontal parallel lines, c: Cartesian plane). In more detail: • DGp is a dynamic graph where, unlike the DynaGraphs described in [34], we bound the two variables on the same line to stress their belonging to the same set of numbers. The dynamic interactive file contains one fixed horizontal line, with two ticks bound to it. 
• DGpp appears like the traditional DynaGraph and works as a DGp, but the two variables are bound to move along two distinct parallel lines.
• The dynamic representation DGc brings us closer to the Cartesian graph of the function. In this representation, the two lines on which the variables move are perpendicular.
• SGc is the well-known Cartesian graph that can be drawn on a piece of paper.

The representations are summarized in Table 1.

Table 1. The four representations of functions used in the sequence (screenshot column omitted).
Acronym | Description
DGp | Dynamic graph with one (horizontal) line. The two ticks representing x and f(x) are on the same line.
DGpp | Dynamic graph with two horizontal parallel lines. The two ticks move along distinct parallel lines.
DGc | Dynamic graph with two perpendicular lines (Cartesian axes) on which the variables move.
SGc | Static graph in the Cartesian plane (the well-known Cartesian graph).

In summary, through a sequence of activities, students construct the SGc of a function as the product of a process of interaction between the DGc and the curve in the Cartesian plane. It is from the relationship that binds together these two elements that the Cartesian graph assumes its meaning as a representation of a function. This interplay can be revealed by considering simultaneously the curve drawn in the plane and the underlying covariation between the two real variables, one depending on the other. In the sequence, we made it possible through the use of the trace mark. In particular, by constructing the point of coordinates (x, f(x)) and by activating the trace on it, as x is dragged along its line, the trajectory of (x, f(x)) remains plotted and visible; then the well-known Cartesian graph can be displayed (Figure 1).

Results: The Didactical Activities

In this section, we present a didactical sequence on the introduction of real-valued functions of a real variable and their graphical representations. The sequence is designed based on the design principles presented above, and it can be proposed to both middle and high school students. Similar activities have been experimented with and analyzed [16-18,38,39] through different theoretical frameworks. Before getting into the heart of some activities, we make some observations about the structure of our sequence, which starts from a DGp and brings us to the construction of an SGc.

The independence of the x variable is realized by the possibility of freely dragging a point bound to a line (the x-Line); the resulting movement visually mediates the variation of the point within a specific domain. The dependence of the f(x) variable, in contrast, is realized by an indirect motion: the dragging of the independent variable along its axis causes the motion of the dependent variable, bound to another line (the f(x)-Line), which cannot be directly dragged. In other words, the experience of dragging and of the two types of motion is related to functional dependency. Moreover, this dependency is then interpreted as invariant under dragging, when exploring different dynamic representations of a function in a DIE [35,36]. This is used to lead students from the production of a language in the context of the artifact to a mathematical language, for example, in the transition from the description of the different motions to the description of the relationship in terms of logical dependency between variables. The relation between the two movements represents, in fact, the covariation between the two variables. For example, if the two variables in the DynaGraph move in the same direction, the function is increasing; if they move in opposite directions, the function is decreasing. A constant function is represented with the independent variable that can be dragged by a direct motion and the other variable standing still. The relations existing between the mathematical properties of a function and the properties of its dynamic representation are many, and they are very deep from a mathematical point of view; see, for example, [37].

In the following activities, students are given some files with dynamic representations of some functions. The main task is: "Explore the construction, identify and describe possible movements by using the dragging tool." Specific requests are then made in some tasks to focus students' exploration on specific aspects of functions and specific properties of the representations.
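For readers unfamiliar with DIEs, the following is a minimal sketch of how a DGpp-style dynamic graph with a trace mark could be prototyped. This is purely illustrative: the activities above use a dedicated DIE, not Python, and the function f, the slider (standing in for the dragging of x), and all names below are our own assumptions.

```python
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

def f(x):
    return x**2 / 4 - 2  # an arbitrary example function

fig, ax = plt.subplots(figsize=(8, 3))
ax.set_xlim(-4, 4)
ax.set_ylim(-1, 1)
ax.set_yticks([])
ax.axhline(0.5, color="gray", lw=1)    # the x-line (independent variable)
ax.axhline(-0.5, color="gray", lw=1)   # the f(x)-line (dependent variable)

x_tick, = ax.plot([0], [0.5], "bv", markersize=12)        # directly draggable tick
fx_tick, = ax.plot([f(0)], [-0.5], "r^", markersize=12)   # indirectly moved tick

slider_ax = fig.add_axes([0.15, 0.02, 0.7, 0.04])
x_slider = Slider(slider_ax, "x", -4, 4, valinit=0.0)     # stands in for dragging

def update(x):
    x_tick.set_xdata([x])       # direct motion: x follows the "drag"
    fx_tick.set_xdata([f(x)])   # indirect motion: f(x) is bound to x
    # trace mark: leave a dot at every position visited by f(x)
    ax.plot([f(x)], [-0.5], "r.", alpha=0.3, markersize=4)
    fig.canvas.draw_idle()

x_slider.on_changed(update)
plt.show()
```

Dragging the slider back and forth makes the covariation visible: the red tick changes direction where the example function has its minimum, which is exactly the kind of behavior the activities below ask students to notice and describe.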
To simplify the presentation, we group the activities into three groups: activities with dynamic representations; activities for the construction of the Cartesian graph; activities for the transitions between different representations.

Activities with Dynamic Representations

Activity 1
AIMS: To create a situation in which students can experience the functional dependency and construct a language for describing variables and dependency.
TASK: Explore the construction, identify and describe possible movements by using the dragging tool, and write down your own observations on a sheet of paper.
DESCRIPTION: Students can observe and act on pre-designed interactive files with the DGp of different functions, and they are asked to work alone or in pairs. The functional relationship is experienced through the possibility of direct and indirect action. The DGp has one unnumbered line on which only the numbers 0 and 1 are marked. The unnumbered line allows students to focus their attention on the movements of the ticks and not on the numbers where they stop. A description is expected in terms of the distinction between the two variables, by assigning them different names, and in terms of the reciprocal movements, hence the covariation.

Activity 2
AIMS: To motivate the choice of introducing a second line of real numbers and a new representation, the DGpp.
TASK: Explore the construction, identify and describe possible movements by using the dragging tool, and write down your own observations on a sheet of paper.
DESCRIPTION: Students are asked to work alone or in pairs on a pre-designed interactive file with the DGp of the function f(x) = |x| (or other functions such that f(x) = x in some intervals). This particular function can promote an inner conflict for students when they investigate the movement of the two variables for positive values of the independent one, since for these values the two ticks are perfectly overlapped and they move together. As shown in Figure 2, in the DGp it is not possible to know whether there are two overlapped ticks or whether the tick representing the dependent variable is missing. This specific case shows the gains of having the two lines with the variables separated (Figure 3), which is relevant for laying the foundation for the construction of the Cartesian plane, where the domain and the codomain of the function are presented separately from one another.
Explain your answers.
DESCRIPTION: Students work alone or in pairs on pre-designed interactive files with the DGpp of functions that are not defined for all real numbers (e.g., f(x) = √(x + 3) − 2). The use of numbered axes makes it possible to give a more precise description of the range in which the dependent and the independent variable can move. Moreover, by activating, for example, the trace on the dependent variable, the set of images is displayed when the independent variable is dragged (Figure 4).

Activities for the Construction of the Cartesian Graph

The construction of the Cartesian graph of a function involves the construction of a DynaGraph with perpendicular axes and of the trajectory of (x, f(x)). In particular, starting from the DGpp, and in order to introduce the second dimension to construct a DGc, it is necessary to rotate the line containing the dependent variable. In this way, we obtain the Cartesian axes on which the two ticks are bound to move. The functional dependence between the two variables is still represented by the relation between direct and indirect motion, but they have two different directions: the tick on the abscissa axis is directly draggable, while the tick on the ordinate axis is indirectly draggable.

Activity 5
AIMS: To work on the covariation of the two variables in two dimensions.
TASK: Explore the construction, identify and describe possible movements by using the dragging tool, and write down your own observations on a sheet of paper.
DESCRIPTION: Students can observe and act on pre-designed interactive files with the DGc of different functions. They verbalize their experience by using new words that are possibly connected to their previous activities with the other representations.

Activity 6
AIMS: To relate the movements of the two ticks, along the Cartesian axes, to the movement of the point (x, f(x)) and to its trajectory.
TASK: Students are asked to explore the DGc of a function, by dragging the ticks bound to the Cartesian axes, and they are asked to "Imagine and draw on a sheet of paper the trajectory of the point (x, f(x))."
DESCRIPTION: This is a key step of the didactical sequence that leads to the construction of the graph of the function in the Cartesian plane. Before showing an SGc, we promote students' exploration of the DGc, and we ask them to imagine, and then to draw on a sheet of paper, the trajectory of the point (x, f(x)). We observe that this point is not visualized on the computer screen, so this kind of task requires paying attention to the variations of both variables at the same time and then sketching a curve that synthesizes this covariation. Imagining and plotting the trajectory of the point (x, f(x)) from the observation of the movements of the two ticks (representing x and f(x)) in the DGc fosters familiarity with the behavior of the function resulting from the covariation of the variables. In this activity, students are asked to draw the curve on a sheet of paper and then to verify it by using the tools offered by the DIE. In particular, as shown in Figure 1b, it is possible to build the point (x, f(x)) as the intersection point of the perpendicular line to the x-axis passing through (x, 0) and the perpendicular line to the y-axis passing through (0, f(x)). Then, by dragging x, it is possible to see on the screen how (x, f(x)) moves in relation to the movements of x. Finally, by activating the trace mark on this point and dragging the independent variable, it is possible to obtain the image of the trajectory followed by (x, f(x)) (see Figure 1c). The curve that forms on the screen is the graph of the function in the Cartesian plane.

Activities for the Transitions between Different Representations

The following activities are designed to let students work with different graphs of the same function and, in particular, to support the transitions between dynamic and static representations of functions.

Activity 7
AIMS: To promote processes for constructing the Cartesian graph of a function from the observation of the movements of the two variables in a dynamic representation.
TASK: Students are asked to explore the DGpp of a function, by dragging, and they have to "Draw on a sheet of paper the trajectory of the point (x, f(x)) in a Cartesian plane."
DESCRIPTION: Students can observe and act on pre-designed interactive files with the DGpp of different functions. By observing how f(x) varies depending on x-movements, they have to draw the Cartesian graph of the function on a sheet of paper. A game that can be played by students in pairs consists of giving only one student access to the DGpp of a function and asking him/her to describe the movements (in words) to the other student, who has to draw the Cartesian graph on paper.

Activity 8
AIMS: To promote covariational reasoning in a static function representation, and to support the construction and the interpretation of the behavior of a function and of the movements of the variables from the observation of a Cartesian graph.
TASK: Students are asked to match static representations to dynamic representations.
DESCRIPTION: Students work in pairs. One of the students sees the Cartesian graphs of some functions, drawn on a sheet of paper; the other student works with some dynamic representations (DGpp or DGc) of the same functions. Neither student has access to the representations seen by the other. Their goal is to match the two sets of representations. The student who sees the static graphs has to describe the movements of the two variables, and the other student has to identify the corresponding DynaGraph (and vice versa).

Discussion and Conclusions

The tasks presented in this paper are prototypes of activities of which several variants can be designed. The design of these tasks is carried out based on design principles, and their implementation in a classroom is essentially based on methodological principles. The design principles have been formulated on the basis of well-established theories in mathematics education that offer insights into how to promote cognitive learning processes. The epistemological analysis of knowledge, especially concerning real-valued functions of a real variable, led us to focus on functional dependence and on covariation [12,28,29] as the main mathematical meanings which ground the didactical sequence. From a Vygotskian perspective and, in particular, in light of the Theory of Semiotic Mediation [24], we have identified an artifact having the semiotic potential for embedding mathematical meanings of covariation and, at the same time, allowing the design of tasks aimed at promoting fundamental processes for managing and interpreting graphs of functions. Generally speaking, the Theory of Semiotic Mediation provides activities to promote different types of signs [24]:
• Activities with artifacts, in which students produce specific signs that are linked to the use of specific artifacts and are then called artifact signs.
• Individual production of signs. Asking students to discuss, to write down their observations, and to describe the activity is meant to promote students' production of signs.
• Collective production of signs. Through the Mathematical Discussion [25], signs are shared, and through the orchestration of the teacher, the signs evolve into mathematical signs. During this phase, definitions can emerge (under the teacher's guidance) as verbal representations of specific properties that students may have already observed and that are associated with specific signs.

The activities that we have presented in this paper are designed to promote the production of specific signs with the use of artifacts, the reflection on these signs, and the collective evolution of these signs towards mathematical signs. In the following, we briefly outline an a priori analysis of the production of artifact signs, referring in part also to the results of similar didactical activities that have been experimented with and analyzed through different theoretical frameworks, for example, [16-18].

The first activities with the artifact have the goal of making students realize that both ticks move (apart from the case of constant functions), but only one of them can be directly dragged. This asymmetry characterizing the situation, and the request to write down their observations or, more generally, to communicate with someone who was not looking at the computer screen, can prompt the students to look for a suitable language to distinguish the two ticks when they are asked to describe the movements.
We observe that the linguistic distinction of the two ticks is the first fundamental step in the formation of the meaning of dependent and independent variables and of functional relationship. We expect that the language used by students initially will appear to be linked to everyday language, with descriptions recalling spatial and temporal references of the position and speed of the two ticks, with recurrent terms (artifact signs) such as "move", "right/left/up/down", "when", "before/after". The dependence relation can be effectively expressed by students thanks to the difference between direct and indirect motion. In [18], the author, in describing the language used by the students in activities with DynaGraphs, reports several expressions similar to the following: "They move both [the ticks], we move just one of them"; "One [tick] does not move with the mouse, but moves when I move the other". The expressions "point that I can move" and "point that I can move by dragging another point" are artifact signs that should evolve towards the mathematical signs "independent variable" and "dependent variable". These artifact signs are part of a whole web of signs referring to movement that allow the construction of meanings that can evolve into mathematical meanings related to different properties of functions. Some examples of more articulated expressions used by students to describe dynamic graphs are reported in [16]:
• "They [the ticks] move both because, that is, with respect to the two fixed points that are zero and one, by moving maybe B to the right, A moves to the left and then it goes below zero and by moving B to the left A goes to the right"
• "The two ticks move simultaneously along the line in such a way that, moving in the opposite direction, they are symmetrical with respect to their meeting point"

In these cases, the artifact signs refer to the relationship between the movements of the two ticks, expressed in terms of the relationship between the directions of movement. This artifact sign is related to the mathematical sign "decreasing function", and it has a dual role, from cognitive and didactical points of view: in the genesis of the meaning of "decreasing function", and in the construction of the cognitive processes underlying the interpretation of the graphs of functions, i.e., in the "dynamic reading" of a Cartesian graph. All these considerations could be extended to other artifact signs emerging during the implementation of the activities, for example, linguistic expressions describing the speed of the ticks (evolving into the mathematical sign of "derivative"), the change of direction of the tick representing the dependent variable (evolving into the mathematical sign of "local maximum/minimum"), and so on (see [37]).

In summary, the activities with the different representations (DGp, DGpp, DGc, SGc) and the activities involving a transition between two or more representations are aimed at making students produce artifact signs that present both similarities and differences and that constitute a semiotic chain evolving towards mathematical signs. As written in [24] (p. 778): "The construction of semiotic chains constitutes one of the goals of teacher's interventions. [...] reaching a mathematical definition does not only mean the production of a mathematically correct statement, but also the construction of a web of semiotic relationships supporting the construction of the corresponding mathematical concept.
The construction of this web allows one to freely use artifact signs far beyond the definition of mathematical signs, without losing the generality requested by a mathematical discourse, or to come back to such signs whenever their evocative power could be useful."

Finally, in this paper, we focused on the didactical sequence of tasks designed to promote this web of semiotic relationships around the mathematical meanings underlying functions and their representations, considering these meanings to be essential from both cognitive and didactical points of view. Obviously, the didactical sequence is not complete without the fundamental intervention of the teacher, aimed at organizing the mathematical knowledge within a theory.

Conflicts of Interest: The authors declare no conflict of interest.
Rare Loot Box Rewards Trigger Larger Arousal and Reward Responses, and Greater Urge to Open More Loot Boxes

Loot boxes are a purchasable video-game feature consisting of randomly determined, in-game virtual items. Due to their chance-based nature, there is much debate as to whether they constitute a form of gambling. We sought to address this issue by examining whether players treat virtual loot box rewards in a way that parallels established reward reactivity for monetary rewards in slots play. Across two sets of experiments, we show that loot boxes containing rarer items are more valuable, arousing, rewarding and urge-inducing to players, similar to the way slots gamblers treat rare large wins in slots play. Importantly, we show in Experiment 2 that the durations of Post Reinforcement Pauses, an index of reward reactivity, are longer for boxes with rarer items. Boxes containing rarer rewards also trigger larger Skin Conductance Responses and larger force responses, both indices of positive arousal. Findings of Experiment 2 also revealed that there was an increase in anticipatory arousal prior to the reveal of loot box rewards. Collectively, our results elucidate the structural similarities between loot boxes and specific gambling games. The fact that players find rarer game items hedonically rewarding and motivating has implications for potential risky or excessive loot box use for some players.

Introduction

The incorporation of chance-based microtransactions (i.e., in-game purchases) in video-games has sparked concern over the potential connection between video-games and gambling. Much of this concern is centred around the incorporation of loot boxes (a form of chance-based microtransaction) into games (King and Delfabbro 2018). Loot boxes are purchasable virtual boxes comprised of randomly determined in-game virtual items that vary in value based on their rarity in the game. Recent research has established a link between problem gambling severity and expenditure related to loot boxes specifically, arguing that loot box use within games may act as a 'gateway to gambling' (Zendle and Cairns 2018, 2019). Although researchers contend that there are parallels between loot box purchases and gambling, little is known about how players hedonically and motivationally respond to these types of rewards at the psychological, physiological and behavioural level. Specifically, in the gambling literature, research has demonstrated that physiological arousal triggered during gameplay is the primary reinforcer of gambling behaviour and is tightly linked to one's urge to gamble (Brown 1986; Baudinet and Blaszczynski 2013). However, unlike in traditional gambling situations, the rewards in loot boxes are non-monetary in nature. To elucidate the impact of loot box use on reward processing and motivation in players, the present research examines how avid players of a game containing loot boxes psychologically value these rewards, and further, how such rewards influence players' arousal and hence craving (i.e., urge) to open more loot boxes.

Structural Similarities Between Loot Boxes and Slot Machines

Researchers have often compared loot boxes with slot machines, given that they both operate on a variable-ratio reinforcement schedule, which is known to elicit a pattern of reinforced/repeated behaviours (Haw 2008). Indeed, the specific contents of any given loot box are unknown to the player, in that the items are randomly determined.
A key difference between loot boxes and slots is that the items within loot boxes are valuable solely within the confines of the game. The appeal of loot boxes lies in the chance to obtain rare items that a player may wish to procure; the rarer the item, the more it appears to be valued by players. The chance-determined content of loot boxes is similar to the unpredictable nature of outcomes in slot machines. In slots, losses are the most common, small wins are less common, and large wins are exceedingly rare. In general, just as different slots outcomes are associated with varying monetary values that correlate with their rarity, loot boxes too contain items whose worth to the player may depend on their rarity. One of the goals of the current research will be to confirm that players do indeed find rarer items subjectively more valuable.

Rewards, Arousal and Urge in the Context of Gambling

Crucially, the allure of both slots games and loot box events within video-games likely involves the different arousal signatures for these various types of outcomes. Importantly, physiological arousal (e.g., triggered by gambling wins) is associated with both the onset and maintenance of gambling behaviours and has been shown to promote the urge to gamble (Baudinet and Blaszczynski 2013). Skin conductance responses (SCRs), which measure sweat gland activity, are a well-established indicator of physiological arousal (Sharpe et al. 1995; Dixon et al. 2011; Dixon et al. 2013). Wins in slots play provoke increases in physiological arousal that are titrated to the size of the win. That is, as wins get progressively larger, so too do skin conductance response magnitudes (Dixon et al. 2013). If loot boxes mimic slots outcomes, then loot boxes of varying rarity would be expected to replicate this pattern, with loot boxes containing more common (lower valued) items inducing only small amounts of arousal, and loot boxes containing rarer (higher valued) items inducing commensurately higher arousal, and hence larger SCRs.

In addition to SCRs, physiological arousal during slots play can be measured using the force one exerts on the spin button. Dixon et al. (2015) demonstrated that slots players would exert greater force on the spin button to initiate the next spin following large wins. Additionally, the force applied following wins was titrated to the win size. Dixon et al. (2015, 2018a) interpreted this relation between force and win size as attributable to arousal and showed that this force measure was even more sensitive to win size than SCRs. Hence, if players had to press a mouse to continue to see more loot-box openings, we would expect that the force exerted on the mouse would be titrated to the rarity of the items in the loot box that was just viewed.

Post-reinforcement pauses (PRPs) are another means of gauging the reward value of slots outcomes (Dixon et al. 2013). PRPs are a measure of the length of time between the outcome delivery and the initiation of the next spin (Dixon et al. 2013). In slots, when players spin and lose, they tend to initiate the next spin right away. When they spin and win, they tend to pause before spinning again. As mentioned, the length of this post-reinforcement pause tends to be titrated to the size of the win: the bigger the win, the longer the pause. Players appear to pause to internally celebrate the rewarding events, which exerts a momentary inhibition of further reward-seeking behaviour (Delfabbro and Winefield 1999; Dixon et al. 2013).
Therefore, we expect that loot box users would demonstrate similar PRPs in response to more valuable loot boxes.

In addition to arousal effects triggered after the outcomes are revealed, loot boxes may also trigger arousal prior to the outcome. Increased arousal is highly associated with anticipation of risk, but importantly also with reward (Critchley et al. 2001). In games like Overwatch, when a loot box is obtained, there is a brief anticipatory period in the moments leading up to the reveal of the items. Animations show the loot box shaking for a period of approximately 2 s prior to showing the items exploding out of the box. Hence, arousal might be expected to increase even before the reveal of the specific loot items. Thus, loot boxes may be particularly alluring outcomes because they may trigger a buildup of arousal prior to the outcome, followed by a further increase in arousal if the items revealed are ones coveted by the player. Hence, both the anticipation of, and the experience of, reward linked to rare events (large wins, rare loot-box items) likely plays a critical role in the subjective and physiological experiences of both slot machine players and loot box users.

Additionally, a number of studies have shown that different types of outcomes promote the urge to keep playing in a gambling context, a phenomenon likely mediated by this combined effect of arousal triggered before and after reward delivery. For example, in both scratch cards and slot machines, if urge to keep playing is assessed following an outcome, urge tends to be higher following a win and lower following a loss (Stange et al. 2016, 2017a; Clark et al. 2012). Hence, rarer, more valuable loots are expected to induce greater urge to open additional loot boxes versus more common and less valuable loots. As urge plays an integral role in problem gambling behaviours, demonstrating the urge-inducing properties of loot boxes would further fortify the notion of an existing relationship between loot boxes, their problematic use and gambling.

Current Study

Overall, the current research seeks to determine whether loot box users for a particular game treat loot boxes of varying values in ways that are similar to the way slot machine gamblers treat varying sizes of wins in slots play. We chose the game Overwatch as our central focus, as it is considered to be one of the most popular games containing loot boxes among young adults (Guskin 2018). Across two studies, we expect participants to rate loot boxes containing rarer items as being more subjectively valuable to them. We also expect loot boxes with rarer items to be more arousing, positively valenced, rewarding, and more inducing of urge to open another box. Understanding players' reward reactivity in response to loot boxes of varying value will aid in determining whether loot boxes elicit arousing and urge-inducing responses, which are heavily implicated in the development of problematic behaviours in gamblers. If we can show that loot box rewards are treated in much the same way that monetary outcomes are treated in slots play, it would underscore that both reward structures may lead to similar reward processing and motivational effects. In general, slots are known to lead to problematic gambling for some players, and hence are highly regulated.
Therefore, showing that loot boxes are responded to similarly to slots outcomes would speak to the question of whether loot boxes constitute a form of gambling and are in need of regulation.

Overview of Experiment 1

To our knowledge, this is the first experiment of its kind to directly observe game player responses to loot box rewards. We first aimed to confirm whether players who are familiar with loot boxes in the game Overwatch systematically categorize the value of the items in loot boxes based on the rarity of the items within the box. Although intuitively it would seem that this should be the case, it is important to demonstrate this relation, since other factors could potentially be at play: some loot boxes may be valued if they contain common items that nonetheless have a personal relevance to a particular player based on the character they typically play. To demonstrate that there was indeed a systematic relation between rarity and perceived value, we assessed the correlation between the net worth of the in-game items contained in a box, as calculated using in-game currency, and participants' subjective ratings of that loot box's value. Since the in-game currency measure is determined by the rarity of the items, if players do systematically value loot boxes containing rarer items more than loot boxes with less rare items, then we expect the subjective ratings of loot box value to track with its assigned objective value.

The second aim of Experiment 1 was to determine whether loot boxes of greater objective and subjective value would yield higher ratings for arousal, positive valence, and urge to open another box. Overall, we expected players to find loot boxes of greater objective and subjective value more arousing, more positively valenced, and, importantly, more inducing of urge to open another loot box.

Participants

We recruited a total of 57 participants from two pools of students at the University of Waterloo. Twenty-eight participants were recruited from a pool of students voluntarily participating in psychology studies for credit. The remaining 29 student participants were recruited from poster advertisements across the University of Waterloo campus. In order to participate, students were required to have played the game Overwatch at least once in the past 4 weeks, as well as to have opened a loot box within Overwatch at least once in the past 4 weeks. Participants were compensated $5 for their time. We excluded 10 participants due to incomplete data or failed attention checks. This left us with a final sample of 47 participants.

Loot Box Stimuli

Participants viewed 49 videos of actual Overwatch loot box openings. Each video was a total of 15 s in length. In the first 2 s of each video, the loot box would appear to shake; at about 2 s the loot box would release four coloured coins, representing each item, into the air. The full reveal of the four loot box items occurred 5 s into the video (see Fig. 1). The video trial presentations appeared in randomized order for all participants.

[Figure 1 caption: Depiction of the loot box video event. The loot box opening begins at 0 s, the coin reveal begins at 2 s, and the full item reveal at 5 s. Colours associated with each item are visible to players during both the coin reveal and item reveal periods.]

To calculate the objective value of each loot box, we used the cumulative worth of all the items based on their individual credit worth in the game. Individual items belong to one of four possible classes based on their rarity in the game, signified by the colour of the coin shown during the Coin Reveal. The coins then become the platform beneath each item once the items are revealed during the Item Reveal phase. The four classes, in order of increasing rarity, are as follows: common, rare, epic and legendary. However, by the game's standards, each loot box guarantees at least one 'rare' tier item, allowing for the classification of boxes into three categories (Carpenter 2017). These 'rare', 'epic' and 'legendary' demarcations are used by the game designers and are familiar to avid players. Thus, we used the same classification system to categorize our stimulus set. Each class corresponds to a particular value of in-game credits. Furthermore, each class is associated with a specific colour highlighting the in-game value of the received items to players (see Table 1).

Table 1. Loot box reward tiers in the stimulus set.
Tier | Box contents | Number of boxes | Objective value range
Rare | Box contains at least one "Blue" (rare) item and no epic or legendary items | 29 | 150-225 credits
Epic | Box contains at least one "Magenta" (epic) item and no legendary items | 15 | 325-500 credits
Legendary | Box contains at least one "Gold" item | 5 | 1075-1325 credits

For instance, the "rare" tier consisted of boxes containing at least one "blue" item and no epic (i.e., magenta) or legendary (i.e., gold) items, conferring a value range of 150 to 225 credits (a full list of the objective loot box values in our stimulus set can be found in the supplementary materials). The finalized stimulus set consisted of 29 rare boxes, 15 epic boxes and 5 legendary boxes. These frequencies correspond with the actual probabilities of loot boxes of these values in the game, given that the stimuli were derived from a player's single loot box opening session.
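As a concrete illustration of this scoring scheme, the Python sketch below derives a box's tier and objective value from its four items. The tier logic follows Table 1 (a box is labelled by its rarest item); the per-item credit values in ITEM_CREDITS are illustrative placeholders rather than the game's exact values, except that the text above states that a single item can be worth up to 1000 credits.

```python
# Illustrative per-item credit values (placeholders, not the game's exact values)
ITEM_CREDITS = {"common": 25, "rare": 75, "epic": 250, "legendary": 1000}
RARITY_ORDER = ["common", "rare", "epic", "legendary"]

def box_value(items):
    """Objective worth of a box: the cumulative credit worth of its items."""
    return sum(ITEM_CREDITS[item] for item in items)

def box_tier(items):
    """A box is labelled by the rarest item it contains. Because every box
    is guaranteed at least one rare-class item, the possible tiers are
    rare, epic and legendary."""
    rarest = max(items, key=RARITY_ORDER.index)
    return "rare" if rarest == "common" else rarest

# Example: a box with one legendary item falls into the legendary tier
box = ["common", "common", "rare", "legendary"]
print(box_tier(box), box_value(box))  # -> legendary 1125
```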
Subjective Ratings of Arousal and Valence

Subjective ratings of arousal and valence were measured using Self-Assessment Manikins (SAM; Bradley and Lang 1994). Ratings of arousal and valence were collected for each loot box event. Participants indicated their current emotional state from a range of five manikins, each depicting an image representing varying levels of arousal and positive/negative valence (see Fig. 2).

Subjective Ratings of Urge to Open Another Loot Box

Urge to open another loot box was measured using a 100-point line scale, with 0 representing no urge and 100 representing high urge. Participants rated their urge after each loot box event via a mouse click along the line in response to an item that read 'Using the scale below (0-100), please rate your level of urge to open another loot box'.

Loot Box Subjective Value

In order to gauge subjective value, participants were asked to indicate the number of in-game credits they would be willing to spend on the loot box using a number line. Loot box credits ranged from 0 credits (no value) to 4000 credits (high value). This 4000-point scale was used because this was the maximum amount obtainable, given that each box contained 4 items, with each item being worth up to 1000 credits. Participants were also asked to indicate the box's subjective worth on a scale from 1 (no worth) to 16 (high worth). Due to issues of multicollinearity of this measure with subjective value, we have omitted the measure of subjective worth from further analysis.

Procedure

This experiment was conducted using the online survey platform Qualtrics. Participants recruited from the university's online research pool were redirected to the experiment and immediately granted half a credit toward a course of their choosing upon completion. Participants recruited from the poster ads were asked to email the researchers for a link to the survey.
Upon completing the online consent form, participants immediately began the experiment phase. Participants were presented with the randomized set of 49 loot box opening videos, each of them followed by the subjective survey battery. They were required to watch each video from start to finish before proceeding to the subjective surveys. Each subjective survey set contained a photographic depiction of the loot box outcome from the most recently opened loot box, as well as questions regarding the participant's level of arousal, subjective valence, and urge to open another box. Participants also indicated how much they were willing to spend for the items in each box. Participants completed the same survey set for all 49 videos and were then debriefed.

Data Reduction and Analysis Strategy

Out of the 57 participants recruited, only 47 had valid data for all trials and had passed the attention check. Outliers were removed using the Van Selst and Jolicoeur (1994) trimming procedure. Subjective responses were analyzed by comparing the different tiers of the loot boxes to which participants were exposed. For all measures, loot outcomes that fell into their respective tiers were trimmed for outliers and then averaged. Using 'arousal' as an example, an outlier-free average was calculated for the 29 'rare' loot boxes, another outlier-free average was calculated for the 'epic' loot boxes, and a third outlier-free average was calculated for the 'legendary' loot boxes. These averages from each participant were used as input data for a repeated measures analysis of variance (ANOVA) with tier (rare, epic, legendary) as the repeated factor. Any significant main effects were analyzed using Fisher's Least Significant Difference (LSD) post hoc comparisons. Greenhouse-Geisser corrections were used when violations of sphericity occurred.
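The analysis pipeline just described can be sketched in Python as follows. This is a hedged illustration: trimmed_mean uses a fixed 2.5 SD cut-off as a simplified stand-in for the Van Selst and Jolicoeur (1994) moving-criterion procedure, `ratings` is a hypothetical long-format DataFrame, and statsmodels' AnovaRM does not itself provide Greenhouse-Geisser corrections or Fisher's LSD tests, which would require additional code or a package such as pingouin.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def trimmed_mean(values, criterion=2.5):
    """Simplified stand-in for the Van Selst & Jolicoeur (1994) procedure:
    the original adjusts the SD criterion to the number of observations,
    whereas this sketch uses a fixed 2.5 SD cut-off."""
    v = np.asarray(values, dtype=float)
    keep = np.abs(v - v.mean()) <= criterion * v.std(ddof=1)
    return v[keep].mean()

def tier_anova(ratings: pd.DataFrame, dv: str = "arousal"):
    """ratings: long format, one row per participant x loot box, with
    columns 'subject', 'tier' and the rating of interest (dv)."""
    # one outlier-free average per participant and tier, as described above
    cell_means = (ratings.groupby(["subject", "tier"])[dv]
                         .apply(trimmed_mean)
                         .reset_index())
    # repeated measures ANOVA with tier as the within-subject factor
    return AnovaRM(cell_means, depvar=dv, subject="subject",
                   within=["tier"]).fit()
```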
Objective and Subjective Loot Box Values-Validity Check

The 49 loot boxes ranged in objective value from 150 game credits to 1325 game credits. Some loot boxes had the same objective value. For instance, there were 19 loot boxes objectively valued at 150 credits, 2 loot boxes worth 175 credits and 7 loot boxes worth 200 credits. In total there were 14 unique values of loot boxes (150, 175, 200, 225, 325, 350, 375, 400, 425, 450, 500, 1075, 1125, and 1325). To determine whether these objective loot box values correlated with participants' subjective ratings of value, we tabulated the subjective ratings of value for the 14 aforementioned loot box values (e.g., an average subjective value was calculated for the 19 loot boxes objectively valued at 150 credits and used for this data point). Participants' subjective ratings were then averaged, culminating in 14 average ratings, one for each objective loot box value. There was a strong, positive association between the objective and subjective values of the loot boxes measured in credits, r(13) = .962, p ≤ .001. This correlation demonstrates that these frequent players would pay more in-game currency for loots that are objectively worth more according to the net worth of the loot boxes. Moreover, Pearson's r correlations also showed that the subjective ratings were strongly and positively correlated with arousal (r(13) = .959, p ≤ .001), positive valence (r(13) = .938, p ≤ .001), and urge (r(13) = .905, p ≤ .001) to continue opening loot boxes. Objective ratings were also positively correlated with arousal, valence and urge. This pattern of results shows that more valuable loot boxes are deemed more arousing, more positively valenced and more inducing of the urge to open loot boxes.

There was a consistent pattern of ratings for reward value when observing average ratings of loots across the reward tiers. Specifically, there was a main effect of credit value across the three different reward tiers, F(1.17, 54.82) = 59.53, p ≤ .001, ηp² = .564. As predicted, legendary loots (M = 987.15, SD = 807.85) were deemed the most valuable compared to loots that fell into the epic (M = 347.06, SD = 509.58; p ≤ .001) and rare categories (M = 188.56, SD = 360.89; p ≤ .001). Moreover, rare tier loots were the least valued and were rated lower in comparison to epic loots (p ≤ .001). Here we provide evidence that players are indeed determining the value of the loots based on the rarity of the items depicted by the game, as opposed to other idiosyncratic factors such as how well an item would suit a given player's personal avatar (see Fig. 3).

Subjective Ratings of Arousal, Valence and Urge

Figure 3 also displays the average ratings for arousal, valence, and urge across the three reward tiers of loot boxes. In terms of arousal, a repeated measures ANOVA with a Greenhouse-Geisser correction revealed that there was a significant main effect of arousal across the three reward tiers, F(1.48, 68.19) = 125.28, p ≤ .001, ηp² = .731. As expected, Fisher's LSD comparisons demonstrated that loot boxes containing at least one legendary item (M = 3.39, SD = 1.04) showed the greatest arousal scores compared to epic loot boxes (M = 2.21, SD = .753; p ≤ .001) and rare loot boxes (M = 1.46, SD = .487; p ≤ .001). Additionally, epic loot boxes were deemed more arousing to players compared to rare loot boxes (p ≤ .001). Finally, a repeated measures ANOVA with a Greenhouse-Geisser correction revealed a significant main effect of urge across the reward tiers, F(1.22, 56.50) = 23.89, p ≤ .001, ηp² = .349. As expected, Fisher's LSD indicated that urge ratings were highest after legendary tier loot boxes and lowest after rare tier loot boxes (see Fig. 3).

In summary, in Experiment 1 we distributed an online survey containing 49 loot box videos and examined Overwatch players' subjective ratings of value, arousal, valence and urge to open another box for each video. Players subjectively valued most those loot boxes with the highest objective worth (e.g., those that contained at least one of the most uncommon 'legendary' items) compared to loots that were objectively worth less (e.g., those containing more common items falling into the 'rare' and 'epic' tiers) (Fig. 3). Moreover, players also gave larger ratings of arousal, valence and urge as the reward value of the loot box increased (see Fig. 3).

Overview of Experiment 2

We showed in Experiment 1 that players systematically categorized valuable and non-valuable loots based on the increasing rarity, and hence increasing objective value, of the items in a loot box. We also showed that loots of increasing rarity were associated with greater arousal, were more positively valenced and were more urge inducing. In Experiment 2, we sought to replicate this pattern of subjective responses of value, arousal, valence and urge. Since loots containing common items (those in the 'rare' tier) were the most negatively valenced, a measure of disappointment was introduced as a more nuanced assessment of negative valence. We therefore expect that more common items acquired in loot boxes should produce higher ratings of disappointment (van Dijk 1999).
Additionally, we sought to determine whether these subjective ratings converge with prominent measures of hedonic reward and arousal. Specifically, we examined whether the most uncommon items were hedonically the most rewarding, as indexed by post-reinforcement pauses (PRPs). We predicted that loots containing exceedingly uncommon items (e.g., items in the legendary reward tier) would be more rewarding and hence produce longer PRPs. We also examined whether video game players found these more valuable loots to be more arousing events, as indexed by skin conductance responses (SCRs) and force responses. If so, the opening of valuable boxes should trigger larger SCRs, and such reward-related arousal should manifest in harder mouse button presses as players press to continue the experiment and view more boxes. Specifically, then, both SCRs and force on the mouse button should rise with the rarity of the loot box just opened. Arousal should not only follow the opening of loot boxes but should also heighten in the moments just before a loot box is opened. During this anticipation phase, when players might or might not see uncommon (and hence valuable) items, we would expect a rise in anticipatory arousal, quantified by increases in skin conductance levels (SCLs).

Participants

A total of 46 avid Overwatch players from two participant pools were recruited to participate in the experiment. Of those 46, 37 participants were recruited through the University of Waterloo's SONA system for partial course credit. The remaining 9 participants were recruited from posters advertising the experiment placed around the University of Waterloo campus and received $10 as financial remuneration for their time. Eligibility requirements were identical to Experiment 1.

Loot Box Stimuli

We employed the same battery (n = 49) of video stimuli used in Experiment 1, plus three additional "practice" loot box videos used for familiarizing participants with the experimental protocol (the latter were not analyzed).

Post-Reinforcement Pauses

Participants were not required to watch each video in its entirety. Rather, they could click on a modified mouse to advance to the next stage of the experiment (the answering of subjective questions about the video they had just seen). Post-Reinforcement Pauses (PRPs) were based on how long players waited before clicking this modified mouse. Concretely, PRPs were measured as the time in seconds between the reveal of the coloured coins and when participants clicked the modified mouse.

Skin Conductance Responses

Skin Conductance Responses (SCRs) were recorded via two electrode plates (MLT118F GSR Finger Electrodes) attached to the index and middle fingers of the participant's non-dominant hand. The electrodes were fed into a Powerlab (model 4/30), which amplified the signal and converted the analog signal to a digital recording of participants' physiological responses.

Force

Force was quantified as the amount of pressure (mV) imparted on the modified mouse when the participant made the press response to initiate the subjective surveys following the loot box video.

Subjective Value, Arousal, Valence and Urge to Open Another Box

Items measuring subjective value, arousal, valence and urge were identical to those used in Experiment 1.

Subjective Ratings of Disappointment

Disappointment was measured using a 100-point line scale, with 0 representing no disappointment and 100 representing high disappointment.
Participants rated their disappointment after each loot box event via a mouse click along the line.

Design

The experiment utilized a within-subjects design: after viewing 3 practice loot box videos and answering subjective questions about these videos, participants were presented with 49 experimental loot box trials. Each participant viewed all 49 boxes, which consisted of outcomes ranging in value from 150 credits to 1325 credits.

Procedure

After informed consent was provided, participants completed a demographic questionnaire using Qualtrics software on a PC computer for reasons peripheral to the current research. Upon completion, participants were instructed to face a separate Macintosh computer where the loot box trials took place. The researcher attached two electrodes to the middle and index fingertips of the participant's non-dominant hand. The researcher instructed the participant to keep the hand attached to the electrodes as still as possible throughout this phase of the experiment. Participants were then instructed to view each loot box video and to click the modified mouse when they were ready to move on to the subjective measures pertaining to the video they had just seen. Upon completion of these subjective questions, a new loot box video appeared. Participants were told that the first three loot boxes would be practice and, as such, that they could ask questions for clarification before moving on to the experimental trials. Once participants had viewed and completed the questionnaires for all 49 experimental trials, the experiment concluded.

Experiment 2 Results and Discussion

Out of the 46 participants recruited, only 40 had valid data for all physiological and subjective measures. One participant was excluded for clicking the mouse (i.e., advancing to the subjective questions) prior to the reveal of the coins for 8 or more videos (15% of trials). Five participants were excluded for pressing the modified mouse button so softly that it failed to be recorded.

Data Reduction and Analysis Strategy

Both physiological and subjective measures were subjected to outlier analyses. Outliers were determined using the Van Selst and Jolicoeur (1994) trimming procedure, which removes the biases in outlier attribution due to different numbers of observations across conditions. This technique was necessary as there were a greater number of loot boxes with more common items than loot boxes with rare items in this experiment. Moreover, for non-excluded participants, if there were any trials where participants pressed the modified mouse to initiate the subjective surveys prior to the reveal of the coins, these trials were excluded from all further analyses, since participants would not have viewed any information relevant to the value and contents of the loot box.

Reactions to the items within the loot box were measured using SCR amplitudes (Dawson et al. 2007). Recall that the coin reveal occurred at the 2 s mark of the video. We defined a 6 s window following the coin reveal (from the 3 s to 9 s marks in the video) in which changes in eccrine gland activity are attributable to viewing the coins or the appearance of the items themselves at the 5 s mark. We then subtracted the value at the beginning of this window from the maximum SCR within this window. The resulting value was the SCR amplitude related to the items in the loot box. Following Dawson et al. (2007), in calculating SCR amplitudes we considered as valid responses only those responses that were accompanied by an increase in skin conductance levels greater than .05 microsiemens.
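A minimal sketch of this amplitude scoring in Python follows. The 100 Hz sampling rate is an assumption (it is not reported above), and how sub-criterion responses were handled is not specified; here they are simply scored as zero.

```python
import numpy as np

FS = 100  # sampling rate in Hz (an assumption; not reported in the text)

def scr_amplitude(scl, start_s=3.0, end_s=9.0, min_rise=0.05):
    """Score one trial's SCR amplitude from its skin conductance trace.

    scl: 1-D array of skin conductance levels (microsiemens), with t = 0
    at video onset. The amplitude is the maximum SCL in the 3-9 s window
    minus the SCL at the start of the window; rises smaller than .05
    microsiemens are treated as non-responses (scored 0 in this sketch).
    """
    window = scl[int(start_s * FS):int(end_s * FS)]
    amplitude = window.max() - window[0]
    return amplitude if amplitude > min_rise else 0.0
```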
Subjective responses were analyzed in an identical manner to Experiment 1. SCRs and PRPs were also analyzed using a similar strategy to the subjective responses. That is, for all subjective, physiological and behavioural measures, loot outcomes that fell into their respective tiers (rare, epic and legendary) were trimmed for outliers, and then an average score was calculated for boxes within each tier. Repeated measures ANOVAs with Fisher's LSD post hoc comparisons were conducted for each measure. Greenhouse-Geisser corrections were again used in cases of sphericity violations.

Increases in skin conductance levels due to the anticipation of loot box openings were based on a 4 s window comprised of a baseline epoch (a 2 s window that occurred before the presentation of a loot box) and an anticipatory epoch (a 2 s window which depicted the loot box shaking prior to the reveal of its contents). Changes in SCLs for the baseline and the anticipation periods were measured using SCL slopes and were directly compared using dependent t tests (Fig. 4).
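The slope comparison can be sketched as follows (again assuming a 100 Hz sampling rate; baseline_slopes and anticipation_slopes are hypothetical per-participant mean slopes, averaged over trials):

```python
import numpy as np
from scipy import stats

FS = 100  # sampling rate in Hz (an assumption)

def scl_slope(epoch):
    """Least-squares slope of a 2 s SCL epoch, in microsiemens per second."""
    t = np.arange(len(epoch)) / FS
    return np.polyfit(t, epoch, 1)[0]

def compare_anticipation(baseline_slopes, anticipation_slopes):
    """Paired t test on per-participant mean slopes (equal-length arrays)."""
    return stats.ttest_rel(anticipation_slopes, baseline_slopes)
```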
Objective and Subjective Loot Box Values-Validity Check

As in Experiment 1, the Pearson correlation between the 14 objective loot values and the 14 final averaged subjective values for these boxes was strong and positive (r(13) = .95, p ≤ .001). Results for the scales assessing subjective value illustrated the expected pattern of increasing value from rare to legendary tiers. For subjective ratings of value (in credits), a repeated measures ANOVA with a Greenhouse-Geisser correction illustrated a significant main effect of condition, F(1.243, 48.471) = 52.00, p ≤ .001, ηp² = .57. Fisher's LSD comparisons revealed that participants rated themselves as willing to spend the greatest amount of in-game credits for legendary tier loots (M = 1059.47, SD = 857.30) and the least amount for rare tier loots (M = 135.35, SD = 292.81). Ratings of value for the legendary tier were significantly greater than for the epic tier (M = 377.83, SD = 542.83) (p ≤ .001) and the rare tier (p ≤ .001). Additionally, ratings of value were significantly higher for the epic tier in comparison to the rare tier (p ≤ .001). See Fig. 5 for mean subjective value ratings.

Subjective Measures

Average ratings of subjective measures are shown in Fig. 5. A repeated measures ANOVA with a Greenhouse-Geisser correction was conducted for subjective ratings of arousal, showing a significant main effect of condition, F(1.371, 53.486) = 141.21, p ≤ .001, ηp² = .78. As expected, Fisher's LSD comparisons revealed that subjective ratings of arousal were greatest for legendary tier loots (M = 3.13, SD = .95) and lowest for rare tier loots (M = 1.39, SD = .49). Ratings of arousal for legendary tier loots were significantly greater than for epic tier loots (M = 2.05, SD = .74) (p ≤ .001) and rare tier loots (p ≤ .001). Further, arousal ratings for epic tier loots were significantly greater than for rare tier loots (p ≤ .001).

A repeated measures ANOVA with a Greenhouse-Geisser correction conducted on subjective measures of valence illustrated the same pattern of results. There was a main effect of condition, F(1.506, 58.722) = 101.68, p ≤ .001, ηp² = .72. Fisher's LSD comparisons also revealed that positive valence was greatest for legendary tier loots (M = 3.84, SD = .58) and lowest for rare tier loots (M = 2.24, SD = .67). Legendary tier loots were associated with significantly greater ratings of positive valence than epic tier loots (M = 2.92, SD = .61) (p ≤ .001) and rare tier loots (p ≤ .001). Positive valence ratings for epic tier loots were also significantly greater than for rare tier loots (p ≤ .001). Subjective ratings of urge followed the same pattern (p ≤ .001). Moreover, urge ratings for epic tier loots were significantly greater than urge ratings for rare tier loots (p ≤ .001).

As predicted, results from ratings of disappointment revealed a monotonic decrease in average scores from rare to legendary tiers. A repeated measures ANOVA with a Greenhouse-Geisser correction revealed a significant main effect of condition, F(1.610, 62.802) = 149.77, p ≤ .001, ηp² = .79. Fisher's LSD comparisons showed the most disappointment for rare tier loots (M = 76.07, SD = 22.69) and the least disappointment for the legendary tier (M = 17.55, SD = 16.27). The legendary tier had significantly lower ratings of disappointment than the epic tier (M = 53.18, SD = 23.43) (p ≤ .001) and the rare tier (p ≤ .001). Ratings of disappointment were also significantly lower for epic tier loots than rare tier loots (p ≤ .001).

Physiological and Behavioural Reactions to Loot Boxes

A repeated measures ANOVA with a Greenhouse-Geisser correction revealed a significant main effect of condition for SCR amplitudes, F(1.232, 46.816) = 11.39, p ≤ .001, ηp² = .23. Further, Fisher's LSD comparisons revealed no significant difference between rare tier loots (M = .58, SD = .28) and epic tier loots (M = .54, SD = .29) (p = .08). However, SCR amplitudes in response to legendary tier loots (M = .77, SD = .49) were significantly greater than those for rare tier loots (p ≤ .05) and epic tier loots (p ≤ .05). See Fig. 6 for a representative example of the sizeable SCR amplitude for legendary loots compared to the epic and rare loots.

[Figure 6 caption: Raw SCR values over the 6 s following the 'coin reveal' of a loot box opening for a representative participant (determined by the median response average for legendary loots). The raw values depict the median participant's average amplitudes after viewing a legendary, epic, and rare tier loot box, respectively. For all trials, participants' SCLs were forced to zero via subtraction at the beginning of the SCR window; thus the figure shows changes in SCL over the 6 s window.]

Similarly, there was a main effect of condition for the force with which participants pressed the modified mouse, F(1.433, 55.903) = 4.53, p ≤ .05, ηp² = .10 (with a Greenhouse-Geisser correction). Fisher's LSD comparisons found no significant difference in force between rare tier loots (M = .13, SD = .03) and epic tier loots (M = .14, SD = .03) (p = .43). Similar to the SCR amplitudes, significantly greater force was found for legendary tier loots (M = .14, SD = .04) versus rare tier loots (p ≤ .05) and epic tier loots (p ≤ .05).

Lastly, a repeated measures ANOVA with a Greenhouse-Geisser correction demonstrated a significant main effect of tier for PRPs, F(1.479, 57.667) = 21.16, p ≤ .001, ηp² = .35. Fisher's LSD post hoc tests revealed that players had smaller PRPs for the rare tier (M = 4.91, SD = 1.09) in comparison to both the epic (M = 5.40, SD = 1.05, p ≤ .001) and legendary tiers (M = 5.70, SD = 1.11, p ≤ .001). The epic and legendary tiers also significantly differed in PRP length (p ≤ .001). See Fig. 7 for graphical illustrations of these physiological and behavioural measures.
See Fig. 7 for graphical illustrations of these physiological and behavioural measures.

Fig. 7 Results from physiological and behavioural measures. a Average participant SCR amplitudes for Rare, Epic and Legendary tier loot boxes. b Average force exerted on the modified mouse to initiate the following loot box opening for Rare, Epic and Legendary tier loot boxes. c Average length of PRPs for Rare, Epic and Legendary tier loot boxes. Error bars are ± 1 SE. * p ≤ .05; ** p ≤ .001

Anticipatory Arousal

A paired samples t test revealed significantly greater SCL slopes in the anticipatory period (M = .0002, SD = .0003) in comparison to the baseline period (M = −.00002, SD = .0001), t(39) = −3.88, p ≤ .001 (see Fig. 8). The lower panels of Fig. 8 graphically depict the continuous changes in SCLs of two participants who fell at the median for SCL increases during the anticipation of the loot box reveal. This figure clearly shows a ramping up of physiological arousal in anticipation of the loot box event (see Fig. 8).

General Discussion

The current research aimed to characterize how loot box users respond to loot box rewards of varying value. We reasoned that if responses were similar to those reported for slots players reacting to varying sizes of monetary wins at the hedonic and motivational level, then it would indicate a need for loot boxes to be similarly regulated to prevent or reduce problematic usage. Participants were exposed to a series of video stimuli depicting loot box openings from the game Overwatch, with loots ranging in value based on the game's reward tier hierarchy. For each opening, we gauged their subjective ratings of value, as well as subjective experiences of arousal, valence, and urge in Experiment 1. Experiment 2 employed the same subjective measures, while also measuring physiological and behavioural experiences of arousal and reward valence.

In Experiment 1, we provide initial evidence that players systematically discriminate valuable from less valuable loots based on the rarity of the items, which corresponds with the game's item value hierarchy (e.g., rare, epic and legendary tiers). Experiment 1 also showed that loots of greater rarity are subjectively more arousing, positively valenced and inducing of urge to open more boxes. Experiment 2 successfully replicated the results for these subjective measures, in addition to supplying converging evidence of the arousing, hedonically rewarding and motivating nature of these non-monetary rewards with PRPs, SCRs and force measures. Our data provide strong evidence for the allure of these non-monetary reward items, and the motivational impact such rewards have on players.

In contrasting players' subjective value with the objective value of the loots, we showed that participants found loots containing items of greater rarity to be more valuable both at the group level (e.g., the means) and the individual level (e.g., correlations). Specifically, at the group level, the magnitude of the ratings of subjective value was titrated to the magnitude of the rarity across the three designated reward tiers. Similarly, correlations revealed that players' ratings of subjective loot box value corresponded with the objective loot box values. This finding is an important confirmation of our assumption that loot boxes containing items of greater rarity would be more valuable to participants than more common items.
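To make the SCL slope measure concrete, the sketch below shows one conventional way to compute it: an ordinary least-squares slope fitted over the samples of a window, with the anticipatory window then compared against the baseline window. This is an illustrative reconstruction only, not the authors' analysis code; the sample values, window contents, and function names are hypothetical.

#include <cstddef>
#include <iostream>
#include <vector>

// Least-squares slope of equally spaced samples (units per sample).
// Multiply by the sampling rate to obtain units per second.
double slope(const std::vector<double>& y) {
    const std::size_t n = y.size();
    double sx = 0, sy = 0, sxy = 0, sxx = 0;
    for (std::size_t i = 0; i < n; ++i) {
        sx += i;
        sy += y[i];
        sxy += i * y[i];
        sxx += static_cast<double>(i) * i;
    }
    return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}

int main() {
    // Hypothetical SCL samples for a baseline and an anticipatory window.
    std::vector<double> baseline = {5.00, 5.00, 4.99, 5.00, 5.00};
    std::vector<double> anticipation = {5.00, 5.01, 5.02, 5.04, 5.05};
    std::cout << "baseline slope: " << slope(baseline) << "\n"
              << "anticipatory slope: " << slope(anticipation) << "\n";
}

Comparing the fitted slope of the anticipatory window against that of the baseline window mirrors the paired contrast reported above.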
Similar to the indisputably rewarding feeling of winning money, we show that obtaining in-game items within a loot box appears to activate the same reward responses as monetary wins in a slot machine (Dixon et al. 2013, 2015, 2018a). Our PRP results mimicked participants' subjective "value" ratings for loots over the different reward tiers, such that there was a monotonic increase in pause length with increasing reward tier value. Specifically, loots in the legendary tier elicited longer pauses than the more common, lowest valued tier of loot boxes. Post-reinforcement pauses are seen as a direct measure of the hedonic pleasure associated with rewarding stimuli, and such PRP results mirror findings of greater PRPs following bigger wins in slots play (Dixon et al. 2013). Coupled with the subjective ratings of value, such findings are indicative of players' awareness of, and sensitivity to, the value of different loots, despite loot boxes not conferring any real-world monetary worth.

Our findings also suggest that items of the greatest rarity were the most subjectively and physiologically arousing, hedonically pleasing, and, importantly, the most inducing of urge to open another box. Specifically, subjective ratings of arousal, valence and urge all showed the same monotonic increase with increasing reward tier. Convergently, for disappointment, players showed a monotonic decrease with increasing reward tier (i.e., the most common loots were the most disappointing, the least common loots the least disappointing). Taken together, the fact that these subjective measures were yoked to the magnitude of the objective reward value suggests that the degree of positive excitement elicited by these events is related to the rarity of the loots in the game. Similarly, the legendary reward tier loots were associated with greater skin conductance responses and force (a complementary measure of positive arousal) compared to epic or rare loots. Thus, our subjective, physiological and behavioural indices of arousal converge to support the notion that the rarest loots (those falling in the legendary category) are the most rewarding, exciting and motivating events for players.

Unlike our subjective measures and PRP results, there was no differentiation in force magnitude nor skin conductance amplitude between loots corresponding to the epic and rare tiers. As force is typically quite sensitive to reward magnitudes in slots (Dixon et al. 2015, 2018a), this lack of differentiation may be due to the smaller disparity between the value ranges of the rare and epic tiers in comparison to the much larger disparity in value associated with legendary tier boxes. As can be seen in Table 1, the upper bound of the rare tier (225 credits) and the lower bound of the epic tier (325 credits) differ by only 100 credits, whereas the upper bound of the epic tier (500 credits) and the lower bound of the legendary tier (1075 credits) differ by 575 credits. Thus, it may be that the 'rare' and 'epic' tiers, as defined by the game, are too similar to be differentiated by SCR and force measures. Importantly, both measures are convergently sensitive to the presentation of loot boxes containing the most uncommon items.

Even before seeing the items in the loot box, participants showed a marked increase in arousal in anticipation of the loot box opening. Previous research has illustrated increased skin conductance and activation of arousal-related brain regions during reward anticipation (Critchley et al. 2001).
The finding that loot boxes elicit strong anticipatory arousal suggests that participants treat loot boxes as having the potential to confer reward. Such anticipatory arousal patterns are akin to player experience in slots play, in which there is a build-up in anticipation as the reels spin and sequentially settle (Dixon et al. 2011). While there may be a build-up of anticipation in both slots and loot box openings, there are some subtle differences in the reveal of outcomes that may make the subjective feeling of anticipation distinct between the two games. For instance, slot machine reel symbols can be used as cues to index the proximity of a desired outcome as each reel sequentially settles. A classic example includes near-miss outcomes in slots; such outcomes are driven by cues that seemingly inform the player how close they are to their desired goal (e.g., a large win or a jackpot). In the case of loot boxes, game designers go to great lengths to provide general cues designed to increase arousal. In Overwatch, prior to displaying any contents, the loot box is shown to tremble and shake, reminiscent of, and perhaps intended to mirror, a player trembling with anticipation. To our knowledge there is no comparable feature in standard slot machine games.

Physiological arousal has been implicated in the maintenance of gambling behaviours across multiple modes of gambling, and our results for physiological arousal and urge dovetail with these previous findings from the gambling literature (Clark et al. 2012; Baudinet and Blaszczynski 2013; Stange et al. 2017a, b). The gradual ramping up of arousal and the fact that participants experienced additional increases in physiological arousal following the coin reveal (especially for higher value loots) corroborates players' urge ratings and confirms the strong motivational force of these uncommon rewards. The urge to open more loot boxes following viewing of higher valued loot boxes may have implications for players' behaviours regarding loot box use. For instance, there could be concern that increases in urge to open another loot box after receiving a valuable loot box during game play may invigorate players to access more loot boxes, either through continued gameplay (e.g., requiring an increased investment of time) or through purchasing (e.g., requiring increased monetary investment).

In summary, our research lends credence to previous commentaries and research suggesting that loot boxes are psychologically akin to gambling (Drummond and Sauer 2018; Brooks and Clark 2019). The current research is among the first to provide empirical evidence that the reveal of highly desirable items increases both arousal and, more importantly, the urge to open more loot boxes, which speaks to the potential for problematic play in loot box games. Demonstrating such reward reactivity and urge for the rarest loots using loot box related cues is important for understanding how these gambling-like gaming features may result in problematic use, as they elicit responses that mirror those elicited by known addictive gambling forms. This is especially concerning when coupled with the structural similarities between loot boxes and slot machines, such as the use of a variable ratio reinforcement schedule. In variable ratio schedules, rewards are unpredictable, and high-valued (good) loots occur much less frequently than lower-valued (bad) loots. This reward schedule framework has been associated with potentially maladaptive behaviours in gambling, and thus can potentially extend to loot boxes (Haw 2008).
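To illustrate the variable ratio structure described above, the following sketch simulates loot tier draws in which high-valued outcomes are rare. The tier probabilities are invented for illustration only and are not the game's actual drop rates.

#include <iostream>
#include <map>
#include <random>
#include <string>

int main() {
    // Hypothetical drop rates illustrating a variable ratio schedule:
    // high-valued loots occur much less often than low-valued ones.
    std::mt19937 rng(std::random_device{}());
    std::discrete_distribution<int> tier({80.0, 15.0, 5.0});  // rare, epic, legendary
    const std::string names[] = {"rare", "epic", "legendary"};
    std::map<std::string, int> counts;
    for (int i = 0; i < 10000; ++i) counts[names[tier(rng)]]++;
    for (const auto& [name, n] : counts)
        std::cout << name << ": " << n << " of 10000 openings\n";
}

Under such a schedule the number of openings between consecutive legendary outcomes is unpredictable, which is the property the gambling literature links to persistent play.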
In most jurisdictions, loot boxes are very loosely regulated compared to legalized gambling activities. For one, gambling venues and websites are obligated to include help resources for gamblers who feel that their gambling behaviour is out of control. As the harms related to loot box use become more salient, one direction for regulation could involve requiring games to feature similar safeguards. Another key regulatory discrepancy between games with loot boxes and gambling involves strictly enforced age regulations. Our results support the need for such age regulations for users under the legal gambling age.

Limitations

This experiment is not without limitations. Firstly, we did not differentiate reactivity to these rewards among players who may potentially be at risk of excessive use of loot boxes. However, research has yet to solidify what constitutes a problematic loot box user, and thus we are limited by the current research landscape. Secondly, in order to maximize experimental control, we used video stimuli rather than loot boxes that were obtained by the player. Future research should aim to replicate our findings in a more naturalistic setting using real loot boxes either won through game play or purchased by the player. Given the added component of agency and ownership of rewards that are either earned by one's own gameplay or purchased with one's own money, one might expect an amplification of the effects on the player shown in this experiment, which lacks such agency. Finally, since the presentation and valuation system of loot boxes is heterogeneous across games, future research should aim to reproduce our results using loot box stimuli from other games.

Conclusion

In conclusion, our findings provide initial insight into the impact of loot box opening on player reward reactivity and motivation. Despite conferring no real-world value, loot boxes, especially those of greater rarity, are treated as rewarding and urge-inducing events. While the relationship between loot boxes, problem video gaming and problem gambling is still in need of further investigation, the consequences of such potential associations have profound implications for the future regulation of these and similar features in games.
Safety Wheel Chair for Paralysis Patient

This paper aims to develop a wheelchair for paralysis patients. The objectives of the paper include fall detection, forward bending detection, and an SMS service. All these technologies have been covered in terms of their requirements, costs, system design, and system analysis. Fall detection is done using an accelerometer sensor. It detects a fall from the wheelchair of a patient with spinal problems. The sensor sends a signal to the controller and activates the buzzer; an SMS is also sent to the caretaker. The second technology used is forward bending detection. It is intended for those who bend their backs, in order to remind them to sit properly. An ultrasonic sensor is used for this detection; the alarm is activated and an SMS is also sent. The third technology is the SMS service. If the patient is able to move one hand, he can use a keypad, and 10 different buttons can be provided for different services. Depending on the key pressed, the corresponding SMS reaches the caretaker and the patient is served accordingly. The proposed work was started after a thorough literature review. An Arduino controller is used to receive the signals from the sensors, provide the information to the caretaker, and switch on the buzzer. The proposed work is simulated using Proteus and then implemented in hardware.

Introduction

The proposed work explained in this paper helps serve people with disabilities by developing several functions using these technologies. This work is mainly for the benefit of disabled people who use wheelchairs. Three technologies are explained in this paper, and several research papers about them are discussed in the subsequent sections. The first technology in the proposed work is FALL DETECTION, which operates when the wheelchair falls down with the person in it. The alarm system in the chair then activates to attract nearby people so that they can assist immediately. Also, the GSM module in the chair sends an alert SMS to the person responsible for the disabled person. Using FORWARD BEND DETECTION, it is possible to find out whether the person in the wheelchair is bending his back forward. In such a case, an alarm is activated and an SMS is sent to the corresponding person. The bending technology is for patients who do not have the strength to straighten their backs again. The SMS service technology is useful when the patient presses a key on a keypad kept near his hand. The keypad is of 4*3 size, and messages are assigned to the keys from 0 to 6. The service can also be extended to other functions (Diksha Goyal et al. 2013). The patient can thus send an SMS to his caretaker, with each button providing a unique message. This service is useful for patients with one-sided paralysis. A GSM module is attached to the prototype. Fixed text messages like "I'm hungry", "I'm thirsty", "I need help" and so on can be used for the SMS service.

The main objectives of the project are listed below.
1. Use the accelerometer sensor to verify the angle of the chair; if the chair is not in a straight position, activate the alarm and send an SMS.
2. Use the ultrasonic sensor to find the position of the patient; if this sensor detects nothing, activate the alarm and send an SMS.
3. Send an SMS when the person presses keys.

Methodology selection is important for achieving the objectives within the set timeline. The waterfall model is used for the proposed work. It has a linear sequential flow in which each process flows downward like a waterfall.
The current phase follows the previous phase. The major advantage of this model is that the timeline is maintained and all the objectives are achieved in a sequential manner. Returning to a previous phase is not allowed in this model, and changing requirements in a future phase is impossible. The reasons for selecting the waterfall model are listed below (Chhaya G. Patil, et al. 2014). The development is easy to structure, and the work is easy to plan. The process is well defined, and the objectives are achieved in a timely manner. Returning to a previous stage is costly, and this project has a low budget. Sufficient time is available for the work. The stages of the waterfall methodology are described below (G. Găşpăresc et al. 2014). In the requirements stage, the materials and requirements needed for the project are collected. In the analysis stage, all the collected requirements are verified against a thorough literature study. The next stage is the design stage, in which the circuit design, system design, and schematic diagram are finalized and analyzed. The above three stages are considered planning stages, and the real implementation is in the fourth stage. The circuit is connected once the simulation is complete. The simulation of the work is done in Tinkercad software. The microcontroller is used after the work is verified in simulation. An Arduino is used, which is considered the heart of the work. The program is written and then downloaded to the Arduino board, which follows the instructions of the written program. The final product is then available and verified against all the objectives.

Literature review

Several literature papers were studied, and as per the discussion in (DW Hansen 2012), a wheelchair can be designed for people who are disabled, and it should have many facilities. The paper also explains the need for an SMS service, keypad usage, and body gestures. Another paper explains the automation of the wheelchair for different gestures, where head and hand gestures are taken into consideration. A touchpad or keypad is used for hand gestures, whereas an accelerometer is used for head gestures (Prof. Vishal V. Pande, 2014). Switches are also used for hand gestures. That work is proposed with two modes of operation to cover all the needs of the user; the patient or the caretaker can switch from one mode to another (Zhmud V et al. 2015). A microcontroller is used to read the hand and head gesture variations from the switch and the accelerometer. Motors connected to the wheels respond to the control provided by the microcontroller, as per the gesture variations (Rory A. Cooper, et al. 2000). In another paper, an eyeball-motion-controlled wheelchair using IR sensors is designed to control wheelchair movement through the variation of the iris movement of the patient. This is useful for old people and disabled patients who cannot talk. In this work, IR sensors are attached to the eyeglass frame. The IR sensors follow the movement of the iris, and the signals from the sensors are transferred to the microcontroller. A microcontroller (PIC18F452) is used to control the motors in the wheelchair. Since the IR sensor detects only white objects, it is necessary to form a unique sequence of bits for each movement direction of the iris. In the proposed work, all the IR sensors are fitted into the left lens of the glasses. Through this technology, the wheelchair gets commands from a single eye.
The IR sensor continuously transmits a beam of IR rays. When the rays hit a white object, they are reflected and captured by the receiver; when they hit a black object, they are absorbed. The white object here is the sclera, and the iris is the black object. The patient is trained to move the iris according to the required movement of the wheelchair. The movement is restricted to the right and left directions, controlled by the microcontroller (S. Shaheen, A. Umamakeswari, 2013). Paralysis patients depend on other people and therefore need a wheelchair so they can move easily without any help (M. Reitbauer, 2008). The wheelchair should be designed in such a way that it provides all possible facilities. Ultrasound sensors can also be used on the wheelchair to avoid obstacles in the path. An 8051 microcontroller can be used to control the ultrasound sensor, motor drivers, and DC motors. The GH-311 ultrasound sensor is used to measure the distance to any object or obstacle (Chhaya G. Patil, et al. 2014). The sensor sends a sound wave, which is reflected from the obstacle and received again by the sensor. The sensor sends the data to the microcontroller, which sends signals, in terms of voltage, to the motors connected to the wheelchair. The paper explains that the detection range of this sensor is 3 cm to 3 m. This information from the literature is useful for connecting an ultrasonic sensor to the wheelchair (Bhagat Amar et al. 2014). The following section discusses the circuit design by providing the block diagram, the schematic diagram, and an analysis of the circuit.

System block diagram

The system block diagram and the flow of work are helpful for finalizing the proposed work. The system block diagram is shown in Figure 1.

Figure 1. System block diagram

The proposed work consists of three technologies/services: fall detection, forward bend detection, and the SMS service. According to the literature, an ultrasonic sensor, a keypad, and an accelerometer sensor are used to achieve the above services in a single wheelchair. The flow of the work is explained below.
1. If the accelerometer angle is not fixed, then the active service is fall detection. This represents the fall of the patient from the wheelchair; the alarm is activated and an SMS is sent to the caretaker.
2. In forward bend detection, if the push button reads 1, it means that the user is bending forward; the ultrasonic sensor value is also verified in this case. The alarm is activated, and an SMS is sent.
3. The third service is for patients who are paralyzed on one side only. They can use the keypad to inform the caretaker about their needs by SMS. A GSM modem is connected to the module.

Fall detection technology

An ADXL335 accelerometer is used as the input. Changes in angle and rotation are sensed in all three axes. This sensor is connected to the chair. Any variation in the accelerometer is considered a falling case, and the changes are reported to the microcontroller. The ADXL335 sends the value of the fall axis as an analog voltage between 0 and 5 volts to the analog inputs of the Arduino. Figure 2 shows the implementation of the fall detection service.
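To make the fall-detection flow concrete, the following is a minimal illustrative Arduino sketch of the service described above. It is only a sketch, not the authors' actual code: the pin assignments, the tilt threshold, the phone number, and the use of a SoftwareSerial-connected GSM modem driven by standard AT commands are all assumptions made for illustration.

// Illustrative fall-detection sketch: ADXL335 on analog pins, buzzer,
// and a GSM modem sending an alert SMS via standard AT commands.
#include <SoftwareSerial.h>

SoftwareSerial gsm(7, 8);                      // assumed RX/TX pins for the GSM module
const int X_PIN = A0, Y_PIN = A1, Z_PIN = A2;  // ADXL335 analog outputs
const int BUZZER_PIN = 9;
const float TILT_THRESHOLD = 45.0;             // assumed fall angle in degrees

float readG(int pin) {
  // Convert the analog reading to acceleration in g, assuming a 1.65 V
  // zero-g bias and 300 mV/g sensitivity (typical datasheet values;
  // real hardware would need calibration).
  float v = analogRead(pin) * 5.0 / 1023.0;
  return (v - 1.65) / 0.30;
}

void setup() {
  pinMode(BUZZER_PIN, OUTPUT);
  gsm.begin(9600);
}

void loop() {
  float ax = readG(X_PIN), ay = readG(Y_PIN), az = readG(Z_PIN);
  // Tilt of the chair relative to the vertical (z) axis.
  float tilt = atan2(sqrt(ax * ax + ay * ay), az) * 180.0 / PI;
  if (tilt > TILT_THRESHOLD) {
    digitalWrite(BUZZER_PIN, HIGH);            // activate the alarm
    gsm.println("AT+CMGF=1");                  // GSM text mode
    delay(100);
    gsm.println("AT+CMGS=\"+10000000000\"");   // assumed caretaker number
    delay(100);
    gsm.print("Patient has fallen from the wheelchair!");
    gsm.write(26);                             // Ctrl+Z terminates the SMS
    delay(5000);                               // avoid repeated alerts
  } else {
    digitalWrite(BUZZER_PIN, LOW);
  }
  delay(200);
}

The same GSM routine can be reused by the other two services; only the trigger condition and the message text change.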
SMS service

The keypad is used as an input. Several messages are stored in the microcontroller, mapped to each key press. 4*4 or 4*3 keypads can be used, so 16 or 12 messages can be saved. In this proposed work, the keys from 0 to 6 are used. The keypad is connected to 8 digital Arduino port pins. The pressed button is identified from its row and column values, and thus the corresponding pin is decided. Figure 3 shows the simulation circuit of the SMS service.

Forward bending detection

An ultrasonic sensor is used to detect the position of the head in the chair. Pushbuttons are used to determine whether the person is sitting in the chair or has fallen down. Equation 1 is used to calculate the distance to an obstacle with the ultrasonic sensor; since the emitted sound wave travels to the obstacle and back, the distance is

distance = (echo time × speed of sound) / 2. (1)

Figure 4 represents the bending detection circuit.

System Simulation and Discussion

The complete simulation of the project is done using the Proteus software. Once the connection of the circuit is finished, the code, which is in hex format, is downloaded from the IDE environment to the board. Figure 5 shows the complete system simulation. The ultrasonic sensor is fixed behind the chair of the patient to detect his back position. There is a pin called simPin in the sensor which shows the distance from the object. A potential divider circuit is again used to show the performance of the ultrasonic sensor.

System Testing and Implementation

All three services are implemented and tested in the wheelchair. The final prototype of the circuit includes a normal chair, a plastic box to cover the stripboard circuit, and a piece of wood with a large pushbutton, as shown in Figure 7.

Conclusion

This proposed work aims to help people with physical disabilities, such as paralysis, who use a wheelchair in their daily lives. The work provides three services and can be used universally by disabled and old people. The cost of the prototype is reasonable, and the caretaker can get information about the patient immediately. Literature reviews were done to finalize the objectives and the selection of components, and a suitable methodology was selected to achieve the objectives. The datasheets of the sensors were used to fix the values for the Arduino control. The work can be further extended to improve the services, for example by increasing the number of SMS messages and by connecting IR sensors in an eye frame so that patients can move freely on their own.
Local polynomial convexity of the unfolded Whitney umbrella in $\mathbb C^2$

The paper considers a class of Lagrangian surfaces in $\mathbb C^2$ with isolated singularities of the unfolded Whitney umbrella type. We prove that generically such a surface is locally polynomially convex near a singular point of this kind.

Introduction

Polynomial convexity of real submanifolds of $\mathbb C^n$ is a well-studied subject in complex analysis due to its deep relation to approximation problems, pluripotential theory and Banach algebras (see, for instance, [2, 26] for a detailed discussion). M. Gromov [15] found remarkable connections between the polynomial (or the holomorphic disc) convexity of real manifolds and the global rigidity of symplectic structures. In the present work we prove that a generic Lagrangian surface in $\mathbb C^2$ is polynomially convex near an isolated singularity which is topologically an unfolded Whitney umbrella. This study is inspired by the work of A. Givental [14], where he proved that a wide class of compact Lagrangian surfaces in a symplectic 4-manifold can be realized as compact surfaces in the complex affine space $\mathbb C^2$ equipped with the standard symplectic form. Such a surface is Lagrangian outside a finite number of complex points (i.e., points singular for the CR structure, see below) and unfolded Whitney umbrellas.

Denote by $z = x + iy$ and $w = u + iv$ the standard coordinates in $\mathbb C^2$. Let $\omega = dx \wedge dy + du \wedge dv$ be the standard symplectic form on $\mathbb C^2$. A smooth map $\varphi : (\mathbb C^2, \omega) \to (\mathbb C^2, \omega)$ is called symplectic if $\varphi^*\omega = \omega$. If a symplectic map is a (local) diffeomorphism, we call it a (local) symplectomorphism. A smooth map $\iota : S \to (\mathbb C^2, \omega)$ from a smooth real surface $S$ is called isotropic if $\iota^*\omega = 0$. A. Givental [14] showed that near a generic point $p \in S$, which is an isolated singular point of $\iota$ of rank one, the map
$$\pi : \mathbb R^2_{(t,s)} \to \mathbb R^4_{(x,u,y,v)}, \qquad \pi(t, s) = \Big(ts,\ \frac{2t^3}{3},\ t^2,\ s\Big) \qquad (1)$$
is a local normal form for $\iota$. In particular, this means that there exists a local symplectomorphism near $\iota(p)$ sending $\iota(S)$ onto a neighbourhood of the origin in $\Sigma := \pi(\mathbb R^2)$. The set $\Sigma$, as well as $\iota(S)$ near $\iota(p)$, is called the unfolded (or open) Whitney umbrella. Our main result is the following.

Theorem 1. Suppose $\varphi : \mathbb C^2 \to \mathbb C^2$ is either a generic real analytic symplectomorphism near the origin, or the identity map. Then there exists a neighbourhood of the point $\varphi(0)$ in the surface $\varphi(\Sigma)$ with compact polynomially convex closure.

The case where $\varphi$ is the identity map is considered separately since it is not generic. This implies that the Whitney umbrella $\Sigma$ is polynomially convex near the origin. The above theorem also holds under weaker assumptions, namely, if $\varphi$ is a generic local real analytic diffeomorphism and $D\varphi(0)$, the differential of $\varphi$ at zero, is symplectic, or if $\varphi$ is a $C^\infty$-smooth symplectomorphism with the jet at the origin satisfying some additional assumptions. See Section 5 for details.

Denote by $B(p, r)$ the open Euclidean ball of $\mathbb C^2$ centred at $p$ and of radius $r > 0$. As an application of Theorem 1 we obtain the following result.

Corollary 1. Let $\varphi$ be as in Theorem 1 and $p = \varphi(0)$. Then there exists $\varepsilon > 0$ such that every continuous function on $\varphi(\Sigma) \cap \overline{B(p, \varepsilon)}$ can be uniformly approximated by holomorphic polynomials.

For a real $n$-dimensional submanifold $E$ (closed or with boundary) in $\mathbb C^n$, the polynomial convexity of $E$ is connected with its global topological properties. Generically $E$ is totally real, i.e., at a generic point the complex span of its tangent space coincides with $\mathbb C^n$. It is well known [2, 26] that an $n$-dimensional totally real submanifold of $\mathbb C^n$ is locally polynomially convex, i.e., for every point $p \in E$ there exists $r > 0$ such that the intersection $E \cap B(p, r)$ is polynomially convex. This is no longer true in the global situation.
Indeed, M. Gromov [15] proved that a compact Lagrangian (with respect to the standard symplectic structure on $\mathbb C^n$) submanifold of $\mathbb C^n$ contains the boundary of a non-constant holomorphic disc. Extending his ideas, H. Alexander [1] proved that for every totally real submanifold $E$ of $\mathbb C^n$ of real dimension $n$ there exists a non-constant bounded holomorphic disc with the boundary attached to $E$ except possibly at a single point of the unit circle. However, the methods used to prove these results cannot be immediately generalized to the case when $E$ has singularities. An interaction between rational convexity and the symplectic structure was studied by J. Duval [10]. Recently J. Duval and D. Gayet [12] studied the rational convexity of certain immersed Lagrangian manifolds.

The local polynomial convexity can fail near a point where $E$ is not totally real. Generically these points are isolated in $E$. In complex dimension $n = 2$, the tangent space of $E$ at such a point is a complex line, so such points are called complex. There are three types of complex points: elliptic, hyperbolic and parabolic (see, for instance, [2, 26]), and the local polynomial convexity depends on the type. E. Bishop [5] and C. Kenig and S. Webster [19] proved that a neighbourhood of an elliptic point in $E$ has a non-trivial hull. On the other hand, F. Forstnerič and E. L. Stout [13] proved that $E$ is locally polynomially convex near a hyperbolic point. The parabolic case is intermediate, and in general both possibilities occur. This case was studied by B. Jöricke [16, 17]. Normal forms of real analytic totally real or Lagrangian surfaces near complex points are studied by J. Moser and S. Webster [20] and S. Webster [27]. The present work shows that the local convexity properties near a Whitney umbrella are similar to the case of a hyperbolic point.

We thank S. Nemirovskii and V. Shevchishin for bringing our attention to this problem and for helpful conversations.

Geometry of Whitney umbrellas

The map $\pi : \mathbb R^2_{(t,s)} \to \mathbb R^4_{(x,u,y,v)}$ given by (1) is a smooth homeomorphism onto its image, nondegenerate except at the origin, where the rank of $\pi$ equals one. It satisfies $\pi^*\omega \equiv 0$, and so $\Sigma$ is a Lagrangian submanifold of $(\mathbb C^2, \omega)$ with an isolated singular point at the origin.

The crucial role in our approach is played by an auxiliary real hypersurface $M$ defined by
$$M = \{\rho = 0\}, \qquad \rho(z, w) = x^2 + \tfrac94 u^2 - yv^2 - y^3. \qquad (2)$$
Clearly, $\Sigma$ is contained in $M$. Note that the hypersurface $M$ is smooth away from the origin, and strictly pseudoconvex in $B(0, \varepsilon) \setminus \{0\}$ for $\varepsilon$ sufficiently small.

Suppose now that $\varphi : \mathbb C^2 \to \mathbb C^2$ is a local smooth diffeomorphism near the origin such that its linear part $D\varphi(0)$ at the origin is a symplectic map. Without loss of generality we may assume that $\varphi(0) = 0$. The standard symplectic structure on $\mathbb C^2$ is given by the matrix
$$\Omega = \begin{pmatrix} 0 & I_2 \\ -I_2 & 0 \end{pmatrix},$$
where $I_2$ denotes the identity matrix on $\mathbb R^2$. Similarly, we write
$$D\varphi(0) = \begin{pmatrix} A & B \\ C & D \end{pmatrix}.$$
The condition that $D\varphi(0)$ is symplectic means that $(D\varphi(0))^t\, \Omega\, D\varphi(0) = \Omega$ (where $t$ stands for matrix transposition). Therefore, the real $(2 \times 2)$-matrices $A$, $B = (b_{jk})$, $C$, $D = (d_{jk})$ satisfy
$$A^t C = C^t A, \qquad B^t D = D^t B, \qquad A^t D - C^t B = I_2. \qquad (4)$$
The standard complex structure of $\mathbb C^2$ in real coordinates is given by the matrix
$$J = \begin{pmatrix} 0 & -I_2 \\ I_2 & 0 \end{pmatrix},$$
which corresponds to multiplication by $i$. We perform an additional complex linear change of coordinates $\psi$. Let $\psi : \mathbb R^4 \to \mathbb R^4$ be a linear transformation given by a $4 \times 4$ matrix (5). This matrix commutes with $J$ and so gives rise to a non-degenerate complex linear map of $\mathbb C^2$.
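For the reader's convenience, here is the routine block-matrix computation behind the identities (4); it is a standard check, recorded here only as an illustration:
$$(D\varphi(0))^t\,\Omega\,D\varphi(0) = \begin{pmatrix} A^t & C^t \\ B^t & D^t \end{pmatrix}\begin{pmatrix} 0 & I_2 \\ -I_2 & 0 \end{pmatrix}\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} A^tC - C^tA & A^tD - C^tB \\ B^tC - D^tA & B^tD - D^tB \end{pmatrix}.$$
Equating this matrix to $\Omega$ yields precisely $A^tC = C^tA$, $B^tD = D^tB$ and $A^tD - C^tB = I_2$.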
The differential at the origin of the composition $\psi \circ \varphi$ is given by (6), where we used the identities (4) to simplify the matrix. Further, a direct calculation shows (7) that the matrix $G = (g_{jk})$ is symmetric with positive entries on the main diagonal. The determinant
$$\Delta = g_{11}g_{22} - g_{12}^2 \qquad (8)$$
of $G$ coincides with that of the matrix in (5) corresponding to a $\mathbb C$-linear map of $\mathbb C^2$. Hence $\Delta$ is also positive. Let
$$\Omega' = B(0, \varepsilon) \cap \{\rho' < 0\}, \qquad (9)$$
where
$$\rho' = \rho \circ (\psi \circ \varphi)^{-1} \qquad (10)$$
is a defining function of $M' := (\psi \circ \varphi)(M)$. It follows from (2) and (6) that the function $\rho'$ is strictly plurisubharmonic in a neighbourhood of the origin, and the hypersurface $M'$ is strictly pseudoconvex in a punctured neighbourhood of the origin.

Characteristic foliation and polynomial convexity

Let $X$ be a totally real surface embedded into a real hypersurface $Y$ in $\mathbb C^2$. Define on $X$ a field of lines determined at every $p \in X$ by
$$L_p = T_p X \cap H_p Y,$$
where $H_p Y = T_p Y \cap i(T_p Y)$ denotes the complex tangent line to $Y$ at the point $p$. Integral curves, i.e., curves which are tangent to $L_p$ at each point $p$, of this line field define a foliation on $X$. It is called the characteristic foliation of $X$.

We consider the characteristic foliations of $\Sigma \setminus \{0\} \subset M$ and $(\psi \circ \varphi)(\Sigma) \setminus \{0\} \subset (\psi \circ \varphi)(M)$. Characteristic foliations are invariant under biholomorphisms. Therefore, in order to study the characteristic foliation on $\varphi(\Sigma)$ with respect to $\varphi(M)$, it is sufficient to study the characteristic foliation of $\Sigma' = \psi \circ \varphi(\Sigma)$ with respect to $M'$. Considering its pull-back by $\psi \circ \varphi \circ \pi$ we obtain a smooth vector field in a neighbourhood of the origin in $\mathbb R^2_{(t,s)}$ with a singularity at the origin. For simplicity, its trajectories will also be called the leaves of the characteristic foliation.

Our ultimate goal is to prove the following

Proposition 1. Let $\varphi$ be as in Theorem 1. There exist two rectifiable curves $\gamma_1$ and $\gamma_2$ passing through the origin in $\mathbb R^2_{(t,s)}$ with the following property: if $K \subset \mathbb R^2_{(t,s)}$ is a sufficiently small compact containing the origin and not contained in $\gamma_1 \cup \gamma_2$, then there exists a leaf $\gamma$ of the characteristic foliation on $\Sigma'$ such that $K \cap \gamma \ne \emptyset$ but $K$ does not meet both sides of $\gamma$.

The proof of the proposition will be given in Sections 4-7. It is based on the local theory of dynamical systems and can be read independently from the rest of the paper.

Assuming Proposition 1, we now prove our main results. The proof is based on an argument due to J. Duval [11] and B. Jöricke [16, 17]. Suppose that $\varphi$ satisfies the assumptions of Theorem 1, and $\Sigma' = \varphi(\Sigma)$. First we establish the non-existence of holomorphic discs attached to $\Sigma'$ near the Whitney umbrella. By a holomorphic disc we mean a map $f : \Delta \to \mathbb C^2$ holomorphic in the unit disc $\Delta \subset \mathbb C$ and continuous on $\overline\Delta$. As usual, by its boundary we mean the restriction $f|_{\partial\Delta}$; we identify it with its image.
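Recall the standard definition of the hull used throughout (a well-known notion, recorded here for convenience): for a compact set $K \subset \mathbb C^2$,
$$\hat K = \Big\{ z \in \mathbb C^2 : |P(z)| \le \max_{w \in K} |P(w)| \ \text{for every holomorphic polynomial } P \Big\},$$
and $K$ is called polynomially convex if $\hat K = K$.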
Lemma 1. Let $\varepsilon > 0$ be sufficiently small. Then every holomorphic disc $f$ with boundary $f(\partial\Delta) \subset \Sigma' \cap B(0, \varepsilon)$ is constant.

Proof. Fix $\varepsilon > 0$ such that the function $\rho'$ in (10) is strictly plurisubharmonic in the ball $B(0, 2\varepsilon)$. Suppose that $f$ is not constant. Then the function $\rho' \circ f$ is subharmonic in the unit disc, and the maximum principle implies that $f(\Delta)$ is contained in $\{\rho' < 0\}$. By the uniqueness theorem the set of points $f^{-1}(0)$ has measure zero on the unit circle. Since $\Sigma'$ is totally real outside the origin, it follows by the boundary regularity theorem [7] that $f$ is smooth (even real analytic) up to the boundary outside the pull-back $f^{-1}(0)$. Applying the Hopf lemma (see, for instance, [23]) to the subharmonic function $\rho' \circ f$ on $\Delta$, we conclude that $f$ is transverse to the hypersurface $M'$ at every boundary point different from the origin. Therefore, the complex line tangent to $f(\Delta)$ at a boundary point is transverse to the complex tangent line of $M'$ at this point. In particular, the boundary $K := f(\partial\Delta)$ is transverse to the leaves of the characteristic foliation of $\Sigma'$. This contradicts Proposition 1.

Proof of Theorem 1. Given a compact set $K$, we denote by $\hat K$ its polynomially convex hull. Let $X = \Sigma' \cap \overline{B(0, \varepsilon)}$. A local maximum principle of Rossi [24] applies to compact subsets $E$ of the hull $\hat X$, where the boundary of $E$ is taken with respect to $\hat X$. By choosing $E = X_{ess}$ and $U = \mathbb C^2$ we see that $X_{tr} = X_{ess}$. Therefore, to prove that $X$ is polynomially convex, it is enough to show that $X_{tr}$ is polynomially convex.

By an analytic curve in an open set $U$ in $\mathbb C^2$ we mean an irreducible complex 1-dimensional analytic subset of $U$.

Lemma 2 (Duval [11]). Let $p \in X \setminus \{0\}$, and let $\Omega'$ be given as in (9). Suppose there exist two continuous families $\{V_t\}_{t\in[0,1)}$ and $\{W_t\}_{t\in[0,1)}$ of analytic curves in an open neighbourhood of $\overline{\Omega'}$ with the following properties: (i) $V_0$ and $W_0$ meet $X$ transversely at $p$ and with opposite signs of intersection, and (ii) for $t > 0$, the varieties $V_t$ and $W_t$ are disjoint from $X_{tr}$. Then $p \notin X_{tr}$.

Duval's original result is stated for the $O(\Omega)$-hull of a smooth totally real surface $X \subset \partial\Omega$, where $\Omega \subset \mathbb C^2$ is a strictly pseudoconvex domain. However, the proof is also valid in our situation. Indeed, suppose $p \in X \setminus \{0\}$, and $\{V_t\}$, $\{W_t\}$ are as in Lemma 2. Duval's construction produces an arbitrarily small neighbourhood $U$ of $p$ and a complex 1-parameter family $\{C_a\}$ of analytic curves in $U$ such that the $C_a$ fill out $U \setminus X$, avoiding $X$, and such that each $C_a$ can be swept out of $\Omega'$ through a continuous family. Using this family we are now in a position to prove that $p \notin X_{tr}$. For that we apply the classical characterization of polynomially convex hulls due to Oka (see [21], [25], [26]), which we state in the form suitable for our purposes: for a compact set $X \subset \mathbb C^2$, a point $p \in \mathbb C^2$ does not lie in the polynomially convex hull $\hat X$ of $X$ if there exist a neighbourhood $U$ of $p$ and a continuous family $\{S_t\}_{t\in[0,1]}$ of analytic curves in $U$ with the properties that (i) $p \in S_0$, (ii) the $S_t$ are disjoint from some open neighbourhood of $X$, (iii) for any compact $K \subset U$ there exists $t_K < 1$ such that $S_t \cap K = \emptyset$ for $t_K < t \le 1$, and (iv) there exists $t_0 < 1$ such that $S_t \cap \hat X = \emptyset$ for $t_0 < t \le 1$.

In our situation we take any point in $U \setminus X$ and choose the family $\{S_t\}$ to be a continuous subfamily of $\{C_a\}$ that initially passes through this point and then leaves $\Omega'$. Note that by Lemma 1, the polynomial hull of $X$ is contained in $\overline{\Omega'}$, and therefore condition (iv) in Oka's characterization holds. This shows that no point near $p$ can be in $X_{ess}$, and therefore $p$ does not belong to $X_{tr}$. This verifies Lemma 2.
Let $\gamma_1$ and $\gamma_2$ be as in Proposition 1, and suppose that $X_{tr}$ is not contained in the union of these curves. Then, by Proposition 1, there exists a leaf $\gamma$ of the characteristic foliation that touches $X_{tr}$ at a point $p \in X \setminus \{0\}$. There exist a small neighbourhood $U$ of $p$ and local holomorphic coordinates centred at $p$ such that in these coordinates $M \cap U$ is strictly convex. Again, the local analysis of Duval [11] produces two families of analytic curves in $U$ that satisfy the conditions of Lemma 2. The intersection of each curve with $\Omega'$ is compactly contained in $U$, and so they can be considered as curves in a neighbourhood of $\overline{\Omega'}$. By Lemma 2, $p \notin X_{tr}$. It follows from the above considerations that $X_{tr}$ is contained in the union $\gamma_1 \cup \gamma_2$. A rectifiable arc is polynomially convex; moreover, if $Y$ is compact and polynomially convex, and $\Gamma$ is a compact curve, then the set $\widehat{Y \cup \Gamma} \setminus (Y \cup \Gamma)$ is either empty or contains a purely one-dimensional analytic subvariety of $\mathbb C^2 \setminus (Y \cup \Gamma)$ (see [26], p. 122). By taking $Y$ and $\Gamma$ to be our rectifiable curves $\gamma_j$, we see that their union cannot bound a one-dimensional variety, and therefore $X_{tr}$ is polynomially convex. Hence $X_{ess}$ is empty, which implies that $X$ is polynomially convex. Theorem 1 is proved.

Proof of Corollary 1. Let $\varphi(0) = p$. By Theorem 1 there exists $\varepsilon > 0$ such that $\varphi(\Sigma) \cap \overline{B(p, \varepsilon)}$ is polynomially convex. We may further assume that $\varphi(\Sigma) \cap \partial B(p, \varepsilon)$ is a rectifiable curve. By a result of J. Anderson, A. Izzo, and J. Wermer [3, Thm. 1.5], if $X$ is a polynomially convex compact subset of $\mathbb C^n$, and $X_0$ is a compact subset of $X$ such that $X \setminus X_0$ is a totally real submanifold of $\mathbb C^n$ of class $C^1$, then continuous functions on $X$ can be approximated by polynomials if and only if this can be done on $X_0$. We apply this result to $X = \varphi(\Sigma) \cap \overline{B(p, \varepsilon)}$ and $X_0 = \{p\} \cup (\varphi(\Sigma) \cap \partial B(p, \varepsilon))$. Indeed, $X_0$ is polynomially convex, and furthermore, by [26], Thm. 3.1.1 and Cor. 3.1.2, continuous functions on $X_0$ can be approximated by polynomials. From this the corollary follows.

The rest of the paper is devoted to the proof of Proposition 1.

Reduction to a dynamical system

In this section we deduce the dynamical systems describing the characteristic foliations on $\Sigma$ and $\Sigma'$. In Sections 6 and 7 we will discuss the topological behaviour of these foliations near the origin.

4.1. Foliation on $\Sigma$. The tangent plane to $\Sigma \setminus \{0\}$ is spanned by the vectors
$$X_t = d\pi\Big(\frac{\partial}{\partial t}\Big) = (s,\ 2t^2,\ 2t,\ 0), \qquad X_s = d\pi\Big(\frac{\partial}{\partial s}\Big) = (t,\ 0,\ 0,\ 1),$$
where $d\pi$ is the differential of the map $\pi$. The directional vector of the characteristic line field is determined from the equation
$$X = \alpha X_t + \beta X_s,$$
where $\alpha = \alpha(t, s)$, $\beta = \beta(t, s)$ are some smooth functions on $\mathbb R^2 \setminus \{0\}$, and the vector $X$ belongs to the complex tangent $H_{\pi(t,s)}M$. Multiplication by $i$ of a vector in $\mathbb C^2$ corresponds to multiplication by $J$ of the corresponding vector in $\mathbb R^4$. For $v \in T_pM$, the inclusion $v \in H_pM$ holds if and only if $v, iv \in T_pM$. Therefore,
$$\langle X, J\nabla\rho \rangle = 0,$$
where $\langle \cdot\,, \cdot \rangle$ is the standard Euclidean product in $\mathbb R^4$, and $\nabla\rho$ is the gradient of the function $\rho$. Therefore, we can choose
$$\alpha = \langle X_s, J\nabla\rho \rangle, \qquad \beta = -\langle X_t, J\nabla\rho \rangle.$$
It follows that the characteristic foliation on $\Sigma \setminus \{0\}$ (or, more precisely, its pull-back on $\mathbb R^2 \setminus \{0\}$ by the parametrization map $\pi$) is given by the system of ODEs of the form
$$\dot t = \alpha(t, s), \qquad \dot s = \beta(t, s), \qquad (14)$$
where the dot denotes the derivative with respect to the time variable $\tau$.
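For the standard umbrella this computation can be carried out explicitly. Assuming the defining function $\rho$ as reconstructed in (2), one finds along $\Sigma$ (the overall sign depends on the chosen orientation of the leaves):
$$\nabla\rho = \big(2x,\ \tfrac92 u,\ -v^2 - 3y^2,\ -2yv\big), \qquad J\nabla\rho\big|_{\pi(t,s)} = \big(s^2 + 3t^4,\ 2t^2 s,\ 2ts,\ 3t^3\big),$$
$$\alpha = \langle X_s, J\nabla\rho\rangle = 3t^3 + ts^2 + 3t^5, \qquad \beta = -\langle X_t, J\nabla\rho\rangle = -(4t^2 s + s^3 + 7t^4 s),$$
which is the system (17) analysed in Section 6.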
4.2. Foliation on $\Sigma'$. Let $f : \mathbb R^2 \to \mathbb R^4$ be given by $f = \psi \circ \varphi \circ \pi$, where we use the notation of the previous section. The directional vector of the characteristic foliation on $\Sigma'$ is determined by
$$X' = \alpha X'_t + \beta X'_s,$$
where $X'_t = \partial f/\partial t$ and $X'_s = \partial f/\partial s$, and $\alpha = \alpha(t, s)$, $\beta = \beta(t, s)$ are some smooth functions on $\mathbb R^2 \setminus \{0\}$ which are chosen in such a way that the vector $X'$ belongs to the complex tangent $H_{f(t,s)}M'$. We have
$$\langle X', J\nabla\rho' \rangle = 0,$$
where $\rho'$ is a defining function of $M'$, and the gradient $\nabla\rho'$ is expressed in terms of $(t, s)$ using the parametrization $f$. Therefore, we can choose
$$\alpha = \langle X'_s, J\nabla\rho' \rangle, \qquad \beta = -\langle X'_t, J\nabla\rho' \rangle.$$
It follows that the characteristic foliation on $\Sigma'$ is determined by the system of ODEs of the form
$$\dot t = \alpha(t, s), \qquad \dot s = \beta(t, s).$$
We write $f(t, s) = (f_1(t, s), \ldots, f_4(t, s))$, where, using (6) and (1), we may express each $f_j$ as a power series in $(t, s)$ with real coefficients. Denote by $e_{jk}$ the entries of the matrix $E$ in (6). From these expansions we immediately obtain the terms of low degree in the components of $f$.

The defining equation of $M'$ can be chosen to be $\rho \circ (\psi \circ \varphi)^{-1}$, where $\rho$ defines $M$ as in (2). Let $(x', u', y', v')$ be the coordinates in the target domain of $\psi \circ \varphi$. Expressing $\rho'$ in these coordinates gives the expansion (26). Note that in (26) the only quadratic terms are $x'^2$ and $\tfrac94 u'^2$. By taking partial derivatives of this expression with respect to $x'$, $u'$, $y'$ and $v'$, and expressing the resulting vector in terms of $(t, s)$, we obtain the coordinates of the vector $\nabla\rho'$. To determine the phase portrait of the characteristic foliation we will only need some low order terms in the power series of $\alpha$ and $\beta$. Therefore, instead of explicit differentiation of (26), we will employ a different strategy for computing the coefficients of the terms of lower degree in the $(t, s)$-Taylor expansions of $\alpha$ and $\beta$.

4.3. The power series of $\alpha$. We have $\alpha = \langle X'_s, J\nabla\rho' \rangle$. We proceed in several steps, computing the coefficients in the expansion of $\alpha$. To begin with, there cannot be a free term in the power series of $\alpha$, because every term in $\nabla\rho'$ necessarily has positive degree in $t$ or $s$.

Term $t$: Since no component of $\nabla\rho'$ can contain a degree zero term or the monomial $t$, there is no term $t$ in $\alpha$.

Term $s$: The first two components of $X'_s$ do not contain free terms; therefore, the monomial $s$ can appear in $\alpha$ only if $R_x$ or $R_u$ contains it. By inspection of (18)-(21) we see that $y'$ and $v'$ are the only terms that can produce the monomial $s$. Therefore, for $s$ to appear in $R_x$ or $R_u$, the function $\rho'$ must contain at least one of the terms $x'y'$, $x'v'$, $u'y'$ or $u'v'$. However, by (26) none of these terms exists. Thus, there is no monomial $s$ in the power series of $\alpha$.

Term $ts$: We inspect the terms in $X'_s$ of degree lower than $ts$. These appear in $(X'_s)_1$ (terms $t$ and $s$), in $(X'_s)_2$ (term $s$), in $(X'_s)_3$ (a free term, $t$ and $s$), and in $(X'_s)_4$ (a free term, $t$ and $s$). Therefore, for $ts$ to appear in $\alpha$, at least one of the following options must occur: (1) either $R_x$ or $R_u$ has $t$, $s$ or $ts$; (2) $R_y$ has either $t$ or $s$; (3) $R_v$ has $t$. Of the above three options only (1) can happen: $\rho'$ contains the term $x'^2$, and therefore $R_x$ contains $2ts$. It now follows from (18), (23) and (27) that $\alpha_{11} = -2g_{12}$.
To simplify further considerations, we note that the term $t$ cannot occur in any of the components of the vector $\nabla\rho'$.

Term $t^2$: By inspection of $X'_s$, we conclude that either $R_x$ or $R_u$ would have to contain the term $t^2$, so $\rho'$ must have one of $x'y'$, $x'v'$, $u'y'$ or $u'v'$, none of which appears. This means that $\alpha$ does not contain the term $t^2$.

Term $s^2$: By inspection of $X'_s$, the following options are possible: (1) either $R_x$ or $R_u$ has $s$ or $s^2$; (2) either $R_y$ or $R_v$ has the term $s$. Option (2) is impossible, but $\rho'$ can have the terms $u'^2$, $u'v'^2$ or $u'y'^2$, which gives (1). We have an expression for $\alpha_{02}$ which depends on the coefficients of the Taylor expansion of $(\psi \circ \varphi)^{-1}$.

Term $t^3$: By inspection of $X'_s$, the following options are possible: (1) either $R_x$ or $R_u$ has at least one of $t^2$ or $t^3$; (2) $R_y$ has $t^2$. Option (2) can happen only if $\rho'$ had $y'^2$ or $y'v'$, which is impossible. For the same reason, in option (1) the terms $R_x$ or $R_u$ cannot produce $t^2$. The only term in $\nabla\rho'$ that can produce $t^3$ is $u'$. Therefore, the only possibility in (1) is the term $t^3$ in $R_u$, which indeed occurs since $\rho'$ contains $u'^2$. It follows that $\alpha_{30} = -3g_{22}$. Thus,
$$\alpha(t, s) = -2g_{12}\,ts + \alpha_{02}\,s^2 - 3g_{22}\,t^3 + \sum_{j+k \ge 3,\ (j,k) \ne (3,0)} \alpha_{jk}\, t^j s^k.$$

4.4. The power series of $\beta$. We have $\beta = -\langle X'_t, J\nabla\rho' \rangle$. Again, there cannot be a free term in $\beta$, because every term in $\nabla\rho'$ necessarily has positive degree in $t$ or $s$. Further, no component of $\nabla\rho'$ can produce a term $t$, and so the power series of $\beta$ cannot contain the monomial $t$.

Term $s$: Since no component of $X'_t$ contains a free term, $\beta$ cannot have the monomial $s$.

Term $ts$: By inspection of $X'_t$ we conclude that either $R_x$ or $R_u$ would have to contain the term $s$, which is impossible. Hence, $\beta$ does not contain the monomial $ts$.

Terms $t^2$ and $s^2$: Analogous considerations show that these terms cannot appear in $\beta$.

Term $t^2 s$: By inspection of $X'_t$, the following is possible for $R$: (1) $R_x$ has at least one of $t^2$, $s$, or $ts$; (2) $R_u$ has at least one of $t^2$, $s$, or $ts$; (3) $R_y$ has $t^2$; (4) $R_v$ has $s$.

Term $ts^2$: This term can appear in $\beta$; its coefficient $\beta_{12}$ depends on the coefficients of the Taylor expansion of $(\psi \circ \varphi)^{-1}$.

Term $t^3$: By inspection of $X'_t$, the only option is that either $R_x$ or $R_u$ has the term $t^2$. This is, however, not possible.

Term $t^4$: The possibilities for $R$ are as follows: (1) $R_x$ has at least one of $t^2$ or $t^3$; (2) $R_u$ has at least one of $t^2$ or $t^3$; (3) $R_v$ has $t^2$. Option (3) cannot occur. The only possible option in (1) or (2) is that $t^3$ appears in $R_u$. This comes from the term $u'^2$ in $\rho'$. It follows that $\beta_{40} = 6g_{12}$.

Term $s^3$: The coefficient $\beta_{03}$ again depends on the Taylor coefficients of $(\psi \circ \varphi)^{-1}$. Combining everything together, we get
$$\beta(t, s) = 4g_{11}\,t^2 s + \beta_{12}\,ts^2 + \beta_{03}\,s^3 + 6g_{12}\,t^4 + \sum_{j+k > 3,\ (j,k) \ne (4,0)} \beta_{jk}\, t^j s^k.$$

We note that if $\varphi$ is merely a smooth diffeomorphism, then the above calculations give the values of the jets of $\alpha$ and $\beta$ at the origin of the corresponding orders. In either case the characteristic foliation on $\Sigma'$ is given by
$$\dot t = \alpha(t, s), \qquad \dot s = \beta(t, s). \qquad (28)$$
It is easy to see that for a generic symplectomorphism $\varphi : (x, u, y, v) \mapsto (x', u', y', v')$ and a generic $\psi$ the coefficients $\alpha_{02}$, $\beta_{12}$, $\beta_{03}$ do not vanish. Indeed, if $\psi$ is close to the identity map and the component $u'$ of $\varphi$ contains the term $av^2$ with $a \ne 0$, then $f^2_{02} \ne 0$ and $\alpha_{02}$, $\beta_{12}$, $\beta_{03}$ do not vanish. Therefore, they do not vanish generically.

Remark. It follows from the above considerations that our restriction on $\varphi$ to be generic involves only the 2-jet of $\varphi$ at the origin. In other words, it suffices to require in Theorem 1 that $\varphi$ has a generic 2-jet at the origin.
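Collecting the coefficients computed above, the lowest-order part of system (28) reads (the dots denote the higher-order terms):
$$\dot t = -2g_{12}\,ts + \alpha_{02}\,s^2 - 3g_{22}\,t^3 + \cdots, \qquad \dot s = 4g_{11}\,t^2 s + \beta_{12}\,ts^2 + \beta_{03}\,s^3 + 6g_{12}\,t^4 + \cdots$$
For $\varphi = \psi = \mathrm{id}$ one has $g_{12} = \alpha_{02} = \beta_{12} = \beta_{03} = 0$ and $g_{11} = g_{22} = 1$, and these leading terms agree, up to reversing the time orientation, with the quadratic and cubic part of system (17).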
Lemma 3. Let $\varphi$ be a local symplectomorphism near the origin, and let $X$ be the vector field near the origin in $\mathbb R^2$ corresponding to the characteristic foliation on $\Sigma'$. Then $X$ does not vanish outside the origin.

Generalities on planar vector fields

For the proof of Proposition 1 we need to determine the topological structure of the orbits, or maximal integral curves, associated with the vector fields defined by (14) and (28). Both systems have a higher order degeneracy at the origin (the linear part vanishes), and consequently the origin is a nonelementary singularity of (14) and (28). Therefore, standard results, such as the Hartman-Grobman theorem, do not apply here. Instead, we will use some more sophisticated tools from dynamical systems. We will be primarily interested in understanding the topological picture of (14) and (28) near the origin up to a homeomorphism preserving the orbits. In this section we outline relevant results and recall some common terminology.

The local phase portrait of a vector field near a nonelementary isolated singularity can be determined through a finite sectorial decomposition. This means that a neighbourhood of the singularity is divided into a finite number of sectors with certain orbit behaviour in each sector. If the vector field has at least one characteristic orbit (i.e., an orbit approaching the singularity in positive or negative time with a well-defined limit slope), then the boundaries of the sectors can be chosen to be characteristic orbits. The overall portrait is then understood by gluing together the topological picture in each sector. The general result due to Dumortier [8] (see also [9]) can be stated as follows: suppose that a $C^\infty$-smooth vector field $X$ singular at the origin in $\mathbb R^2$ satisfies Lojasiewicz's inequality
$$|X(x)| \ge c|x|^k, \quad c > 0,\ k \in \mathbb N,$$
for $x$ in some neighbourhood of the origin in $\mathbb R^2$. Then $X$ has the finite sectorial decomposition property; that is, the origin is either a centre (all orbits are periodic), a focus/node (all orbits terminate at the origin in positive or negative time), or there exists a finite number of characteristic orbits which bound sectors with a well-defined orbit behaviour (hyperbolic, parabolic, or elliptic). If the vector field $X$ has a characteristic orbit, then its phase portrait is determined by its jet of finite order $k$, in the sense that any other vector field with the same jet of order $k$ at the origin has a phase portrait homeomorphic to that of $X$. Further, whether the vector field $X$ has a characteristic orbit depends only on a jet of $X$ of some finite order.

The original proof of the above result in [8] is based on desingularization by means of successive (homogeneous) blow-ups. After each blow-up the singularity is replaced by a circle, and after a finite number of such blow-ups one obtains a vector field with only non-degenerate singularities. The construction of the blow-up maps depends only on a finite order jet of the original vector field at the origin. From the configuration of the singularities of the modified system on the preimage of the origin under the composition of blow-ups, it is always possible to deduce whether the original vector field has a characteristic orbit. If such an orbit exists, then the singularity is not a centre or a focus, and the phase portrait is determined by a jet of finite order. Further, the Lojasiewicz inequality holds for any real analytic vector field in a neighbourhood of an isolated singularity (see, e.g., [4]) and, in particular, in our case, in view of Lemma 3.
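For orientation, the simplest homogeneous blow-up is the polar map; writing the vector field as $\dot t = \alpha$, $\dot s = \beta$, one has in polar coordinates
$$t = r\cos\theta, \quad s = r\sin\theta, \qquad \dot r = \alpha\cos\theta + \beta\sin\theta, \qquad \dot\theta = \frac{\beta\cos\theta - \alpha\sin\theta}{r}.$$
If $\alpha$ and $\beta$ vanish to order $k$ at the origin, dividing the lifted field by $r^{k-1}$ produces a field with isolated, and simpler, singularities on the circle $\{r = 0\}$ which replaces the origin.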
Alternatively, it is possible to use quasihomogeneous blow-ups, which are chosen according to the Newton diagram associated with $X$ (see [22]). The advantage is that this gives a computational algorithm for constructing the sectorial decomposition for a particular system. A detailed discussion of this approach for real analytic systems is given in Bruno [6] in the language of normal forms. Using Bruno's method we will show that for a real analytic $\varphi$ in general position, the vector field defined by (28) always has a characteristic orbit, and its phase portrait near the origin is a saddle.

If in Theorem 1 the map $\varphi$ is smooth, then the vector field corresponding to the characteristic foliation is only smooth, and the Lojasiewicz inequality imposes an additional assumption on the vector field, and therefore on $\varphi$. The Lojasiewicz condition depends on the jet of the vector field at the origin and holds for all jets outside a set of infinite codimension in the space of jets, but it is not clear whether for a generic smooth symplectomorphism the inequality is satisfied. However, assuming that the Lojasiewicz condition does hold, the topological picture of the characteristic foliation is determined by its finite jet at the origin. Therefore, we may consider a polynomial vector field obtained by truncation of (28) at sufficiently high order without distorting the phase portrait of the system. After that we may apply Bruno's method to determine its geometry. Thus, in Theorem 1 we may assume that $\varphi$ is a generic smooth symplectomorphism such that the vector field corresponding to the characteristic foliation satisfies the Lojasiewicz inequality.

If in Theorem 1 the map $\varphi$ is a real analytic diffeomorphism with $D\varphi(0)$ symplectic, then all of the arguments go through provided that the vector field (28) vanishes at the origin only. The latter holds for the following reason: consider near the origin the complexification $F$ of the real analytic map $f = \psi \circ \varphi \circ \pi$. Since $f$ has rank 2 outside the origin, the Jacobian of $F$ does not vanish on $\mathbb R^2 \setminus \{0\}$, and therefore $F$ is a local biholomorphism near any point of $\mathbb R^2 \setminus \{0\}$. But this implies that $\Sigma' \setminus \{0\}$ is totally real, and therefore the characteristic foliation has no singularities outside the origin. Thus, Theorem 1 holds under the assumption that $\varphi$ is a generic real analytic diffeomorphism with $D\varphi(0)$ symplectic.

In the remaining part of this section we outline Bruno's algorithm, while the actual numerical calculations for (14) and (28) are presented in the next sections.
Let $X$ be a real analytic vector field on $\mathbb R^2$ given by
$$\dot t = f_1(t, s), \qquad \dot s = f_2(t, s). \qquad (29)$$
We write
$$(f_1, f_2) = \sum_{Q} \big(f_{1Q}\, t,\ f_{2Q}\, s\big)\,(t, s)^Q, \qquad (30)$$
where $Q = (q_1, q_2)$ is a multi-index with integer entries $q_1, q_2 \ge -1$, and $(t, s)^Q = t^{q_1} s^{q_2}$. The Newton polygon $\Gamma$ is defined as the convex hull of the set
$$\bigcup_{Q \in D} \big(Q + \mathbb R^2_+\big), \qquad \text{where } D = \{Q : (f_{1Q}, f_{2Q}) \ne 0\}$$
is the support of $X$. The Newton diagram, or the open Newton polygon in the terminology of [6], of $X$ is the union of the compact edges of the Newton polygon $\Gamma$. This can also be obtained as follows. Let $q_{2*} = \min\{q_2 : (q_1, q_2) \in D\}$ and $q_{1*} = \min\{q_1 : (q_1, q_{2*}) \in D\}$. The point $\Gamma_1 := (q_{1*}, q_{2*})$ is the left boundary point of the intersection of $D$ with the horizontal support line $q_2 = q_{2*}$. Consider the non-vertical support line $L$ for $D$ through $\Gamma_1$, and continue this procedure, passing through the successive vertices, until reaching the point $Q^* = (q^*_1, q^*_2)$ which is the lowest point of $D$ on the left vertical support line of $D$, i.e., $q^*_1 = \min\{q_1 : (q_1, q_2) \in D\}$ and $q^*_2 = \min\{q_2 : (q^*_1, q_2) \in D\}$. Denote this last point by $\Gamma_k$. For every $j = 1, \ldots, k-1$ we denote by $\Gamma^{(1)}_j$ the edge joining the vertices $\Gamma^{(0)}_j = \Gamma_j$ and $\Gamma^{(0)}_{j+1} = \Gamma_{j+1}$. For every element $\Gamma^{(k)}_j$ (here $j$ is the enumeration as described above, $k = 0$ for vertices, $k = 1$ for edges) of the Newton diagram, there is a corresponding sector $U^k_j$ in the phase space $\mathbb R^2_{(t,s)}$, so that together the sectors form a neighbourhood of the origin (here the boundaries of the sectors are not necessarily integral curves). In each $U^0_j$ one brings the system to a normal form, and in $U^1_j$ one uses power transformations (quasihomogeneous blow-ups) to reduce the problem to the study of elementary singularities of the transformed system. After that the results in each sector can be glued together to obtain the overall phase portrait of the system near the origin.

Several theorems on normal forms can be used to determine the phase portrait in a neighbourhood of an elementary singular point. We state those of them that will be relevant to us. Consider the system of two differential equations in two variables of the form
$$\dot x_i = x_i\, f_i(X), \qquad f_i(X) = \sum_{Q \in V} f_{iQ}\, X^Q, \qquad X = (x_1, x_2),\ i = 1, 2, \qquad (31)$$
where $V \subset \mathbb Z^2$ is to be given.

Second Normal Form [6, Ch. II, §2, Thm 1, p. 128]: Let $R^*$ and $R_*$ be two vectors in $\mathbb R^2$ contained in the second and the fourth quadrant respectively, and let the convex cone $V$ bounded by $R^*$ and $R_*$ and containing the first quadrant be such that its angle is less than $\pi$. Then system (31) can be transformed by a formal change of variables
$$x_i = y_i\,(1 + h_i(Y)), \qquad Y = (y_1, y_2), \qquad (32)$$
into a normal form
$$\dot y_i = y_i\, g_i(Y), \qquad (33)$$
where the support of the $g_i$ is contained in $V$.

Third Normal Form [6, Ch. II, §2, Thm 2, p. 134]: Suppose that the right-hand side of (31) contains series in integral, non-negative powers of $x_2$, with the support entirely contained within the sector bounded by the vectors $R^* = (r^*_1, r^*_2)$ and $R_* = (r_{1*}, -1)$, where $r^*_1 < 0 < r^*_2$, $r_{1*} > 0$, and $|r^*_1/r^*_2| < r_{1*}$. This means that $f_{1Q}$ in $f_1(X)$ vanishes unless the vector $Q$ lies in this sector; denote by $^1V(X)$ the class of such series $f_1$. The coefficient $f_{2Q}$ in $f_2(X)$ of (31) vanishes unless the vector $Q$ lies either in $^1V$, or along the ray $\{q_2 = -1,\ q_1 \ge r_{1*}\}$; denote the class of such series by $^2V(X)$. Then the following holds: if the series $f_i$ in (31) are of class $^iV(X)$, then there exists a formal change of coordinates (32), where the $h_i$ are series of class $^iV(Y)$, which transforms (31) into a system (33) in which the $g_i$ are series of class $^iV(Y)$ consisting only of resonant terms $g_{iQ}\, Y^Q$ with $\langle Q, \Lambda \rangle = 0$.

In the cases which we consider below, the normalizing changes of coordinates are analytic or at least $C^\infty$-smooth local diffeomorphisms, see [6]. This is sufficient in order to study the local topological behaviour of orbits.
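A simple example of a power transformation with $\det A = 1$ is the classical directional blow-up, corresponding to $A = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}$:
$$y_1 = t, \qquad y_2 = t^{-1}s, \qquad \text{with inverse } t = y_1,\ s = y_1 y_2,$$
under which a monomial $t^{q_1}s^{q_2}$ becomes $y_1^{q_1+q_2}\, y_2^{q_2}$, i.e., the support transforms by the linear map $(q_1, q_2) \mapsto (q_1 + q_2, q_2)$.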
We now consider some important special cases. For a vertex Q = Γ^{(0)}_j, the corresponding sector U^0_j is the sector bounded by the directions R^* and R_*, the unit (i.e., with coprime integer coordinates) directional vectors of the edges adjacent to Q. The system can be brought to the Principal or the Second Normal Form by introducing a new time variable τ_1 so that dτ_1 = (t, s)^Q dτ. A particularly simple case occurs when Q = (q_1, q_2) = Γ^{(0)}_j is the first or the last point of Γ and is not contained in the first quadrant. In this situation one of the coordinates of Q equals −1. Say, if q_2 = −1, then one takes R_* = (0, 1), and the corresponding normal form has vertical integral curves. It follows that in this sector the original system (29) does not have any integral curves terminating at the origin. Similarly, if q_1 = −1, then R^* = (1, 0), and again the system does not have any characteristic orbits in the corresponding sector.

Suppose now that Γ^{(1)}_j is an edge of Γ, and let R = (r_1, r_2) be its unit directional vector; the corresponding sector U^1_j in the phase space is defined analogously. Consider a power transformation Y = X^A, where A is a matrix with integer coefficients and determinant equal to 1. In matrix form we write X = (t, s), so that (29) can be rewritten as a system for ln X with monomials X^Q = t^{q_1} s^{q_2}. The power transformation then reads ln Y = A ln X, and it takes (37) into a system of the same form in which X^Q = Y^{(A^t)^{-1} Q} (the superscript t stands for transposition). After division by the maximal power of y_1 one obtains a new system. Here the y_2-axis corresponds to {t = s = 0} in the original coordinates, and therefore one needs to investigate the new system in a neighbourhood of the y_2-axis. The singularities of the new system are simpler than those of the original one, and therefore an induction procedure can be used. Quite often the topological behaviour of the system in U^1_j(ε) can be determined by considering the truncation of the system, which is defined by taking the sum in (30) only over the vertices contained in Γ^{(1)}_j.
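To make the power transformation concrete, here is an illustrative choice (ours, not taken from the source) adapted to an edge with directional vector R = (−1, 1), such as the edge of the standard umbrella below:

\[
A = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}, \qquad \det A = 1, \qquad
Y = X^A \iff y_1 = t,\ \ y_2 = t^{-1} s,
\]

with inverse t = y_1, s = y_1 y_2, the classical blow-up substitution. Under the induced map Q ↦ (A^t)^{-1} Q the exponents (2, 0) and (0, 2) go to (2, 0) and (2, 2), so the edge joining them indeed becomes vertical, in agreement with the discussion of Case 3 below.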
Phase portrait of the standard umbrella

Since the standard umbrella corresponds to the non-generic case where φ is the identity map, we study its characteristic foliation separately. We rewrite system (14) in the form (17) and expand its right-hand sides as series with coefficients f_{iQ}, where Q = (q_1, q_2) is a multi-index with integer entries and (t, s)^Q = t^{q_1} s^{q_2}. The Newton diagram Γ consists of two vertices Γ^{(0)}_1 = (2, 0) and Γ^{(0)}_2 = (0, 2) and the line segment (edge) Γ^{(1)}_1 between them (see Fig. 1). For each element of the Newton diagram (the two vertices and the edge), there is a corresponding sector in the phase space R^2_{(t,s)}, so that together they form a neighbourhood of the origin. Accordingly we consider three cases.

Case 1. First consider the vertex (2, 0). We define R_* = (1, 0) and R^* = (−1, 1). The first vector arises since Γ^{(0)}_1 = (2, 0) belongs to the horizontal support line of D, and the second one is the unit vector in the direction of the edge of Γ starting from Γ^{(0)}_1. We make the change of time dτ_1 = t^2 dτ. This yields the system (39). The Newton diagram D corresponding to (39) has vertices (−2, 2) and (2, 0); in particular, it is contained in the sector V (with angle < π) bounded by the rays generated by R_* and R^*. Therefore, for sufficiently small ε, in the corresponding sector there exists a smooth change of variables (t, s) → (y_1, y_2) putting the initial system into the Second Normal Form of Bruno. In the new coordinates the system has the form (40), where the coefficients g_{1Q} and g_{2Q} are all zero except those for which −3q_1 + 4q_2 = 0. The line L := {−3y_1 + 4y_2 = 0} determined by the linear part of system (40) intersects the interior of the sector V (see Fig. 2). It follows (see Bruno [6], p. 132) that the system defined by (40), and hence by (39), is a saddle, i.e., each ray {±y_j > 0} (j = 1, 2) is an integral curve, and in each quadrant of R^2 the integral curves are homeomorphic to hyperbolas. This is the description of system (17) in the sector U_1.

Case 2. Consider now the second vertex (0, 2). Again following [6], define R_* = (1, −1) and R^* = (0, 1). The first vector is the direction of the edge starting at the vertex (0, 2); the second one corresponds to the vertical support line and is imposed since this vertex is the left boundary point of the Newton diagram. The corresponding sector, where the change of dependent variables will be performed, is defined analogously. The change of time dτ_1 = s^2 dτ transforms system (17) into a system that is analysed in the same way as in Case 1.

Case 3. Consider finally the edge Γ^{(1)}_1. Under the power transformation (42) the edge of Γ becomes vertical in the new system. Performing as above a change of time, we may divide both sides by y_1^2 to obtain a new system (43). Under the change of variables (42), the line y_1 = 0 corresponds to the origin, and therefore we are interested in the integral curves of system (43) that intersect the line y_1 = 0 at points with y_2 ≠ 0. The sets {y_1 = 0, ±y_2 > 0} are integral curves of (43), but they correspond to t = s = 0 in the original system. According to Bruno ([6], p. 141), the points on the y_2-axis can be either simple points, in which case the integral curves of (43) near such points are parallel to the y_2-axis, or singular points. The truncation of system (43) (see the end of the previous section) contains only the terms that correspond to the edge under consideration and its vertices, and thus has the form (44), where f′_{20}(y_2) = 7 + 2y_2^2 (we follow the notation of [6]). Singular points are determined from the equation f′_{20}(y_2) = 0. In our case f′_{20}(y_2) is strictly positive. Therefore, in (44) all points with y_1 = 0, y_2 ≠ 0 are simple points, and near the y_2-axis the integral curves are parallel to it.

With this information the integral curves in all sectors can be glued together. It is readily verified that the phase portrait of system (17) is in fact a saddle: the integral curves in each quadrant of R^2 are homeomorphic to hyperbolas and do not intersect the coordinate axes (see Fig. 4).

The origin is an elementary singularity of system (54) (the linear part is not zero). To determine the dynamics we need to understand the signs of the coefficients of the linear part, i.e., of λ_1 = −(2g...) and of λ_2.

Claim. λ_1 and λ_2 are of opposite sign both for c_+ and c_−.
First note that λ_1 and λ_2 depend only on the coefficients g_{jk}, i.e., only on the linear part of the map ψ ∘ φ. Therefore, it is enough to prove the claim for linear symplectomorphisms. If φ is the identity map, then it is easy to see that λ_1 and λ_2 are of opposite sign.
Peak cortisol response to corticotropin-releasing hormone is associated with age and body size in children referred for clinical testing: a retrospective review

Background
Corticotropin-releasing hormone (CRH) testing is used to evaluate suspected adrenocorticotropic hormone (ACTH) deficiency, but the clinical characteristics that affect response in young children are incompletely understood. Our objective was to determine the effect of age and body size on cortisol response to CRH in children at risk for ACTH deficiency referred for clinical testing.

Methods
Retrospective, observational study of 297 children, ages 30 days – 18 years, undergoing initial, clinically indicated outpatient CRH stimulation testing at a tertiary referral center. All subjects received 1 mcg/kg corticorelin per institutional protocol. Serial, timed ACTH and cortisol measurements were obtained. Patient demographic and clinical factors were abstracted from the medical record. Patients without full recorded anthropometric data, pubertal assessment, ACTH measurements, or clear indication for testing were excluded (number remaining = 222). Outcomes of interest were maximum cortisol after stimulation (peak) and cortisol rise from baseline (delta). Bivariable and multivariable linear regression analyses were used to assess the effects of age and size (weight, height, body mass index (BMI), body surface area (BSA), BMI z-score, and height z-score) on cortisol response while accounting for clinical covariates including sex, race/ethnicity, pubertal status, indication for testing, and time of testing.

Results
Subjects were 27 % female, with mean age of 8.9 years (SD 4.5); 75 % were pre-pubertal. Mean peak cortisol was 609.2 nmol/L (SD 213.0); mean delta cortisol was 404.2 nmol/L (SD 200.2). In separate multivariable models, weight, height, BSA and height z-score each remained independently negatively associated (p < 0.05) with peak and delta cortisol, controlling for indication of testing, baseline cortisol, and peak or delta ACTH, respectively. Age was negatively associated with peak but not delta cortisol in multivariable analysis.

Conclusions
Despite the use of a weight-based dosing protocol, both peak and delta cortisol response to CRH are negatively associated with several measures of body size in children referred for clinical testing, raising the question of whether alternate CRH dosing strategies or age- or size-based thresholds for adequate cortisol response should be considered in pediatric patients, or, alternatively, whether this finding reflects practice patterns followed when referring children for clinical testing.

Electronic supplementary material: The online version of this article (doi:10.1186/s13633-015-0018-y) contains supplementary material, which is available to authorized users.

Background
Undiagnosed adrenal insufficiency can be life-threatening [1][2][3]. Children exposed to prolonged courses of exogenous glucocorticoids or with congenital or acquired forms of hypopituitarism are at increased risk for adrenocorticotropic hormone (ACTH) deficiency, or central adrenal insufficiency [1,4]. Despite considerable clinical experience, the diagnosis of ACTH deficiency remains complex [5], and the "optimal" method to reliably diagnose ACTH deficiency remains unclear [6], particularly in children.
While the insulin tolerance test (ITT) is often considered a "gold standard," its use is limited due to the potential for severe hypoglycemia and its contraindication in patients with a history of seizures or cardiovascular disease [7,8]. Although frequently used, the low-dose ACTH stimulation test does not allow for direct measurement of pituitary response, and concerns have been raised about the difficulty of reliably diluting the low dose of medication with precision [9]. The standard-dose ACTH stimulation test may be used to assess for primary adrenal insufficiency, but the large dose of 250 mcg produces supra-physiologic ACTH levels, which may lead to falsely reassuring cortisol responses in patients who may truly have inadequate responses to stress under more physiologic conditions [9]. Stimulation of the pituitary with corticotropin-releasing hormone (CRH), or corticorelin, can be used to test for both primary and secondary adrenal insufficiency through its stimulation of the release of ACTH from the pituitary [10][11][12]. The CRH stimulation test has been suggested as a useful and safe alternative to the ITT, as the cortisol response to CRH has been found to be significantly correlated with cortisol response to insulin-induced hypoglycemia [13,14]. However, distinguishing between "healthy" and "inadequate" cortisol responses to this test remains a challenge, in part because there is not a clear consensus from previous studies on the patient-specific clinical factors that determine peak cortisol, particularly in children. Indeed, although some pediatric studies suggest that cortisol response after stimulation with CRH remains constant with increasing age [12,15,16], in other investigations, cortisol response after stimulation using other strategies, including low- [4] or standard-dose [17] ACTH, was negatively associated with age in children. The current recommended dosing for CRH is weight-based, which assumes a comparable pituitary and adrenal response to this medication across all ages and sizes. Previously published studies have not systematically focused on the relationship of body size to CRH response, particularly in children younger than six years [12,15,16]. Children under six years of age, in particular, may differ in clearance rates of medications due to incomplete maturation of physiologic and enzymatic processes [18]. Thus, the objective of the present study was to determine the effect of both age and body size on cortisol response, as measured by peak cortisol and cortisol rise from baseline (delta), to a standard CRH test in a cohort of nearly 300 children referred to a tertiary care center for suspicion of ACTH deficiency.

Methods

Design
This is a retrospective electronic medical record review of all children and adolescents referred for outpatient adrenal stimulation testing with CRH between January 2007 and April 2013 at The Children's Hospital of Philadelphia Day Medicine Unit.

Subjects
Subjects were less than 18 years of age at the time of testing; neonates (<30 days), most of whom receive clinically indicated adrenal stimulation testing as inpatients, were excluded. For subjects who underwent multiple stimulation tests, only the first was used for this analysis. All subjects underwent stimulation with 1 mcg/kg corticorelin (CRH) intravenously, prepared as a solution of 50 mcg corticorelin/mL by our institution's main pharmacy. Per standard protocol at our institution, cortisol and ACTH were measured at baseline and 15, 30, 60, and 90 min after CRH administration.
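As a reading aid for the dosing and sampling protocol just described, the short sketch below computes the corticorelin dose, the injection volume for the 50 mcg/mL solution, and the sampling schedule for a hypothetical patient; the function and variable names are our own, and this is an illustration rather than clinical software.

```python
SAMPLING_MINUTES = (0, 15, 30, 60, 90)   # baseline plus four timed draws
SOLUTION_MCG_PER_ML = 50.0               # institutional preparation
DOSE_MCG_PER_KG = 1.0                    # weight-based corticorelin dose

def crh_protocol(weight_kg):
    """Return (dose in mcg, injection volume in mL, sampling times in minutes)."""
    dose_mcg = DOSE_MCG_PER_KG * weight_kg
    volume_ml = dose_mcg / SOLUTION_MCG_PER_ML
    return dose_mcg, volume_ml, SAMPLING_MINUTES

# Example: a hypothetical 25 kg child receives 25 mcg corticorelin in 0.5 mL
print(crh_protocol(25.0))                # -> (25.0, 0.5, (0, 15, 30, 60, 90))
```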
This study was reviewed, approved, and granted a waiver of consent by the Institutional Review Board of The Children's Hospital of Philadelphia.

Anthropometric and pubertal data
Height and weight were abstracted from the electronic medical record as measured on the day of stimulation testing. If unavailable from the day of the test, heights were abstracted from the closest Endocrinology clinic visit that occurred no more than 3 months before or after stimulation. BSA was calculated using the Mosteller formula [19]. The following additional elements of the physical examination from the closest Endocrinology clinic visit within 3 months of stimulation testing were also abstracted: breast Tanner stage (girls only), testicular volume and Tanner stage (boys only), and Tanner stage for pubic hair (both girls and boys). Subjects without either height or weight data or without pubertal exam were excluded from further analysis; this was 54 out of 297 subjects initially identified to have completed testing.
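For reference, the Mosteller formula cited above takes the standard form

\[
\mathrm{BSA}\ (\mathrm{m^2}) \;=\; \sqrt{\frac{\text{height (cm)} \times \text{weight (kg)}}{3600}},
\]

so, for example, a child 120 cm tall weighing 25 kg has BSA of about 0.91 m^2 (the formula itself is standard; the worked number is our own illustration).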
Laboratory assessment of ACTH and cortisol values
The main hospital laboratory at The Children's Hospital of Philadelphia performed all laboratory testing. Cortisol and ACTH were measured by chemiluminescence. The lower limit of detection for cortisol was 1.0 mcg/dL (30 nmol/L) and for ACTH was 5 pg/mL (1 pmol/L). For the hospital's main laboratory, the coefficient of variation for the cortisol assay was approximately 3-4 % and for ACTH was 5 % (personal communication with Tracey G. Polsky, MD, PhD, assistant director of the Clinical Chemistry Laboratory, The Children's Hospital of Philadelphia, February 20, 2015). Subjects without available ACTH values were excluded from further analysis (n = 11).

Indication for testing
All outpatient Endocrinology clinic visits within 3 months before or after stimulation testing were reviewed to determine the indication for referral for adrenal stimulation testing. A step-wise hierarchical approach was applied in order to assign a single, primary indication for each subject for the purpose of these analyses, even though patients could have more than one indication for testing. This is described here and illustrated graphically in Fig. 1. This categorization approach was developed based on a comprehensive review of pediatric adrenal insufficiency [1]. First, all subjects with exogenous glucocorticoid exposure noted as an indication for testing were assigned "exogenous glucocorticoid exposure" as their primary indication for testing. For remaining subjects, if short stature was listed as an indication for testing, they were categorized as either "concern for isolated growth hormone deficiency" or "concern for multiple pituitary abnormalities (excluding neoplasm)," depending on whether the medical history or imaging suggested a possibility of multiple pituitary abnormalities. Many of these patients underwent CRH stimulation testing as well as growth hormone (GH) stimulation testing. Subjects who subsequently had a likely inadequate response to growth hormone stimulation testing (GH < 10 mcg/L) [20] were classified into "possible growth hormone insufficiency." Those with GH peak ≥ 10 mcg/L were considered to have "growth hormone sufficient short stature"; these subjects had an apparently intact pituitary GH axis and no other indication of abnormal pituitary function aside from short stature. Next, the remaining subjects who did not have short stature listed as an indication for testing were categorized into one of the following groups: "neoplastic process with condition or therapy placing patient at risk for pituitary injury" or "known multiple pituitary abnormality." No subjects were suspected of having primary adrenal insufficiency. Subjects without documentation of concern for central adrenal insufficiency as the indication for testing were excluded from further analysis (n = 10), for a final subject total of 222.

Fig. 1 Categorization of indication for adrenal stimulation testing. Indications were extracted from outpatient Endocrinology notes; for many patients, more than one indication existed.

Statistical analysis
"Peak cortisol" was defined as the maximum observed cortisol value measured following CRH administration. Change in cortisol, or "Δ cortisol," was defined as the difference between the baseline and peak cortisol. Cortisol values below the detection limit of 1 mcg/dL were recorded as 1 mcg/dL for the purpose of data analysis. Units were converted to SI using the standard conversions of 27.59 nmol/L per 1.0 mcg/dL cortisol and 0.22 pmol/L per 1.0 pg/mL ACTH. Descriptive variables were summarized by mean ± standard deviation (SD), and outcome variables by mean ± standard error of the mean (SEM) unless otherwise stated. Categorical variables, including pubertal stage, were assessed across groups and sex using the chi-square test. Peak cortisol was assessed across weight and age quartiles using one-way ANOVA. Baseline, delta, and peak cortisol and ACTH were assessed across indication for testing using one-way ANOVA. Rate of peak cortisol < 500 nmol/L was compared across indications and age category (six years or younger vs older than six years) using the chi-square test. Two-sample t-test was used to compare the groups with suspected GH deficiency (those found to be likely GH sufficient and those likely to have GHD) and to compare cortisol response of subjects categorized by age younger than six years or older. Z-scores for height and BMI were calculated using CDC 2000 growth standards for children 2 years and older and WHO 2006 growth standards for children younger than 2 years, as recommended by the CDC [21]. To assess any potential confounding effect on the outcomes of interest (laboratory parameters of the CRH stimulation test, including peak cortisol, baseline cortisol, and Δ cortisol), bivariable linear regression analysis was first performed for covariates of interest, including age, size (as measured by weight, height, BMI, BSA, BMI z-score, and height z-score), indication for testing, baseline cortisol, pubertal status, race/ethnicity, and time of day (AM or PM). Factors with p-value of < 0.2 were included in multivariable linear regression models for peak and delta cortisol. Next, backward elimination using p < 0.1 was used to determine final multivariable models for peak and delta cortisol. Each model included only one "size factor" (i.e., weight, height, BMI, or BSA) or age due to the strong positive correlation (p < 0.0005) between age and each of weight, height, BMI, and BSA. BMI z-score and height z-score were also included in separate models because z-scores were determined using both age and absolute height or BMI. Thus, to avoid the collinearity problem, only one of age or the "size factors" was included per model.
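A minimal sketch of these outcome definitions and unit conversions follows (our own illustration; the example values are hypothetical, and the floor at the 1.0 mcg/dL detection limit follows the text):

```python
CORTISOL_NMOL_PER_MCG_DL = 27.59   # 1.0 mcg/dL cortisol = 27.59 nmol/L
ACTH_PMOL_PER_PG_ML = 0.22         # 1.0 pg/mL ACTH = 0.22 pmol/L
DETECTION_LIMIT_MCG_DL = 1.0       # values below this were recorded as 1.0

def cortisol_outcomes(series_mcg_dl):
    """Peak and delta cortisol in SI units from timed measurements (baseline first)."""
    values = [max(v, DETECTION_LIMIT_MCG_DL) * CORTISOL_NMOL_PER_MCG_DL
              for v in series_mcg_dl]
    baseline, peak = values[0], max(values)
    return peak, peak - baseline   # (peak cortisol, delta cortisol) in nmol/L

# Example: a baseline of 7.2 mcg/dL rising to a 22.1 mcg/dL peak
print(cortisol_outcomes([7.2, 14.8, 22.1, 19.0, 15.5]))   # ~ (609.7, 411.1)
```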
Of note, weight, height, BMI, and BSA are absolute size factors, while BMI z-score and height z-score are calculated based on reference values for age and sex, and thus are relative size factors. Data analysis was performed with Stata, Release 13.0 (College Station, TX) and R version 3.0.0. A two-sided p value of < 0.05 was considered statistically significant.

Results
Table 1 shows subject characteristics, summarized by indication for testing. The 222 subjects (27 % female) who met inclusion criteria had a mean age of 8.9 years (SD 4.5, range 0.4-17.8 years). Seventy-five percent were pre-pubertal (Tanner I), 22 % were peri-pubertal (Tanner II-IV), and 4 % were post-pubertal (Tanner V). These proportions were not significantly different between boys and girls (p = 0.07). Seventy-nine percent of subjects did not have pubic hair at time of stimulation testing. By body mass index (BMI) z-score, 26 % of subjects were overweight or obese (BMI z-score ≥ 1.04) [21]. As expected in a population including glucocorticoid-treated children, as well as those with known pituitary abnormalities and/or clinically referred for short stature, subjects were relatively short (mean height z-score −1.96, range −6.07 to 3.2). Subjects with known multiple pituitary abnormalities tended to be older and weighed more than those who were undergoing initial evaluation for pituitary abnormalities (p < 0.0005 for both; see Table 1). Additionally, for each group, the majority of subjects were male. This was most notable for the group with neoplasms with risk to the pituitary. The two groups of subjects screened for growth hormone deficiency (GHD) were similar in age, weight, height, and gender distribution (p > 0.05), and 49 % of those tested for GHD had peak growth hormone of < 10 mcg/L.

Cortisol and ACTH response to stimulation
Mean peak cortisol for all subjects was 609.2 nmol/L (SD 213.0); mean delta cortisol was 404.2 nmol/L (SD 200.2, range 0-905.0). Using cortisol of 500 nmol/L, a commonly used threshold to define "failure" to achieve a reassuring cortisol response to CRH stimulation [6,22], failure rate varied significantly by indication for testing (p = 0.0066 by ANOVA). Forty-eight (22 %) of all subjects had peak cortisol less than 500 nmol/L. The greatest failure rate occurred in the group tested due to exogenous glucocorticoid exposure; this group had 63 % (22/35) of subjects with peak cortisol < 500 nmol/L. Mean peak ACTH for all subjects was 20.2 pmol/L (SD 18.7, range 1.1-197.8). Mean baseline ACTH was 4.1 pmol/L (SD 3.6, range 1.1-30.4); mean delta ACTH was 16.1 pmol/L (SD 18.2, range 0-192.5). Mean peak ACTH for subjects with peak cortisol less than 500 nmol/L was 11.4 pmol/L (SD 9.0, range 1.1-36.3), compared to mean peak ACTH of 22.6 pmol/L (SD 19.9, range 3.6-197.8) for subjects with peak cortisol of 500 nmol/L or greater. This difference was statistically significant (p = 0.0002 by two-sample t-test).

Relationship between cortisol response, body size, age, and other clinical covariates
Table 2 displays results of bivariable analysis of factors predicted to have a potential effect on peak or delta cortisol. In bivariable analysis, peak cortisol was significantly negatively associated with age, weight, height, BMI, BSA, and height z-score (p < 0.05 for each). Negative associations between body size factors and delta cortisol were also found but were less robust, with only height z-score reaching a similar level of statistical significance (p < 0.05).
For purposes of further investigation of the relationship between outcomes and the predictive factors, factors with p-value < 0.2 were included in multivariable analysis and noted in Table 2, as described in Methods; for delta cortisol, these factors included weight, height, and BSA. Unlike for peak cortisol, age was not correlated with delta cortisol (p > 0.2). Baseline cortisol was significantly positively correlated with peak cortisol and negatively correlated with delta cortisol (p < 0.0005 for each). Baseline ACTH (p = 0.007), delta ACTH (p < 0.0005), and peak ACTH (p < 0.0005) were significantly positively correlated with peak cortisol. Delta and peak ACTH were also significantly positively correlated with delta cortisol (p < 0.0005 for each) (Table 2). To assess for bias conferred by subjects considered to have central cortisol deficiency, sensitivity analysis was performed. In bivariable analysis excluding subjects with peak cortisol < 500 nmol/L, age and size variables of interest (weight, height, BSA, and height z-score) remained significantly negatively associated with peak cortisol (p < 0.05). BMI also remained negatively associated with peak cortisol, though not significantly (p = 0.13) (data not shown). Other factors with marginal significance (p < 0.2) in bivariable analysis of peak cortisol were sex (male vs female, p = 0.182) and time of testing (after vs before 12:00 PM, p = 0.17); both factors were negatively associated with peak cortisol. For delta cortisol, ethnicity (Hispanic/Latino vs non-Hispanic/Latino, p = 0.09) and time of testing (after vs before 12:00 PM, p = 0.023) were negatively correlated, the latter significantly. Pubertal status (post-pubertal vs pre-pubertal: p = 0.003) was significantly negatively correlated with delta cortisol but not peak cortisol. These factors were included in initial multivariable models for peak and delta cortisol. For peak and delta cortisol, indication for testing was associated with cortisol response (p < 0.0005). To account for this in multivariable linear regression, interaction terms between indication for testing and the size variable of interest were created and included in each model.

Multivariable model for peak cortisol response to stimulation
Multivariable linear regression analysis was used to determine factors that were independently associated with cortisol response to CRH stimulation. Table 3 displays final multivariable linear regression models obtained after backward elimination of non-significant (p > 0.1) variables that were initially included from bivariable analysis. Baseline cortisol and peak ACTH remained significantly positively associated with peak cortisol. Sex was marginally associated with peak cortisol in each multivariable model, with p-values ranging from 0.048 for the model including age to 0.057 for the model including height z-score. Each model also included interaction terms between indication for testing and either age or the size factor of interest. Because age and size are highly associated, only one of these was included in each model to avoid collinearity, as described in Methods. Models for weight, height z-score, and BSA included significant interaction terms between indication for testing and size. For these models, an interaction between size and exogenous glucocorticoid administration was detected (p < 0.05); within glucocorticoid-exposed children, the smallest children seemed to have the lowest peak cortisol, as described in more detail below.
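For readers who wish to trace the modelling strategy, the following is a schematic Python sketch of the screening and backward-elimination steps described above. It is our own reconstruction, not the authors' code (the original analysis used Stata and R); the data frame `df` and its column names are hypothetical, and the sketch assumes numeric predictors only.

```python
import statsmodels.formula.api as smf

SIZE_FACTORS = ["weight", "height", "bmi", "bsa", "bmi_z", "height_z", "age"]

def backward_eliminate(df, outcome, candidates, p_remove=0.1):
    """Refit OLS, dropping the least significant term until all p-values < p_remove."""
    kept = list(candidates)
    while kept:
        fit = smf.ols(f"{outcome} ~ {' + '.join(kept)}", data=df).fit()
        pvals = fit.pvalues.drop("Intercept")
        if pvals.max() < p_remove:
            return fit
        kept.remove(pvals.idxmax())     # drop the weakest predictor and refit
    return None

# One model per size factor (or age), to avoid collinearity among them, e.g.:
# final_models = {s: backward_eliminate(df, "peak_cortisol",
#                                       [s, "baseline_cortisol", "peak_acth", "sex"])
#                 for s in SIZE_FACTORS}
```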
Multivariable linear regression analysis was repeated separately for the group with exogenous glucocorticoid exposure (Additional file 1: Table S1). In this analysis, the exogenous glucocorticoid group did not have an independent association between peak cortisol and weight or BSA (p > 0.05), but did have a significant positive association between peak cortisol and height z-score (beta = 58.6, p = 0.015), opposite the direction of the negative association between size and peak cortisol over all other subjects.

Multivariable model for delta cortisol
Table 4 displays final multivariable linear regression models obtained after backward elimination of non-significant (p > 0.1) variables that were initially included from bivariable analysis with delta cortisol. In the final multivariable models, pubertal status, ethnicity, and time of stimulation testing were no longer significantly independently associated with delta cortisol. Baseline cortisol remained significantly negatively and delta ACTH significantly positively associated with delta cortisol. Similar to the models for peak cortisol, each model of delta cortisol included interaction terms between indication for testing and the size factor of interest, as described above. Models for weight, BSA, and height z-score included significant interaction terms between indication for testing and size. Again, for these models, the association between delta cortisol and size (weight, BSA, or height z-score) among subjects tested due to exogenous glucocorticoids was positive, opposite that of overall subjects. Similar to the findings for peak cortisol, when analysis of delta cortisol was repeated by indication for testing, a significant positive association (p = 0.003) between height z-score (but not weight or BSA) and delta cortisol was found for subjects with exogenous glucocorticoid exposure (data shown in Additional file 2: Table S2).

Relationship between weight, age and peak cortisol response
Figure 2 displays peak cortisol response by quartiles of absolute weight. As shown, subjects in the highest weight quartile tended to have the lowest peak cortisol, consistent with the negative correlation found on multivariable regression. By one-way ANOVA, peak cortisol differed significantly across weight quartiles (p = 0.0076). In the highest weight quartile, 36 % (20/55) of subjects failed to achieve a peak cortisol of 500 nmol/L, as opposed to 17 % (28/167) of subjects in quartiles 1-3 (p = 0.002 by chi-square test). To better understand the interaction between weight, age, and cortisol response, this analysis was repeated by age quartile, and no significant difference in peak cortisol across age quartiles was noted (p > 0.05, data not shown).
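The weight-quartile comparison above can be reproduced from the reported counts; the sketch below (ours, using SciPy) applies the uncorrected chi-square test, which matches the reported p = 0.002.

```python
from scipy.stats import chi2_contingency

# Rows: highest weight quartile vs quartiles 1-3; columns: failed, passed
table = [[20, 35],      # 20/55 with peak cortisol < 500 nmol/L
         [28, 139]]     # 28/167 with peak cortisol < 500 nmol/L
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 2), round(p, 4))   # chi-square ~ 9.4, p ~ 0.002
```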
Relationship between peak and baseline cortisol and time of testing
Although mean baseline cortisol drawn between 8:00 and 9:00 AM tended to be higher than baseline cortisol drawn after 9:00 AM (226.2 nmol/L, SD 88.0 for 12 subjects vs 200.2 nmol/L, SD 148.2 for 206 subjects), this did not reach statistical significance (p = 0.5). Peak cortisol also did not differ significantly between these groups (mean peak 562.8 nmol/L, SD 171.4 vs 610.0 nmol/L, SD 216.6).

Baseline and peak cortisol response in children six years or younger
We sought to characterize cortisol response in children 6 years and younger, as limited data are available for children of this age referred for clinical testing. Baseline cortisol was significantly higher in children 6 years or younger (239.4 nmol/L, SD 172.0 for 57 subjects vs 188.7 nmol/L, SD 132.5 for 165 subjects, p = 0.0223 by two-sample t-test). Peak cortisol, however, did not significantly differ between these groups (652.1 nmol/L, SD 221.8 for 57 subjects vs 594.4 nmol/L, SD 208.5 for 165 subjects, p = 0.08). Rate of peak cortisol response less than 500 nmol/L also did not differ significantly between these groups (11/57 (19 %) vs 37/165 (22 %), p = 0.6 by chi-square).

Relationship between weight, indication for testing, and peak cortisol
To better understand the interaction between weight and indication for testing in the multivariable model for peak cortisol, subjects who "failed" (peak cortisol < 500 nmol/L) CRH stimulation were compared across weight quartiles and indication for testing, as shown in Fig. 3. As shown, the group with exogenous glucocorticoid exposure had significantly higher rates of failure, particularly for the middle two weight quartiles. A summary of failure rates for all groups is shown in black; this demonstrates the trend across groups toward higher failure rates among the highest weight quartile. Overall failure rates for each indication for testing are summarized in Table 5. Finally, to minimize effects of indication of testing on cortisol response, bivariable analysis was repeated for the two groups with the most similar subjects: those tested due to concern for GHD and subsequently found to be either likely GH sufficient or deficient. These groups had similar weight and age distributions (p = 0.79 and p = 0.50 by two-sample t-test). In this analysis, the negative association between weight and peak cortisol (p = 0.008) and between age and peak cortisol (p = 0.021) persisted, suggesting that differences among subjects due to indication for testing cannot solely explain the negative correlation between body size or age and cortisol response.

Discussion
In both bivariable and multivariable analyses, peak cortisol after CRH stimulation testing was significantly negatively associated with age and multiple measures of body size, including weight, height, height z-score, and BSA, in our study of over 200 children referred for clinical testing. Delta cortisol, another measure of cortisol response to stimulation, was similarly negatively associated with weight, height, height z-score, and BSA, but was not significantly associated with age. These findings may be interpreted in several ways. First, due to the retrospective nature of our study, referral bias may have played a role. For example, the high failure rate among glucocorticoid-exposed subjects may be due to referral of the most severely affected individuals, raising the pre-test probability of failure. Additionally, younger (and smaller) subjects may have been referred more readily despite their relatively healthy clinical status, making these subjects more likely to pass their stimulation test. Another possibility is the changing nature of indication for testing across age and size. For example, older and larger subjects may have been tested for indications that also increased their pre-test probability of failure, independent of their size and age. To account for these possibilities, we performed several sensitivity analyses. As explained above in Results, when bivariable analysis was repeated only for subjects tested due to concern for isolated growth hormone deficiency, the negative association between peak cortisol and weight or age persisted.
Therefore, another important possible explanation to explore is that true physiologic differences in adrenal response to CRH exist, despite weight-based dosing, across the wide range of body sizes and ages of children included in the present study. An important previous study in healthy children did not find age- or size-based differences in pharmacokinetic or pharmacodynamic parameters in the response to CRH, but the sample size was relatively small (n = 21, girls and boys ages 6-15 years), and these investigators themselves noted the lack of data for children under 6 years of age [16]. However, in investigations using other techniques to assess hypothalamic-pituitary-adrenal axis function (low- and standard-dose ACTH), peak cortisol response in healthy children [17] and in children exposed to exogenous glucocorticoids [4] did decrease with age, consistent with our findings using CRH stimulation testing over all subjects. As noted previously, the subjects in our study with glucocorticoid exposure did not demonstrate a similar negative relationship between peak or delta cortisol and height z-score (Additional file 1: Table S1 and Additional file 2: Table S2). This may reflect the higher likelihood of adrenal suppression in subjects who had glucocorticoid exposure great enough to negatively affect height growth, as even inhaled corticosteroids have been associated with decreased height growth, particularly in prepubertal children [23]. Other studies have also found associations between cortisol levels and age in healthy children, but the direction of effect has differed between studies depending on statistical approaches, highlighting how the simultaneous effects of age and size are challenging to disentangle [24][25][26][27]. For example, in one of these investigations, salivary cortisol concentration was initially found to increase with age, but after statistical adjustment for BSA, the relationship with age seemed reversed [25]. The authors posit that this finding could reflect a lower production rate or higher rate of clearance of cortisol with age [25]. A separate study focused on this question found that daily cortisol production remained constant with age, again after adjustment for body surface area [28]. Taken together, these studies illustrate the important challenge in pediatrics of scaling for size when interpreting experimental results across a wide range of subject ages [29], particularly in the youngest children, in whom there is the additional complexity of incomplete maturation of kidney and liver function, which also affects drug metabolism [30]. To our knowledge, an independent, negative relationship between cortisol response to CRH and body size has not been demonstrated previously. As discussed above, due to the high correlation between age and body size, discerning the relative effects on cortisol response of each of these is a challenging undertaking, particularly because of differences in indication for testing across age and weight. For example, the association between peak cortisol response and age/size may be due to maturational differences in the responsiveness of the adrenal gland to ACTH and/or clearance of cortisol, differences that cannot be fully adjusted by the current weight-based dosing regimen of CRH. Differences in adrenal gland size may also partly explain differences in responsiveness across the ages and body sizes tested.
The adrenal gland does not grow at the same pace as the rest of the body; instead, it decreases in size from birth to around one year of age, then gains mass, but more slowly than the body as a whole [31]. If circulating cortisol concentration were to remain constant or even increase with age, as has been described by several authors [24][25][26], these relatively smaller adrenal glands would need to produce proportionally more cortisol to distribute across relatively larger blood volumes, assuming constant clearance. Although these relatively smaller adrenal glands would thus produce larger amounts of cortisol relative to body size on a constant basis, they may not produce as robust a response to acute stimulation, as they may already be operating at a "higher capacity." This "lower reserve" could explain the lower peak cortisol response to stimulation in subjects with larger body surface area and relatively smaller adrenal glands. Alternatively, age may be the driving force in the negative association, through mechanisms not primarily driven by body size. We looked for but did not find an effect of puberty and/or presumed adrenarche (pubic hair development alone) on cortisol response, but the sample was enriched in young, pre-pubertal children, so these effects may have been more difficult to detect. Indeed, at least one previous study has suggested increased volume of distribution and more rapid clearance of cortisol with the onset of puberty [32]. Sampling beyond the usual prescribed time range for CRH stimulation testing would be required to estimate these parameters. Additional careful pharmacokinetic and pharmacodynamic studies in children could help answer these questions.

Strengths and limitations
The strengths of the present study include its large size and the wide range of ages studied, including 57 children age six years and younger, the largest study to our knowledge of children in this age range who have undergone stimulation with CRH. As mentioned above, our study has limitations related to its retrospective nature. One potential limitation was the health status of our subjects, who had a wide range of diagnoses and exposures to medications and therapies. Although this limits our ability to generalize to healthy children, our subjects are representative of the patients who most often undergo adrenal stimulation testing. As noted above, however, 89 % of subjects with presumed growth hormone sufficient short stature reached a cortisol peak of 500 nmol/L, consistent with our belief that this group was representative of subjects with a likely intact HPA axis, regardless of their short stature. In addition, we considered the possibility that age- or size-related differences in indication for testing could introduce bias into our results. We observed the negative association between peak cortisol and age or size even in multivariable regression analyses including testing indication and interaction terms between testing indication and age/size (Table 3). However, it would be optimal to reproduce these results in additional cohorts prospectively grouped by age and indication for testing and to consider studies in healthy children as well. Additionally, referral patterns may be valuable to study, as one interpretation of our results may be that younger/smaller children with intact adrenal function may be more likely to undergo testing to exclude ACTH deficiency as part of an initial evaluation.
This may explain our finding that smaller, shorter children tended to have higher peak cortisol, the opposite of what one would expect if these children were short due to underlying pathology associated with ACTH deficiency. Finally, an additional limitation is that cumulative glucocorticoid exposure was unavailable for analysis; although this was not the primary focus of our study, it may have allowed for a better understanding of the cortisol response among this group of subjects.

Conclusions
The present study, the largest collection to date of pediatric CRH stimulation testing results to our knowledge, demonstrates that cortisol response to CRH stimulation is negatively associated with both age and size, as reflected by weight, height, BSA, and height z-score, in children referred for clinical testing, even after accounting for important clinical covariates. Additional careful pharmacokinetic and pharmacodynamic studies, including serial measurements of CRH, ACTH, and cortisol, could help clarify the etiology of these differences; the volume of distribution of CRH and the clearance of cortisol are at least two potential sources of age- or size-related variation.
Cingulum and Uncinate Fasciculus Microstructural Abnormalities in Parkinson's Disease: A Systematic Review of Diffusion Tensor Imaging Studies

Simple Summary
This article reviews the use of diffusion tensor imaging (DTI) to evaluate changes in white matter microstructure within two specific fiber tracts in Parkinson's disease patients. It also examines how these structural changes may be related to cognitive impairments seen in advanced PD patients and provides insight into developing more targeted treatments for different types of Parkinson's disease.

Abstract
Diffusion tensor imaging (DTI) is gaining traction in neuroscience research as a tool for evaluating neural fibers. The technique can be used to assess white matter (WM) microstructure in neurodegenerative disorders, including Parkinson disease (PD). There is evidence that the uncinate fasciculus and the cingulum bundle are involved in the pathogenesis of PD, and that alterations of these tracts correlate with the symptoms and stages of PD. PRISMA 2022 was used to search PubMed and Scopus for relevant articles. Our search revealed 759 articles. Following screening of titles and abstracts, a full-text review, and implementing the inclusion criteria, 62 papers were selected for synthesis. According to the review of selected studies, WM integrity in the uncinate fasciculus and cingulum bundles can vary according to symptoms and stages of Parkinson disease. This article provides structural insight into the heterogeneous PD subtypes according to their cingulate bundle and uncinate fasciculus changes. It also examines whether there is any correlation of these brain structures' changes with cognitive impairment or depression scales like the Geriatric Depression Scale-Short (GDS). The results showed significantly lower fractional anisotropy values in the cingulum bundle compared to healthy controls, as well as significant correlations between FA and GDS scores for both left and right uncinate fasciculus regions, suggesting that structural damage from disease progression may be linked to the cognitive impairments seen in advanced PD patients. This review helps in developing more targeted treatments for different types of Parkinson's disease, as well as providing a better understanding of how cognitive impairments may be related to these structural changes. Additionally, DTI scans can provide clinicians with valuable information about white matter tracts, which is useful for diagnosing and monitoring disease progression over time.

Introduction
Parkinson's disease (PD) is a prevalent neurodegenerative disorder that affects 1% of individuals over the age of 60 worldwide [1][2][3]. Symptoms of PD can be divided into two categories: motor and nonmotor. Motor symptoms, which are the most well known, include akinesia (reduced movement), bradykinesia (slowness of movement), tremor, rigidity, and gait disturbance. Recent MRI studies have revealed dysfunction of the cingulum in a variety of neurological and psychiatric disorders. Despite limited available data on the anatomy and function of the cingulum, it is crucial to unravel hidden interactions of the highly complex limbic system [20].
DTI is a technique used to assess the microstructural composition of white matter by analyzing the movement of water molecules in the brain. It has been used to detect changes in the brain tissue caused by neurological diseases [21]. Intact WM contributes to high fractional anisotropy (FA), which results from high directionality of water diffusion along axon bundles and lower tissue density [22]. Mean diffusivity (MD) is also used to assess the diffusion of water [23]. Additionally, diffusion can be measured parallel or perpendicular to white matter fascicles, known as axial diffusivity (AD) and radial diffusivity (RD), respectively. Changes in AD values indicate axonal damage and fragmentation, while changes in RD values indicate changes in axonal density, axonal diameter, and myelination [23]. DTI has been widely used to investigate pathological changes in the WM of PD patients by probing the diffusivity of water molecules within the WM tracts [24]. In this systematic review, we aim to provide a comprehensive understanding of specific WM fiber tracts, particularly the cingulum bundle and uncinate fasciculus, in Parkinson's disease patients, in terms of their associations with DTI profiles.

Search Strategy and Data Extraction
We performed a systematic search of the published literature to identify the studies that investigated the involvement of the uncinate fasciculus and cingulum associated with PD pathology and symptomatology using DTI. We used the broad search terms: ("Diffusion Tensor imaging" [All Fields] OR "Diffusion tensor MRI" [All Fields] OR "Diffusion MRI" [All Fields] OR "DTI" [All Fields]) AND ("Parkinson's disease" [All Fields] OR "Parkinson disease" [All Fields] OR "PD" [All Fields]). We searched electronic databases including Scopus and PubMed from 2015 to November 2022. This was complemented by manual searching of the related papers through the list of references. To avoid duplication, the results were imported into Covidence software and articles were separately screened by two of the authors (Z.K. and M.H.K.). In case of disagreement, a third person (F.H.) was consulted to decide whether to include or exclude articles. Among the search results, abstracts were screened for relevance. Studies which had investigated diseases other than PD or had used imaging methods other than DTI were excluded. Full papers were obtained for studies published in English that performed DTI in PD patients, and further assessed for whether they had investigated the cingulate bundle and uncinate fasciculus in PD patients, to be included in this systematic review. Figure 2 illustrates the process of study selection according to the PRISMA guidelines.
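Before turning to the findings, the four scalar measures defined above can be made concrete with a small sketch (our own illustration; these are the standard textbook formulas) computing MD, AD, RD, and FA from the eigenvalues of a fitted diffusion tensor:

```python
import numpy as np

def dti_scalars(l1, l2, l3):
    """MD, AD, RD, FA from tensor eigenvalues sorted l1 >= l2 >= l3."""
    md = (l1 + l2 + l3) / 3.0                      # mean diffusivity
    ad = l1                                        # axial diffusivity
    rd = (l2 + l3) / 2.0                           # radial diffusivity
    num = np.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    fa = np.sqrt(1.5) * num / den if den > 0 else 0.0
    return md, ad, rd, fa

# Example: typical white-matter eigenvalues in units of 10^-3 mm^2/s
print(dti_scalars(1.7, 0.4, 0.3))   # -> FA about 0.76 (strongly anisotropic)
```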
Result
In our literature review, we identified 61 studies that investigated the changes in the uncinate fasciculus and cingulum in PD, applying the PRISMA guidelines. One additional study was found during the writing process [25]. The studies were conducted globally, with publication dates ranging from 2015 until December 2022. Several studies examining white matter changes in PD patients were excluded, for example because they were animal studies; additionally, any studies that focused on structures other than the uncinate fasciculus and cingulum were also excluded. A summary of the study demographics is presented in Table 1. All studies except ten [36,40,53,60,63,73,82,[84][85][86] included healthy controls. The sample size of the PD patients varied widely, with a pilot study having only seven female participants [34] and the largest sample size being 205 from the prospective and longitudinal Swedish BioFINDER study [31]. The studies included both male and female participants, with most studies having a predominantly male population. One study [34] only included females, and two studies did not mention the gender distribution [43,45]. The disease duration of Parkinson's disease ranged from 1 ± 1.3 years in one study [60] to 14.3 ± 7.75 years in another study [35].

PD
A summary of the studies is presented in Table 2. PD is a progressive, degenerative disorder that affects multiple systems in the body. It is characterized by the accumulation of α-synuclein protein in various brain regions, leading to both motor and non-motor symptoms [31]. A previous study utilized DTI to demonstrate degeneration of the nigrostriatal pathway in PD patients. Results showed differences in FA and MD, highlighting the value of DTI in the diagnosis of PD [31]. A study conducted on animals examined the effectiveness of diffusion kurtosis imaging (DKI), an extension of DTI, in detecting changes caused by the accumulation of α-synuclein (α-syn) in the white matter (specifically the cingulum) of α-syn over-expressing transgenic mice (TNWT-61). The findings suggest that DKI could serve as a highly sensitive method for identifying changes in brain tissue induced by α-synuclein accumulation, which may indicate the progression of Parkinson's disease [31]. The cingulum, being a vulnerable area in the brain, has drawn attention in neurodegenerative research. Research has suggested that evaluating the cingulum fibers through DTI could enhance early diagnosis of neurodegenerative diseases. Decreased connectivity in the cingulum tract has been found to be negatively correlated with the neutrophil-to-lymphocyte ratio (NLR) in the early stage of PD progression [31]. NLR is a non-invasive marker of peripheral neuroinflammation, and increased NLR is associated with poor cellular immunity. According to the results of the study, degeneration of central white matter tracts in the brain occurs early in Parkinson's disease and is primarily located in the cingulum. This degeneration may contribute to early cognitive dysfunction. Changes in the DTI measures, including increased MD [31,39,64,68] and decreased FA [50,59,65], have been detected in PD patients, with a higher number of group differences being found as the mean diffusivity increases. Studies have suggested that MD may be more sensitive in detecting subtle white matter changes in early PD than FA, as has been found in other studies in early Alzheimer patients [87]. The pattern of decreased FA and increased MD and RD is indicative of neurodegeneration [48], which has been found in individual PD patients in the present study [33]. Three studies [34,52,70] suggest that the cingulum (where the FA of PD patients is greater than the FA of controls) is modulated by PD through a compensatory mechanism.
The FA measures obtained from these brain regions may potentially be used to detect brain signal changes in an early stage of PD, possibly even before the clinical manifestation of motor symptoms [34]. The exact cause of the changes observed in the DTI of PD patients remains unknown, but it is believed to be due to variations in the diffusion ellipsoid dimensions caused by the neurodegenerative process. While FA is commonly used as a measure of white matter integrity, this interpretation should be approached with caution, as FA is influenced by various factors such as myelination, axon packing, membrane permeability, internal axonal structure, and tissue water content [88]. The findings of some studies have shown decreased FA and increased RD [45] in the cingulum of PD patients, while others have found increased FA and decreased AD in the same region [42]. It is hypothesized that extensive damage to white matter fibers occurs in the early stages of PD, potentially due to the aggregation of synapsins and Lewy bodies in vulnerable brain regions, resulting in atrophy, neuron loss, and demyelination of nerve fibers [89].

Motor Symptoms
In comparison to healthy controls (HC), most studies found degeneration in PD patients with motor symptoms, as indicated by decreased FA [37,79,83], increased MD [74], and a combination of decreased FA and increased MD [46]. These results suggest that FA, MD, and other DTI measures could serve as quantitative biomarkers of motor symptom severity in PD. Lower FA is typically associated with decreased WM connectivity and is considered an indication of WM microstructural abnormalities. However, it is unclear why these changes occur. Motor symptoms usually appear late in PD patients, which suggests that decreases in FA occur later in the disease than increases in MD. Some studies have found increased FA in PD patients [52,60]. The cingulum, an association fiber that connects anterior and posterior cortical regions, showed increased FA or decreased MD/RD in PD patients, and this was associated with better olfaction and lower motor severity [76,90]. These findings suggest that increased connectivity in these WM structures could serve as a compensatory mechanism to facilitate efficient information transfer between different regions of the brain.

Non-Motor Symptoms
It has been found in most studies that individuals with PD who have non-motor symptoms such as dementia, depression, cognitive impairment, psychosis, or apathy have significantly decreased FA values compared to HC [29,36,40,56,80,86]. The pattern of decreased FA and increased mean diffusivity (MD) has also been shown in several studies [33,47,62]. In some studies, compensation was also indicated by increased FA or decreased MD [43,55,84]. Studies have consistently shown that an increase in MD (AD and RD) is associated with cell atrophy and demyelination, which may indicate extensive degeneration in advanced PD patients who present with dominant non-motor symptoms. This loss of structural organization is believed to be linked to neurodegeneration. Depression and dementia frequently occur in PD patients, often appearing late in the disease at stage 3 or 4 of the Hoehn and Yahr staging for motor involvement. They can also be present early on in the honeymoon period of PD, and have been shown to correlate with the severity of motor involvement [37]. Abnormal functioning in depression and dementia in PD patients may be due to degeneration of the microstructure in the white matter located in frontal-limbic regions.
This has been observed in previous studies, and one hypothesis is that abnormalities in the frontal-limbic system cause depression in PD patients [27]. Disruption of the structural integrity of white matter in the cingulum tract can be recognized as a marker to predict early PD, regardless of white matter alterations related to REM sleep disorder [83,91], depression [91,92], or olfactory dysfunction [93], which are thought to be early non-motor symptoms of PD. Thus, changes in the cingulum microstructure could be used to detect early stages of PD and help distinguish between PD patients without dementia and depression or those in preclinical stages.

Correlation
DTI values in the cingulum have been shown to be significantly associated with cognitive function in PD patients. This was demonstrated by a correlation between DTI values and scores from the Mini-Mental State Examination (MMSE) and the Frontal Assessment Battery (FAB): the more extensive the diffusivity abnormalities in the cingulum, the worse the cognitive performance [66]. Spearman rank-order correlation analyses found significant correlations between changes in FA values in the cingulum and sociodemographically corrected Consortium to Establish a Registry for Alzheimer's Disease (CERAD) total scores in PD patients [29]. Additionally, lower scores on the Parkinson's Disease-Cognitive Rating Scale (PD-CRS) were associated with decreases in FA values in the cingulum [63]. A novel finding from one study showed a linear association between AD and the PD-CRS score in major WM tracts, without concurrent RD alterations; this suggests that extensive and progressive axonal degeneration, without evident demyelination, may be involved in cognitive impairment in PD [94,95]. The cingulum bundle also correlates with the short Geriatric Depression Scale (GDS) [42]. In one study, the FA values in the left cingulum of PD patients with depression were negatively correlated with Hamilton Depression Rating Scale (HDRS) scores, but no correlation was found with other disease characteristics such as age, duration, Unified Parkinson Disease Rating Scale III (UPDRS III), H&Y scale, and MMSE [36]. Another study found that FA values were negatively correlated with UPDRS-III scores across all PD patients in the cingulum [34]. A positive correlation between disease duration and RD in the left cingulum was revealed using tensor-based registration (DTI-TK), and a negative correlation using tract-based spatial statistics (TBSS); that study suggested a preference for the DTI-TK-based registration technique before statistical analysis [61]. On the other hand, some studies found no significant correlation between FA in the left cingulum and clinical measures [50,73]. A significant association was also found between FA of the ROI in the left cingulum and the appendicular skeletal muscle mass index (ASMI); low FA values in the left cingulum were identified as the strongest predictor of sarcopenia in PD patients [53]. Positive correlations between FA and non-motor symptoms such as depressive symptoms were also found in the left cingulum in some studies [85]. Interestingly, it has been found that PD risk can be affected by cardiovascular risk factors, including serum cholesterol and apolipoprotein levels [96]. In the early stages of PD, apolipoprotein A1 might predict the microstructural changes of certain white matter tracts, such as the cingulum [97]. Relationships between clinical presentations, MD, AD, RD, and serum nuclear DNA levels have also been demonstrated.
Results suggest that poor cardiovascular autonomic status in PD patients not only directly affects the white matter microstructure but also increases the serum nuclear DNA level, further impacting the white matter microstructure [79]. Impairment of the ipsilateral posterior cingulum in PD may reflect the loss of dopaminergic inputs from the midbrain, as indicated by the statistically significant association between Activities of Daily Living (ADL) and maximum MD/RD [81]. A correlation between verbal memory and FA in the right posterior cingulum tract (PCT) was found, with greater FA in the right PCT being associated with better performance in verbal recognition memory, a core process in subsequent recognition memory [56]. The Lille Apathy Rating Scale (LARS) scores of the apathetic PD group were negatively correlated with FA values in the left cingulum [86]. The AD and RD values were positively associated with the UPDRS, UPDRS-III, and NMSS scores in the cingulum [45]. A negative association with RD, and a positive association with FA values, were found for the Scales for Outcomes in Parkinson's disease-Cognition (SCOPA-COG) scores in the cingulum [45]. One study found correlations between the MD parameter and declining processing speed and discrepancies in the cingulum tract [31].

PD
A summary of the included studies is presented in Table 3. The uncinate fasciculus interacts with the orbitofrontal cortex, assigning value to stored representations through interactions with temporal lobe-based information related to reward and punishment [98]. The most common cause of PD in autosomal recessive families is mutations in the parkin gene (PRKN) [99]. TBSS using permutation analysis of linear models (PALM) has revealed elevated RD in patients with parkin (PRKN) dysfunction compared to HCs. This finding is considered one of the most prominent pathological manifestations of parkin dysfunction, as it has been demonstrated by elevated RD in multiple tests [57]. Another study [72] also confirms these results and suggests that PRKN patients with widespread increases in RD are more susceptible to widespread demyelination. Different methods, including convolutional neural network (CNN)-based methods, have shown that patients with PD exhibit increased MD values [35,38,49,51,64]. However, some studies have reported decreased FA values [44,48,59,67]. MD appears to be more sensitive than FA at detecting subtle changes in white matter in the early stages of PD. Damage to axons and neurons, as well as loss of myelin integrity in PD, may result in decreased restriction of water molecule displacement, leading to higher MD values in PD patients [100]. A distinct pattern of neurodegeneration, characterized by low FA, high MD, low AD, and high RD, has been identified in PD patients [48]. This pattern was observed in a study by Andica et al. using TBSS [32], suggesting that PD patients are more susceptible to degeneration of the uncinate fasciculus (UF). Andica et al. also conducted an analysis of white matter (WM) and gray matter in PD patients using TBSS.
They found that PD patients with neurocognitive and psychiatric disorders (PD-wNCP) and PD patients without these disorders (PD-woNCP), compared to healthy controls (HCs), exhibited lower FA, higher MD, higher RD, and higher AD, a pattern that has been described as neurodegeneration [33]. This pattern has previously been defined as neurodemyelination [48].

Motor Symptoms
The exact role of the UF in the development of motor symptoms in Parkinson's disease is still not clear. Only a few studies have investigated the effect of the uncinate fasciculus on motor symptoms, with inconsistent results [28,66,71].

Non-Motor Symptoms
Parkinson's disease, like other neurodegenerative diseases such as Alzheimer's disease and frontotemporal dementia, is characterized by non-motor symptoms, including apathy, that are associated with white matter (WM) pathways such as the UF and cingulum [101]. Changes in DTI measures have been observed in the UF in PD patients with cognitive impairment, depression, and apathy, suggesting that non-motor symptoms in PD are related to the impairment of long white matter nerve fibers. It is known that multiple neurotransmitter pathways, including noradrenergic and cholinergic pathways, that project to the frontal lobe are impaired in PD patients with non-motor symptoms and other non-cognitive problems [102–106]. Studies have reported that patients with non-motor symptoms such as depression, dementia, and cognitive impairment exhibit more degeneration, as indicated by decreased FA [36,63,80], increased MD [26,73], and increased MD and AD [27]. A significant reduction in white matter connectivity in the UF has been found in PD patients with depressive symptoms compared to non-depressed patients. The pathophysiology of depression has been extensively studied in relation to the UF, with reduced FA serving as a marker of tract microstructural alteration in individuals with major depressive disorder (MDD) [107]. Although Delaparte et al. [108] could not identify any significant differences between anxious and non-anxious depression, anxiety, which is frequently associated with depression, has been shown to be connected to a disrupted UF [108,109]. Previous research [23,110,111] has defined degeneration in the UF as low FA and high MD in patients with impulsive-compulsive behaviors and in PD with neurocognitive and psychiatric symptoms [33,54]. In addition, findings of increased MD confirm the neurodegeneration observed in prior studies: one study found that PD patients with impulse control behaviors (PD-ICB) had higher MD than HCs in the UF [35].

Correlation
Alterations in DTI parameters in PRKN patients were closely linked to both disease duration and serum levels of 9-hydroxystearate, a marker of oxidative stress. The microstructural changes in white matter seen in PRKN patients may therefore be a result of disease duration and oxidative stress. The study by Koinuma et al. [57] showed that AD values in the UF were negatively correlated with serum levels of 9-hydroxystearate, while MD and RD values were positively correlated with these levels. A study found a correlation between the uncinate fasciculus and access to lexical-semantic information stored in the temporal lobe, primarily in the left hemisphere. The results suggest that both the right and left UF support word production when selection among competing alternatives is required [38].
Another study found a significant positive correlation between brain activation in the left inferior orbitofrontal cortex (IOFC) during a verbal learning memory functional magnetic resonance imaging (fMRI) task and the FA of the right UF. This suggests that the greater the integrity of the UF in PD patients, the greater the functional brain activation in the left IOFC while performing the learning task. The study also revealed a significant correlation between brain activation in the left IOFC during the verbal recognition fMRI task and verbal memory impairment, suggesting that the deficit in verbal memory performance during the fMRI paradigm could be influenced by lower brain activation in orbitofrontal cortices during the recognition memory task [56]. One study found no relationship between UPDRS and motor scores and the FA of any white matter fasciculus [59]; in another study, however, FA values were negatively correlated with UPDRS-III scores across PD patients in the UF [34]. A decrease in total PD-CRS score was associated with decreased FA values in the UF [63]. However, no significant correlation was found between BDI scores and FA values [73]. Additionally, a significant correlation has been observed between DTI values in the right UF and Hamilton Depression Scale (HAM-D) scores [66]. One study also found a correlation between the MD parameter and MoCA scores in the UF [26]. Based on several studies, the correlation between changes in white matter tracts and cognitive impairment does not seem to be influenced by region, cell type, or gender. Additionally, some studies have reported that voxel-wise correlation analysis of FA values did not reveal any variation by cell type or gender [29]. A further comparison of patients with and without Parkinson's disease found no significant differences in terms of age, gender, or level of education [39]. In addition, the results of multiple linear regression analyses indicated that, in people with Parkinson's disease, WM integrity and male sex were significantly associated with muscle mass [53]. Studies of various diseases, including Alzheimer's disease (AD), have demonstrated that patients may experience changes in brain structure even before displaying symptoms of cognitive impairment [112]. Likewise, the studies we have included suggest that DTI may be useful in detecting microstructural changes in Parkinson's disease before clinical symptoms become apparent [28,29,32,38,39,46,53,58,81,112]. Several studies suggest that in the early stages of Parkinson's disease, neural reorganization may occur as a compensatory mechanism to combat the pathology; this phenomenon could potentially explain why some individuals with Parkinson's disease do not experience cognitive impairments [76]. DTI may not be able to detect early changes in Parkinson's disease, but it can potentially serve as a surrogate marker by differentiating between early and late stages of the disease [45]. To confirm these findings and investigate potential links between preclinical brain changes and the later development of cognitive impairment in Parkinson's disease patients, a longitudinal study is necessary. Earlier research has suggested that DTI could serve as a diagnostic tool to differentiate Parkinson's disease patients from healthy individuals.
By analyzing white matter fiber connections and measuring specific biomarkers, DTI may be capable of characterizing clinical presentations and assessing the severity of Parkinson's disease [113,114]. As mentioned above, FA, MD, and other DTI measures could serve as quantitative biomarkers of motor and non-motor symptoms in PD patients. Despite the abundance of published studies of DTI markers in PD, DTI is not currently widely utilized in clinically standard MRI scanning [115]. Due to limited scanning time, conducting DTI in a clinical setting may be affected by problems such as noise, fiber crossings, low resolution, distortion, and artifacts; the resulting decrease in image quality makes it hard to obtain precise quantitative measurements [116,117]. The quality of DTI analysis will increase with the use of advanced diffusion techniques, including high-resolution, high-field MRI, enhanced distortion corrections, and fiber-crossing solutions [118,119]. However, it is crucial to create clinically useful parameters based on these cutting-edge methods. Furthermore, scanning parameters including MRI field strength, number of encoding directions, and maximum b-values have a significant impact on DTI variables. DTI measurements from different MRI facilities need to be harmonized, and consistent cutoff values for these DTI parameters need to be established, in order to eventually improve the individual characterization and treatment of PD.

Conclusions
Our review provided microstructural insight into the heterogeneous PD subtypes according to their distinct clinically relevant connectivity features. Cingulum: we found that individual PD patients had increased MD, possibly reflecting degeneration in the early stages of the disease. When PD patients experience motor symptoms, FA decreases and/or MD increases, which may reflect more degeneration at a later stage of the disease. PD patients with non-motor symptoms showed significant decreases in FA more towards the end of the disease, indicating that extensive degeneration accompanies their non-motor symptoms. UF: there is a high probability of widespread demyelination and degeneration in the UF in PD, and non-motor symptoms appear late, with extensive degeneration. Compensation in the cingulum occurred similarly for both motor and non-motor symptoms.
Quitting Smoking before and after Pregnancy: Study Methods and Baseline Data from a Prospective Cohort Study

Smoking during pregnancy and postpartum remains an important public health problem. No known prior study has prospectively examined mutual changes in risk factors and women's smoking trajectory across pregnancy and postpartum. The objective of this study was to report the methods used to implement a prospective cohort (Msgs4Moms), present participant baseline characteristics, and compare our sample characteristics to pregnant women from national birth record data. The cohort study was designed to investigate smoking patterns, variables related to tobacco use and abstinence, and tobacco treatment quality across pregnancy through 1-year postpartum. Current smokers or recent quitters were recruited from obstetrics clinics. Analyses included chi-square and independent-sample t-tests, with Cohen's d for effect sizes. A total of 62 participants (41 smokers and 21 quitters) were enrolled. Participants were Black (45.2%), White (35.5%), and multiracial (19.3%); 46.8% had post-secondary education; and most were Medicaid-insured (64.5%). Compared with quitters, fewer smokers were employed (65.9% vs 90.5%, Cohen's d = 0.88) and more reported financial strain (61.1% vs 28.6%; Cohen's d = 0.75). Women who continue to smoke during pregnancy cope with multiple social determinants of health. Longitudinal data from this cohort provide intensive data to identify treatment gaps, critical time points, and potential psychosocial variables warranting intervention.

Introduction
Smoking during pregnancy is a key modifiable risk factor for poor maternal and infant health outcomes. Smoking is linked to preterm birth, stillbirth, neonatal mortality, miscarriage, and fetal growth restriction [1–3]. Furthermore, smoking during pregnancy is associated with long-term consequences for the child in terms of growth, delayed development, and weight problems [4–7]. Children of smokers have a higher incidence of childhood asthma, behavioral disorders, and poor academic performance in school [8,9]. Despite these problems, in the United States (US), one in fourteen women smokes during pregnancy [10]. Some women quit tobacco use when they learn about their pregnancy, but most return to smoking following birth [11–14]. Cross-sectional and longitudinal studies have identified important variables related to tobacco use and relapse both during pregnancy and postpartum, such as socioeconomic status, depression, and having a partner who smokes [11,12,14–16]. In the United Kingdom (UK), Munafò and colleagues found that a reduction in depressive symptoms over the course of pregnancy to the immediate postnatal period was associated with smoking cessation [15]. A more recent cohort in the UK showed that while quit attempts increased late in pregnancy, the intention to quit in the next 30 days decreased over the same period; furthermore, both quit attempts and intention to quit decreased 3 months after delivery [17]. In the US, a prospective longitudinal cohort tracking the number of cigarettes used per month from preconception to 2 months postpartum showed that most women cut down their smoking between the third and fourth months of pregnancy [18]. Together, these studies show that women's smoking changes during the peripartum period and that psychosocial variables are important determinants of smoking.
Prior pregnancy and postpartum cohorts were unable to determine the precise timing of relapse among quitters, since smoking status was assessed in a single evaluation at 3 months postpartum [17] or at only a few timepoints during the first year postpartum [12,15]. Even longer cohorts that followed mothers up to 6 years after childbirth assessed smoking status at only six timepoints during the entire 6-year period [11]. Another limitation of prior cohort studies is that they assessed few of the potential psychosocial variables related to tobacco use that could inform intervention development. Similarly, few studies have explored whether and how smoking cessation treatment is provided during health care visits. Several studies have suggested that health care providers routinely ask about tobacco use at the initial prenatal visits but do not routinely provide the other elements of guidelines-based tobacco treatment [19–21]. In the UK, almost 40% of women reported no discussion of quitting with a health care provider across the entire pregnancy, despite the fact that approximately half of the women were interested in receiving support to stop smoking [22]. Many of these prior studies relied on retrospectively reported tobacco use data for preconception and pregnancy [11,12] or did not assess concurrent changes in psychosocial variables related to tobacco use [11,12,14,17,18,22,23]. Importantly, no studies have prospectively examined associations between women's perceptions of the quality of smoking cessation treatment during pregnancy and postpartum and any resultant changes in smoking status. To improve our understanding of proximal influences on women's peripartum smoking patterns, a prospective cohort study, Msgs4Moms, was conducted, enrolling women during pregnancy and following them through 1-year postpartum. This longitudinal cohort study leveraged text messaging, a widely used, easily accessed, and low-cost communication method, as a novel approach for assessing smoking at weekly intervals. Text messages were also used to track when participants visited their health care provider and to conduct brief surveys on the quality of the tobacco treatment they received. The study also longitudinally assessed a wide range of potential variables related to tobacco use during pregnancy and the first year postpartum using five online surveys delivered at intervals throughout pregnancy and postpartum. Specifically, our surveys assessed psychological variables (e.g., depression, anxiety, coping, and stress) [24,25], financial and food insecurity, social environmental influences [24,26], substance use [16,27,28], and other relevant health behaviors. To our knowledge, the present cohort is the first study to prospectively track smoking behavior changes using weekly remote assessments of cigarette use from pregnancy to 1-year postpartum. It is also the first cohort to proximally assess the quality of tobacco treatment delivered across health care visits. This cohort study was developed to (1) identify detailed patterns of smoking and quitting during pregnancy and postpartum, (2) describe the sociodemographic and psychosocial variables related to women's peripartum smoking patterns, and (3) describe women's perspectives on the quality of tobacco treatment received during pregnancy, postpartum, and pediatric visits.
The aim of this paper was to describe the novel methods used to implement the cohort study and to collect intensive smoking status data over the course of pregnancy and the first year postpartum, and to present the baseline sociodemographic and tobacco-related characteristics of cohort participants. In addition, we compare demographic and smoking characteristics of the recruited sample with pregnant women from national birth record data to examine the representativeness of this cohort.

Design
The study was a prospective cohort design. Women were followed from enrollment during pregnancy to 1-year postpartum. The longitudinal follow-up included three types of assessments (see Figure 1): (1) a baseline and four follow-up email surveys assessing tobacco-related risk factors, (2) six surveys sent via text message with a survey link assessing the quality of tobacco treatment that participants received during prenatal or postpartum visits, and (3) weekly text messages assessing past 7-day tobacco use and craving. STROBE guidance was used for reporting [29]. Institutional Review Board approval was received from the University of Kansas Medical Center Institutional Review Board.

Sample Size
The required sample size of 48 participants was based on 80% power to detect a small to medium (0.4) correlation between the quality of tobacco treatment and length of abstinence at p < 0.05 using a two-tailed test; a sketch of this calculation is given below. Data from the National Health Interview Survey indicate a medium association between physician advice to quit and cessation [30]. Recruiting 60 participants with an expected retention rate of 80% would result in at least 48 women completing the study.
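As a check on the figure above, the standard Fisher z-transformation approximation for powering a two-tailed correlation test reproduces a required sample size of about 48. The sketch below is our own illustration; the function name and rounding convention are assumptions, not the authors' actual procedure.

# A rough check of the sample-size figure above (our own sketch; the study's
# exact software/procedure is not stated). Uses the standard Fisher
# z-transformation approximation for a two-tailed test of a correlation.
import math
from scipy.stats import norm

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.960 for alpha = 0.05, two-tailed
    z_beta = norm.ppf(power)            # ~0.842 for 80% power
    c = math.atanh(r)                   # Fisher z-transform of r
    return math.ceil(((z_alpha + z_beta) / c) ** 2 + 3)

print(n_for_correlation(0.4))  # -> 47, in line with the ~48 completers targeted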
Participants
A cohort of 62 pregnant women was recruited. Eligibility criteria included having smoked at least 100 cigarettes in their lifetime, being 18 years of age or older, being current smokers or having smoked anytime in the 6 months prior to pregnancy (recent quitters), being English-speaking, having access to a cellphone, and being willing to receive and send text messages. We initially excluded women greater than 28 weeks gestational age; however, this exclusion criterion was removed to improve our ability to recruit participants.

Study Procedures
Recruitment
Women were recruited between March 2019 and January 2020 from two Kansas and Missouri metro areas. The full sample was recruited from the University of Kansas Medical Center (KUMC) and from external health care clinics. Participants were recruited while attending prenatal appointments at five obstetric and family practice clinics and a prenatal educational program in partnership with a local health department. At external clinic recruitment sites, potentially eligible women completed a consent to contact form that included prescreening questions regarding smoking history. Clinic staff emailed or faxed the authorization to contact potential patients to the study team. At KUMC, study staff used the electronic medical record to identify potential participants and met with the patients during their clinic visit. Study staff provided a brief overview of the study and asked interested participants to complete the consent to contact form. In addition, the investigators explored using Facebook ads as a recruitment tool for a 2-week period but were unable to reach any of the potential participants who provided contact information. Women willing to participate in the cohort study were contacted by study staff to complete screening over the phone. Eligible women verbally consented to participate in the cohort study, and study staff assessed comprehension of the study requirements and potential risks. Eligible participants were emailed a copy of the consent form and a baseline survey. Women who completed the baseline survey within 2 weeks were enrolled in the study.

Emailed Survey Distribution
Participants were sent emailed invitations to complete a baseline survey and four follow-up surveys at the following time-points: third trimester, 1-month postpartum, 6-months postpartum, and 1-year postpartum. The surveys assessed socio-demographic and pregnancy characteristics, tobacco and other substance use, and psychosocial variables. Retention efforts for the study included reminder emails for survey completion and follow-up calls from study staff. After the initial email invitation for each survey, survey reminders with the link to the survey were sent every 2 days for five occurrences. After this, study staff called participants until they either filled out the questionnaire or their 60-day survey window expired.
Study staff also sent mailed paper surveys if participants could not be contacted by phone or email.

Weekly Tobacco Use Text Survey Distribution
Participants were sent weekly brief text messages from enrollment through 1-year postpartum. These weekly text surveys assessed cigarette smoking and craving over the past 7 days. Participants were also asked about recent (past 7 days) and upcoming health care provider appointments.

Tobacco Treatment Quality Questionnaire Distribution
The six surveys assessing provider tobacco treatment quality were administered following two prenatal visits, the postpartum visit, and three well-baby visits. These surveys were sent via a text message survey link. To reduce recall errors, the tobacco treatment quality surveys were scheduled using appointment dates from the weekly text message questions. Surveys about tobacco treatment quality were sent within an hour of responding to the text message for recent appointments or scheduled for the date of upcoming appointments. The survey was scheduled to be resent every 24 h for five occurrences, with a 7-day window for completion. All study surveys were stored and distributed using REDCap, a secure web-based application for developing surveys, capturing data, and managing data for research studies [31,32]. Text assessments were delivered via REDCap using the Twilio API (a text service provider) to deliver the text message inquiries and responses, which were stored in REDCap. All surveys in this study used primarily automated delivery. Each participant received a unique identifier that identified all sent and received messages and surveys.

Incentives
Participants received a USD 30 incentive at baseline, a USD 25 incentive for each follow-up survey (3rd trimester and 1-, 6-, and 12-months postpartum), and an additional USD 10 if they completed the surveys within 1 week. They also received USD 5 for completing each survey following a health care visit (the patient exit interviews, PEIs, described below) and a bonus of USD 5 if they completed this survey within 5 days. Participants received a USD 1 incentive for each weekly text message survey, with a bonus payment of USD 1 at the end of the month if they completed all the surveys for any given month. Participants could receive up to a maximum total of USD 330 for participating in this study. Incentives were electronically deposited to a cash card that could be used as a debit card.

Survey Measures
Survey time points and measures are presented in Table 1. Participant demographics and pregnancy-related characteristics, including age, education, race, ethnicity [33], household income, employment status, transitions in housing, parity, and due date, were collected at baseline.

Tobacco Outcome Measures
The 7-day point prevalence smoking status was assessed each week, including whether participants smoked, cigarettes per day, number of days smoked, and craving. Use of other tobacco products, e-cigarettes, and quit attempts were also assessed in each emailed survey.

Variables Related to Tobacco Use
The cohort collected measures identified in prior studies as factors related to abstinence/relapse in this population [34]. These include smoking history and related characteristics (e.g., nicotine dependence) [35], smokers in the social network, motivation and confidence to quit [36], and psychological variables (e.g., stressors, depression, and anxiety) [37,38].
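The automated text delivery described in this section ran through REDCap's Twilio integration. Purely as an illustration of what a single automated weekly prompt involves at the messaging-API level, the sketch below uses Twilio's standard Python client; the credentials, phone numbers, survey URL, and message wording are hypothetical placeholders, and the study itself drove Twilio from within REDCap rather than from custom code like this.

# Illustrative only: the study sent texts via REDCap's Twilio integration,
# not custom code. Credentials, numbers, and wording here are placeholders.
from twilio.rest import Client

ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"  # hypothetical
AUTH_TOKEN = "your_auth_token"                       # hypothetical

client = Client(ACCOUNT_SID, AUTH_TOKEN)

def send_weekly_survey(participant_phone: str, survey_url: str) -> str:
    """Send a weekly 7-day tobacco-use prompt and return the message SID."""
    message = client.messages.create(
        to=participant_phone,
        from_="+15550000000",  # placeholder study number
        body=("Msgs4Moms weekly check-in: in the past 7 days, did you smoke, "
              "even a puff? Tap to answer: " + survey_url),
    )
    return message.sid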
Quality of Tobacco Treatment
This study used an adapted version of the Patient Exit Interview (PEI) [39] to measure the degree to which participants' health care providers delivered the evidence-based 5 "A"s of tobacco treatment. The PEI yields a single score that represents the quality of tobacco treatment delivered by providers. The PEI has good validity [40] and can be administered in 3-5 min.

Data Analysis
Descriptive statistics (means and standard deviations for continuous variables and totals/percentages for categorical variables) were used to summarize baseline demographic and tobacco-related measures by smoking status. We calculated Cohen's d to indicate effect size [41]. Cohen's d is primarily used to report intervention effect sizes; however, effect sizes may also be useful for interpreting the magnitude of differences between groups in observational studies [42,43]. Cohen's d values are interpreted as: <0.2 = negligible effect, ≥0.2 to <0.5 = small effect, ≥0.5 to <0.8 = medium effect, and ≥0.8 = large effect. For continuous variables, d = (mean1 − mean2)/SD; for binary data, d = ln(OR12) × √3/π, where ln is the natural log and OR12 is the odds ratio of group 1 versus group 2. We compared demographics of the cohort sample to the 2018 Natality data from the Centers for Disease Control and Prevention (CDC). The US Natality dataset reports statistics from birth certificates within the United States. Data are available by a variety of demographic characteristics, such as state and county of residence, mother's race, and age, as well as maternal health risk factors, such as tobacco use during pregnancy [44]. Analyses were carried out in SAS version 9.4.23 (SAS Institute, Cary, NC, USA).

Results
A total of 117 women completed consent to contact forms at the recruitment sites. In total, 45 (38.5%) women were not eligible to participate in the study, resulting in an eligibility rate of 61.5%. Most of these women were excluded because they could not be reached or consented, were not interested in the research, or were non-smokers. Among 72 eligible and consented participants, 62 completed the baseline questionnaire within 2 weeks and were enrolled in the study (see Figure 2). The cohort comprised 41 women who were currently smoking at baseline and 21 who were recent quitters. Table 2 presents demographic and psychosocial characteristics of the 62 women enrolled in the study. Almost half of participants were Black or African American (45.2%), 46.8% had some college education or a college degree, and most were Medicaid-insured (64.5%). At baseline, participants were on average 18.9 weeks pregnant. Most women reported financial strain, such as difficulty paying bills (50.0%), worrying about money to pay rent (50%), or not having enough money to pay for meals (67.7%). Current smokers (n = 41) and recent quitters (n = 21) differed across a range of characteristics (Table 2).
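The two effect-size formulas quoted in the Data Analysis section can be reproduced directly. The sketch below is our own code implementing the formulas exactly as stated, using the employment percentages reported in the next paragraph (90.5% of recent quitters vs 65.9% of current smokers) as a worked example.

# Reproduces the two effect-size formulas exactly as stated above.
import math

def cohens_d_continuous(mean1: float, mean2: float, sd: float) -> float:
    """d = (mean1 - mean2) / SD."""
    return (mean1 - mean2) / sd

def cohens_d_binary(p1: float, p2: float) -> float:
    """d = ln(OR12) * sqrt(3) / pi, with OR12 the odds ratio of group 1 vs 2."""
    or12 = (p1 / (1 - p1)) / (p2 / (1 - p2))
    return math.log(or12) * math.sqrt(3) / math.pi

# Worked example: employment among recent quitters (90.5%) vs current
# smokers (65.9%) reproduces the reported large effect size.
print(round(cohens_d_binary(0.905, 0.659), 2))  # -> 0.88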
A higher percentage of former smokers than current smokers were White (52.4% vs 26.8%), a medium effect size of 0.61. While 24.4% of current smokers had less than a high school diploma, only 4.8% of former smokers had the same level of education (Cohen's d = 1.03). More former smokers than current smokers were employed (90.5% vs 65.9%), a large effect size of 0.88. Six in ten current smokers reported not having enough money to pay bills and rent (61.1% vs 28.6%), a large effect size of 0.75. Six months before becoming pregnant, most participants were smoking every day (87.1%) (see Table 3). A higher percentage of current smokers smoked daily prior to pregnancy (92.7%) compared with former smokers (76.2%), a medium effect size of 0.76. The use of other tobacco products during pregnancy was also higher among current smokers (24.4% vs 9.5%, Cohen's d = 0.62). More than a quarter of women used e-cigarettes 6 months before pregnancy (27.4%), and 11.3% used e-cigarettes after becoming pregnant. The association between smoking status and use of e-cigarettes before and after becoming pregnant had a small effect size, but the mean number of days of e-cigarette use in the past 30 days was higher among current smokers, at 3.8 (SD = 4.3), compared with former smokers, at 0.3 (SD = 0.6), a large effect size (Cohen's d = 0.96). Over half of current smokers were highly nicotine dependent (56%), 26.8% had tried to quit smoking more than four times, and current smokers had lower confidence in their ability to quit during pregnancy, at 4.6 (SD = 2.24), than former smokers had in staying quit during pregnancy, at 6.0 (SD = 1.50). Importantly, 71% of women did not receive any smoking cessation treatment during pregnancy. The comparison between groups had a small effect size of 0.44, even though 81% of the former smokers reported not receiving any assistance, compared with 65.9% of current smokers. This study cohort and mothers who smoked prior to pregnancy in the CDC 2018 Natality dataset had similar distributions for most demographic characteristics. However, our sample had more Black/African Americans (45.2% vs 12.6%, Cohen's d = 0.96) and more multiracial participants (19.4% vs 4.4%, Cohen's d = 0.92) (see Supplementary Table S1). This cohort also included more women between 30 and 34 years of age (43.6% vs 21.7% in the 2018 Natality data, Cohen's d = 0.56) and more women with a college degree (11.3% vs 4.2%, Cohen's d = 0.59).

Discussion
Msgs4Moms successfully recruited its full sample size. We recruited a diverse sample that was similar to a national sample of pregnant women with a recent history of smoking. Similar to previous cohorts following smokers or recent quitters during pregnancy and postpartum, most participants enrolled in this cohort had lower education [11,15,18], were facing some kind of financial strain [11,15,18], and were on Medicaid [18]. However, our sample included a greater proportion of African American and multiracial participants; this cohort overrepresented Black pregnant women compared with the 2018 Natality data from the CDC. Pregnant women who have a smoking history [45] and are African American [46] are at greater risk of preterm birth; as such, the present cohort represents a group at high risk of adverse pregnancy outcomes. Two-thirds of our cohort participants were current smokers, and one-third had stopped either in pregnancy or up to 6 months prior.
This differs markedly from a cohort assembled 10 years earlier in the UK, in which 57% were current smokers and 43% were recent quitters [23]. Consistent with previously reported socioeconomic disparities in smoking cessation [16], current smokers entering our cohort had less education than recent quitters, were less often employed, and more often reported financial and housing insecurity. These results were similar to those of Orton et al., who found that women who smoke during pregnancy are more likely to hold no educational qualifications, less likely to own a home, and more likely to engage in unpaid work [23]. Socioeconomic disparities are even more prominent when smokers or recent quitters are compared with nonsmokers [11,14]. More than a quarter of women used e-cigarettes 6 months before pregnancy. E-cigarette use during pregnancy was similar to that of a national sample of pregnant smokers recruited in 2015 and 2016, which found that 17% of pregnant cigarette smokers used e-cigarettes in the past 30 days [47]. In our study, current smokers were using e-cigarettes more often than recent quitters. However, we did not ask participants if they were using e-cigarettes to help them quit smoking. Even though current guidelines do not recommend e-cigarettes as quit assistance [48], pregnant women frequently describe e-cigarettes as safer than regular cigarettes and as a quit-smoking resource [47,49]. Most smokers in our study had high levels of nicotine dependence and had tried to quit smoking several times. However, seven out of ten women did not receive any smoking cessation treatment prior to completing our baseline survey. It is possible that they may have received quitting assistance later in pregnancy, since most of our sample completed the baseline assessment around 4.5 months pregnant. However, our findings are in line with those of Naughton and colleagues, who found that, in early pregnancy, less than half of smokers (43%) reported having talked to a midwife about stopping smoking and fewer had spoken to a general practitioner or nurse (27%). These numbers dropped even further later in pregnancy, with only 27% of smokers reporting speaking to a midwife about stopping smoking [22]. Unfortunately, even though the 5 "A"s (Ask, Advise, Assess, Assist, and Arrange follow-up) have been recommended in many countries as a strategy for health care providers to deliver all the important components of smoking cessation treatment, providers who work with pregnant women rarely address all 5 "A"s [21]. The present cohort has several strengths. First, we used weekly text messages to provide timely data on smoking status, reducing the effects of recall errors. Second, we prospectively collected a range of variables related to tobacco use at five timepoints from pregnancy to 1-year postpartum. Third, we assessed the quality of smoking cessation treatment in close proximity to six health care visits during the study period, rather than relying on retrospective reports at the end of pregnancy. Fourth, this was the first pregnancy cohort to recruit a majority of participants who are African American or Black. A limitation of this research and of our cohort is the fact that smoking status was self-reported. The social stigma of smoking during pregnancy may lead to under-reporting and therefore a response bias. However, other research has shown a high correlation between self-reported smoking and biochemical markers within pregnant populations [50,51].
Additionally, participants were reimbursed for their time responding to the surveys, which can lead to social desirability bias. Another limitation of our study is that we recruited more current smokers than quitters, possibly because current smokers are easier to identify in the clinical setting than patients who had already quit smoking. Last, the small sample size and recruitment from only one region, the US Midwest, may limit the generalizability of the results from this cohort. When data collection and analysis are complete, this cohort study will extend prior research on smoking during pregnancy and postpartum by employing text message survey delivery to assess smoking at weekly intervals. Frequent assessments of smoking enable us to provide a detailed description of women's smoking patterns and determinants of both abstinence and smoking. The data collected at several timepoints during the first year postpartum cover a broad range of smoking-related parameters (including smoking status, cigarettes per day, nicotine dependence, and smoking cessation assistance received during pregnancy), psychosocial variables (e.g., depression, anxiety, and stress), and socio-demographic characteristics (including but not limited to age, sex, and socioeconomic status). This study presents a novel application of text messages for intensive data collection that can be adapted in low-resource settings. Findings from the cohort study may help to identify potential time-sensitive intervention targets to support smoking cessation during pregnancy and the first year postpartum.

Conclusions
Msgs4Moms successfully recruited a complete and highly diverse study sample with few differences compared with national data. Baseline data showed that women who continue to smoke during pregnancy are coping with multiple social determinants of health that are well-known variables related to tobacco use. Findings from the cohort study yield insights into both sociodemographic and time-varying variables related to women's smoking patterns and may help to identify gaps in tobacco treatment and potential time-sensitive intervention targets to support smoking cessation during pregnancy and the first year postpartum.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijerph191610170/s1, Table S1: Comparison among survey sample versus CDC 2018 Natality data.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The data used in this study are available on request from the corresponding author.
Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
The prognostic value of the lactate/albumin ratio for predicting mortality in septic patients presenting to the emergency department: a prospective study

Abstract
Objectives: The lactate/albumin (L/A) ratio is a biomarker in sepsis that has been shown to outperform lactate. This prospective study aims to validate the superior prognostic value of the L/A ratio over lactate in sepsis and septic shock.
Methods: Prospective cohort conducted from September 2018 until February 2021 on adult patients presenting to the Emergency Department (ED) of a tertiary care centre with sepsis or septic shock. The primary outcome was the prognostic value of the L/A ratio compared to lactate with regard to mortality.
Results: A total of 939 septic patients were included throughout the study period, of whom 236 developed septic shock. The AUC value of the L/A ratio in septic patients was 0.65 (95% CI 0.61–0.70), higher than that of lactate alone, 0.60 (95% CI 0.55–0.64), p < .0001. The optimal L/A ratio cut-off threshold that separated survivors from non-survivors was found to be 0.115 for all septic patients. The AUC of the L/A ratio was significantly higher for patients with a lactate ≥2 mmol/L (0.69, 95% CI 0.64–0.74, versus 0.60, 95% CI 0.54–0.66, p < .0001) as well as for patients with an albumin level less than 30 g/L (AUC = 0.69, 95% CI 0.62–0.75, vs AUC = 0.66, 95% CI 0.59–0.73, p = .04). Among septic shock patients there was no statistically significant difference in the AUC value of the L/A ratio compared to lactate (0.53, 95% CI 0.45–0.61, vs 0.50, 95% CI 0.43–0.58, respectively, p = .11).
Conclusions: The L/A ratio is a better predictor of in-patient mortality than lactate in sepsis patients. This superiority was not found in the septic shock subgroup. Our results encourage the use of the ratio early in the ED as a superior prognostic tool in sepsis patients.

Key messages
We aimed to assess the prognostic usefulness of the lactate/albumin ratio compared to lactate alone in septic and septic shock patients. The L/A ratio proved to be a better predictor of in-patient mortality than lactate alone in sepsis patients. This pattern also applied across various subgroups in our study (malignancy, diabetics, age above 65, lactate level less than 2 mmol/L, albumin less than 30 g/L). Our results favour the use of the L/A ratio over lactate alone in patients with sepsis and the previously mentioned subgroups. Our results do not favour the use of the ratio instead of lactate in septic shock patients, as there was no statistically significant difference between the AUCs of the ratio and lactate alone.

Introduction
Background
A serum "biomarker" is a readily measurable laboratory analyte. When appropriately interpreted in a clinical setting, it has diagnostic and prognostic value and guides patient management and physician decision-making [1]. This is particularly important in sepsis and septic shock, where early identification and antibiotic administration in busy Emergency Departments (EDs) can improve patient mortality [2]. Sepsis, despite advances in medical care, remains a major healthcare burden with significant morbidity and mortality, and remains one of the most common presentations to the ED [3]. Lactate is one of the most studied sepsis biomarkers in the literature, and several studies have shown that elevated levels are associated with increased mortality [4].
Several factors can influence lactate levels, which can limit its prognostic value in patients with sepsis and septic shock [5–7].

Importance
Previous studies have investigated the lactate/albumin (L/A) ratio as a biomarker in sepsis and septic shock; however, they were either retrospective or small in sample size. The ratio was shown to outperform lactate as a prognostic tool in sepsis.

Goals of this investigation
This study was prospective in nature and aimed to compare the prognostic value of the L/A ratio versus lactate in sepsis patients.

Design
This was a prospective cohort study of adult patients presenting to the ED with sepsis or septic shock. We aimed to evaluate the prognostic value of the L/A ratio compared to lactate. Research assistants scanned the ED dashboard 24 h a day, seven days a week. If a patient was suspected of having sepsis (flagged by the electronic medical record), the research assistant approached the family to obtain written, voluntary, and informed consent. This study was approved by the Institutional Review Board with protocol number BIO-2018-0133.

Study population and setting
This study was conducted at the ED of a tertiary care centre between September 2018 and February 2021. All patients diagnosed with sepsis or septic shock were included in the study. Sepsis was defined according to the sepsis-3 definition as a life-threatening organ dysfunction caused by a dysregulated host response to infection [8]. Organ dysfunction can be identified as an acute change in the total SOFA (Sequential Organ Failure Assessment, which incorporates six variables: respiratory status, coagulation, liver function, cardiovascular status, central nervous system status, and renal function) score of ≥2 points consequent to the infection [8]. The baseline SOFA score can be assumed to be zero in patients not known to have pre-existing organ dysfunction. Septic shock was defined as having sepsis with any of the following: the need for vasopressors to keep the mean arterial pressure ≥65 mmHg, or a lactate level >2 mmol/L, provided the patient is not hypovolemic (i.e., these criteria persist despite adequate volume resuscitation). Adequate volume resuscitation was left to the discretion of the treating physician, as there is variability in the literature on this topic [8]. The exclusion criteria were age <18 years, cardiac arrest on presentation, pregnancy, trauma, discharge from the emergency department, not meeting sepsis-3 criteria, and not having a final diagnosis of sepsis (antibiotics stopped at 24 h).

Interventions and measurements
We collected the following information from sepsis patients: vital signs upon presentation to the ED; comorbidities; infection site; blood work (complete blood count (CBC), blood urea nitrogen (BUN), creatinine, electrolytes, bilirubin, lactate, liver enzymes, two blood cultures, urine analysis, and urine cultures) in addition to a blood albumin level; use of vasopressors, antibiotics, and steroids; as well as patient disposition. Patients were followed throughout their hospital stay to determine the length of hospital stay and in-hospital mortality. All variables were collected from patient charts accessed through the Electronic Health Record system.

Outcome measures
The primary outcome was the prognostic value of the L/A ratio (albumin in g/L) compared to lactate (mmol/L) with regard to in-hospital mortality.
The secondary outcomes were to determine the optimal cut-off of the L/A ratio that discriminates between survivors and non-survivors, and to examine the prognostic value of the ratio in subgroup populations (lactate <2; lactate ≥2; septic shock; diabetes; malignancy; chronic kidney disease; age; source of infection; albumin <30; albumin ≥30; and end-stage liver disease).

Data analysis
In the univariate analysis, the distributions of the vital signs upon presentation to the ED, comorbidities, laboratory analyses, blood and urine cultures, urine analysis, vasopressor, antibiotic, and steroid use, and patient disposition were presented as means ± standard deviation for continuous variables and as frequencies and percentages for categorical variables. Patients were divided into two groups: survivors and non-survivors. In the bivariate analysis, Student's t-test and Pearson's chi-square test were used to compare the differences in the independent variables between both groups (continuous and categorical, respectively). Both tests were interpreted at a predetermined significance level (alpha = 0.05). A multivariate analysis using all statistically and clinically significant variables was performed using logistic regression to find the model that best fit the data and explained the association between mortality and all predictor variables (including the L/A ratio). Variables included in the model were the lactate/albumin ratio, age, gender (reference: male), chronic kidney disease, hypertension, dyslipidaemia, coronary artery disease, atrial fibrillation, malignancy, history of stroke, history of transient ischaemic attack (TIA), diabetes mellitus, chronic obstructive pulmonary disease, systolic blood pressure (SBP) upon presentation, heart rate (HR) upon presentation, O2 saturation upon presentation, respiratory rate upon presentation, qSOFA score, haemoglobin, platelets, BUN, creatinine, bicarbonate, magnesium, calcium, phosphate, vasopressor use in the first 24 h, receipt of steroids, intubation within the first 24 h, and intubation within the first 48 h. The magnitude of association between the predictor variables and mortality was determined by calculating the odds ratios (OR) and their corresponding 95% confidence intervals (CI) (Table 5). Receiver operating characteristic (ROC) curves were used to compare the accuracy of the L/A ratio and lactate in predicting mortality by obtaining their respective areas under the curve (AUC). The ROC curve was also used to determine the optimal cut-off of the L/A ratio (including sensitivity and specificity) that discriminates between survivors and non-survivors.

Sample size calculation
Based on the retrospective study done by Bou Chebl et al. at the same tertiary care centre, the AUCs of lactate and the L/A ratio were found to be 0.61 and 0.67, respectively, indicating a difference of 0.06 performance units. Choosing a power of 80% and a significance level of 0.05, a minimum sample size of 800 patients would be needed to detect a difference in AUC-ROC curves of 0.06 performance units. Because of ongoing recruitment for sepsis studies in our department between September 2018 and February 2021, we recruited 939 patients in order to further increase the power of the study.
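The ROC comparison described in the Data analysis section can be illustrated with a short sketch. The code below is our own; the synthetic data are stand-ins for the (unavailable) patient-level data, and only the method, an AUC comparison plus a Youden-index cut-off, the usual way a threshold such as the reported 0.115 is derived, mirrors the paper.

# Illustrative sketch only: synthetic stand-in data, not the study dataset.
# It mirrors the described method: compute the L/A ratio from lactate (mmol/L)
# and albumin (g/L), compare AUCs against lactate alone, and derive an
# optimal cut-off from the ROC curve via Youden's J.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 939                                            # cohort size, as reported
died = rng.binomial(1, 0.233, n)                   # ~23.3% in-hospital mortality
lactate = rng.gamma(shape=2.0, scale=1.2, size=n) + 0.6 * died
albumin = rng.normal(32.0, 6.0, n) - 3.0 * died    # g/L; lower in non-survivors
albumin = np.clip(albumin, 10.0, None)             # keep ratios well defined

la_ratio = lactate / albumin
for name, marker in [("lactate", lactate), ("L/A ratio", la_ratio)]:
    print(f"{name}: AUC = {roc_auc_score(died, marker):.3f}")

fpr, tpr, thresholds = roc_curve(died, la_ratio)
best = np.argmax(tpr - fpr)                        # Youden's J = sens + spec - 1
print(f"Youden-optimal L/A cut-off: {thresholds[best]:.3f}")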
Characteristics of study subjects
A total of 2056 patients with suspected sepsis were approached, of whom 939 septic patients were included throughout the study period (Figure 1). The average age of the included patients was 72.39 ± 15.62 years, 59.9% were males and 43.6% were smokers. The most common medical comorbidities were hypertension (63.6%), diabetes (40.3%), dyslipidaemia (40.1%) and current or past malignancy (39.6%). Overall, 23.3% (N = 219) of the patients presenting with sepsis died during their hospital stay (Table 1).

Patient outcomes
Forty-two percent of the septic patients required intensive care unit (ICU) admission and 15.3% required mechanical ventilation during their hospital stay. The percentages of patients who developed septic shock, required ICU admission and required mechanical ventilation during their hospital stay were significantly higher in the non-survivor group (55.3% vs 16%, 74% vs 32.2% and 36.5% vs 8.9%, respectively, with a p-value < .0001 for all). The average length of hospital stay was also significantly higher in the non-survivor group (16.25 ± 16.09 days vs 8.93 ± 10.96 days, p < .001) (Table 3).

Prognostic value of L/A ratio and lactate
The AUC value of the L/A ratio in septic patients was 0.65 (95% CI = [0.61-0.70]), higher than that of lactate alone, 0.60 (95% CI = [0.55-0.64]), with a p < .0001 (Table 4). The optimal L/A ratio cut-off threshold that separated survivors from non-survivors was found to be 0.115 for all septic patients (positive predictive value 39%, negative predictive value 83%, sensitivity 35%, specificity 81%) (Table 4, Figure 2).

Prognostic value of L/A ratio and lactate (subgroup analysis)
The AUC of the L/A ratio was significantly higher for patients with a lactate ≥ 2 mmol/L: 0.69 (95% CI 0.64-0.74) versus 0.60 (95% CI 0.54-0.66), with a p < .0001 (Table 4), as well as for patients with an albumin level less than 30 g/L (AUC = 0.69, 95% CI = 0.62-0.75 vs AUC = 0.66, 95% CI = 0.59-0.73, p = .04). In a similar manner, the ratio outperformed lactate in the other subgroups examined (Table 4). Figure 3 shows the ROC curves comparing the AUCs of lactate, albumin and the lactate/albumin ratio among septic shock patients. In addition, among patients with end-stage liver disease (ESLD), there was no statistically significant difference in the AUC value of the L/A ratio compared to lactate (0.77, 95% CI 0.68-0.86 vs 0.74, 95% CI 0.65-0.83, respectively, p = .14). The optimal cut-off for the L/A ratio in the ESLD group was 0.21, with a sensitivity and specificity of 50% and 92%, respectively (Table 4).

Stepwise logistic regression for mortality
The lactate to albumin ratio was found to be associated with hospital mortality (OR = 2.17; 95% CI = [1.69-2.80], p < .0001; Table 5). Table 2 compares the initial vital and laboratory parameters upon presentation to the emergency department of septic and septic shock patients between survivors and non-survivors. Table 3 shows the therapeutic measures and associated outcomes of septic/septic shock patients among survivors versus non-survivors.
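The optimal cut-off reported above was read off the ROC curve. A minimal sketch of that step with scikit-learn follows, using Youden's J statistic, one common way to define "optimal"; the arrays `ratio` and `died` are hypothetical stand-ins for the patient-level L/A ratios and outcomes, which are not reproduced here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Hypothetical stand-in data: per-patient L/A ratio and in-hospital death.
died = rng.integers(0, 2, size=939)
ratio = rng.gamma(2.0, 0.05, size=939) + 0.05 * died

auc = roc_auc_score(died, ratio)
fpr, tpr, thresholds = roc_curve(died, ratio)

# Youden's J = sensitivity + specificity - 1; its maximum marks the
# threshold that best separates survivors from non-survivors.
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.2f}")
print(f"cut-off = {thresholds[best]:.3f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```

Other definitions of the optimal cut-off (for example, targeting a fixed specificity) would give different values, so a reported cut-off is always tied to the criterion used to derive it.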
Discussion
The results of this prospective study have shown that the L/A ratio is a better prognostic marker than lactate alone in septic patients (AUC of the L/A ratio 0.65, 95% CI = 0.61-0.70, versus lactate AUC = 0.60, 95% CI = 0.56-0.64, p < .0001). This superiority of the L/A ratio was also seen in several subgroups: lactate ≥ 2 mmol/L, albumin level less than 30 g/L, cancer patients, diabetic patients, patients older than 65 years of age and when stratifying by infection source. However, among septic shock patients, there was no statistically significant difference in the AUC value of the L/A ratio compared to lactate alone. Furthermore, the L/A ratio was found to be associated with in-hospital mortality (OR = 2.17; 95% CI = [1.69-2.80], p < .0001). Finally, the optimal L/A ratio cut-off threshold that separated survivors from non-survivors was found to be 0.115 for all septic patients.

It is well established in the literature that a single venous lactate value can be used as a reliable risk-stratification biomarker for patients who present to the ED with suspected sepsis and is an excellent prognostic biomarker for mortality and organ failure in the critically ill [4,9-11]. However, serum lactate is affected by many patient-related factors. Lactic acidosis/hyperlactataemia can be induced by commonly used medications such as albuterol and metformin [5,12]. Liver disease can also impair lactate clearance, causing increased blood levels [6]. Furthermore, some patients may be critically ill and still have a normal venous lactate, which could lead to misleading prognostication [13,14]. This can limit the reliable use of lactate on its own in a high-acuity setting like the emergency department [4,15].

Our results are in line with multiple studies looking at the importance of the L/A ratio in several conditions, such as sepsis (a prospective study including 155 patients), heart failure (a retrospective study including 4562 patients) and traumatic brain injury (a retrospective study involving 273 patients), which found that the L/A ratio was a good predictor of mortality [16-18]. However, these studies were either retrospective in nature [16,18] or limited by their sample size [17]. The largest retrospective study, by Gharipour et al., examined the role of the L/A ratio in 6000 septic patients and found that the ratio is significantly superior to a single lactate in predicting 28-day mortality (AUC: 0.69 vs 0.67, respectively) [19]. Table 5 shows the multivariate logistic regression identifying the variables associated with higher mortality rates: a higher L/A ratio, female gender, use of steroids and intubation within the first 48 h.

Albumin has been previously studied in sepsis and is even included in the APACHE II score commonly used to predict mortality in critically ill patients. Any hepatic dysfunction might affect its plasma level, and it is also influenced by nutritional status and inflammation [14]. Given that several factors can influence both lactate and albumin levels, the L/A ratio can be used as a more reliable prognostic tool in septic patients. An interesting finding in our data was that the L/A ratio was a better prognostic marker than lactic acid alone in predicting mortality when sepsis was caused by respiratory, urinary and GI infections, which is consistent with a previous retrospective study we did [13]. One potential explanation could be that those infections are the most common ones among elderly patients, a subgroup of the population in which the L/A ratio outperforms lactate.

In our study, the AUC values of the L/A ratio and lactate in the septic shock subgroup were 0.53 and 0.50, respectively, with no statistically significant difference between them. This is lower than what Wang et al. reported; they found that the AUC of the L/A ratio in predicting mortality was 0.84 in septic shock patients [20]. This can be potentially explained by the low number of septic shock patients (N = 236). More importantly, the lack of difference between the two biomarkers could be due to the marked elevation of lactate in septic shock, which would overcome albumin's role.
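Comparisons like the ones above involve two AUCs computed on the same patients, so any test must respect their correlation; DeLong's test is the usual choice, but a paired bootstrap is easy to sketch. The arrays below are hypothetical stand-ins for the patient-level data, and the function name is ours.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def paired_auc_diff(y, score_a, score_b, n_boot=2000, seed=0):
    """Bootstrap CI for AUC(score_a) - AUC(score_b) when both markers
    are measured on the same patients; resampling whole patients
    preserves the correlation between the two AUCs."""
    rng = np.random.default_rng(seed)
    n, diffs = len(y), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y[idx])) < 2:   # need both outcomes in a resample
            continue
        diffs.append(roc_auc_score(y[idx], score_a[idx])
                     - roc_auc_score(y[idx], score_b[idx]))
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return float(np.mean(diffs)), (float(lo), float(hi))

# Hypothetical stand-ins for outcome, L/A ratio and lactate.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 939)
la = rng.random(939) + 0.10 * y
lactate = rng.random(939) + 0.05 * y
print(paired_auc_diff(y, la, lactate))
```

A confidence interval for the difference that excludes zero corresponds to a significant difference between the two markers.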
The accuracy of lactate in predicting mortality in sepsis has been studied using different cut-off ranges. Trzeciak et al. showed that a lactate ≥ 4 mmol/L had a sensitivity and specificity of 35% and 95%, respectively, in predicting early mortality (within 3 days) and of 19% and 93%, respectively, in predicting in-hospital mortality [9]. In another study, a lactate ≥ 4 mmol/L had a sensitivity and specificity of 36% and 92% with regard to in-hospital mortality [4]; when a cut-off of 2.5 mmol/L or above was chosen, sensitivity and specificity were found to be 59% and 71%, respectively [4]. In our study, the optimal cut-off value of the L/A ratio that distinguishes survivors from non-survivors was 0.115 (equivalent to 1.15, owing to the different unit used for albumin in our study), with a sensitivity and specificity of 35% and 81% for all septic patients. This is slightly lower than the values reported by Wang et al. (prospective study), Lichtenauer et al. (retrospective study) and Shin et al. (retrospective study), who reported higher cut-off values of 1.32, 1.5 and 1.7, respectively [14,20,21]. Our cut-off was closer to that of Gharipour et al. (retrospective study), at around 1.01 [19]. The optimal cut-off value is still a matter of debate and has yet to be determined by future prospective studies. It is interesting to note that when we stratified patients based on chronic medical conditions, the end-stage liver disease subgroup had the highest L/A ratio cut-off (0.21). This can be explained by the impaired hepatic clearance of lactate as well as the lower synthesis of albumin, both of which increase the L/A ratio in this subgroup [6,14]. This subgroup also had the highest AUC (0.77) when stratifying patients based on medical comorbidities.

Limitations
Our study has several limitations. It was conducted in a single tertiary-care centre that deals with complex and referral cases. We did not compare the AUC of the L/A ratio with validated scoring systems such as APACHE II in the ICU. Our study focussed on in-hospital mortality and did not include long-term mortality after hospital discharge. Finally, the number of patients who developed septic shock was relatively low, which may explain why there was no difference in the AUCs between the L/A ratio and lactate alone. A study involving a larger sample size and multiple centres would be the appropriate next step to better evaluate the prognostic value of the L/A ratio and determine the optimal cut-off that discriminates between survivors and non-survivors.

Conclusion
The L/A ratio proved to be a better predictor of inpatient mortality than lactate alone in sepsis patients. This pattern also held across various subgroups in our study (malignancy, diabetics, age above 65, lactate level less than 2 mmol/L, albumin less than 30 g/L). This would provide ED healthcare providers with tools to risk-stratify patients and predict hospital course, thus tailoring early management and interventions accordingly. However, our findings do not favour the use of the ratio over lactate in septic shock patients. Further studies should be done to evaluate the prognostic value of the ratio in patients with sepsis and septic shock.
Silver dressings for the healing of venous leg ulcer

Abstract
This study aimed to evaluate whether silver-containing dressings are superior to other types of dressings in the treatment of venous leg ulcers (VLU), and to identify their specific advantages. Eight databases (Cochrane Library, PubMed, Web of Science, Ovid-Medline, Wanfang, VIP, China Biology Medicine, and China National Knowledge Infrastructure) were systematically searched from inception to May 2019 for randomized controlled trials (RCTs). The primary outcome was complete wound healing, and the secondary outcomes included absolute wound size changes (change in area in cm² since baseline), relative changes (percentage change in area relative to baseline), and healing rate. Two reviewers independently evaluated the risk of bias using the Cochrane Collaboration assessment tool and extracted the data according to a predesigned table. All analyses were performed using Review Manager software (version 5.3). A total of 8 studies qualified and were included in the meta-analysis, comprising 1057 patients (experimental: 526, control: 531). Both complete wound healing and wound healing rates were reported in 5 studies; two and 3 studies reported the effect of silver dressings on absolute and relative wound size changes, respectively. Most of the studies used intention-to-treat analysis. There was sufficient evidence that silver-containing dressings can accelerate the healing of chronic VLU and improve healing over a short duration. However, clinical trials with long-term follow-up data are needed to confirm whether silver dressings have advantages over other dressings with regard to complete wound healing.

Introduction
Venous leg ulcer (VLU) is a common type of chronic wound, characterized by a long course, slow healing and frequent recurrence. [1] A chronic venous ulcer is the most severe manifestation of chronic venous insufficiency and accounts for the vast majority of lower-limb ulcerations. [2] The incidence of VLU increases progressively with age and is estimated to be 1% to 3% in the adult population. [3] Moreover, venous ulcers can lead to pain, activity restriction, sleep disturbances and other problems, which can seriously affect patients' quality of life; the high cost of treatment is also a huge economic burden on patients and society. [4,5] Multilayer compression therapy is currently considered the gold standard for VLU treatment. [6,7] Wound contact dressings are usually placed underneath the compression devices and play a key role in promoting ulcer healing. [8,9] Several studies have demonstrated that in patients with VLUs, wounds may last for several years without any real improvement. [10,11] It is well known that infection is a major cause of slow wound healing and failure to heal. [12] Silver, as a broad-spectrum antimicrobial agent, covers almost all bacteria that colonize chronic wounds. In addition, silver ions have a strong anti-inflammatory effect, inhibit metalloproteinase activity and promote apoptosis of senescent cells; resistance to silver ions rarely occurs because of their complex mechanism of action. [13] Therefore, silver-containing dressings have become increasingly popular for wound care in clinical practice. [14] Silver has a long history in wound management, but scientific evidence of its efficacy is lacking.
A systematic review published in the Cochrane Library in 2010 showed that there was insufficient evidence to determine whether silver dressings could promote wound healing or prevent wound infection. [15] However, Marissa et al showed that there is strong evidence that silver-containing dressings or local silver agents can facilitate wound area reduction. [16] Furthermore, a meta-analysis published in 2017, including 31 randomized controlled trials (RCTs) and 8 cohort studies, pointed out that the role of silver in wound treatment is significantly better than is recognized in current scientific debates: if used correctly, silver not only has antimicrobial effects, but is also cost-effective and can improve patients' quality of life. [17] It is evident that the effect of silver in wound care has always been controversial, and its effect in patients with venous ulcers is not fully understood. Therefore, the purpose of this meta-analysis was to evaluate whether silver-containing dressings are superior to other types of dressings in the treatment of VLU, and to elucidate their specific advantages.

Search strategy
Ethical review was not applicable for the current study, since all the data analyzed were acquired from published papers. This meta-analysis was performed based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). [18] RCTs published from the inception of the databases to May 2019 were retrieved. The Cochrane Library, PubMed, Web of Science, Ovid-Medline, Wanfang, VIP, China Biology Medicine (CBM), and China National Knowledge Infrastructure (CNKI) databases were systematically searched without any language limitations. The following search terms were used: "silver dressing" or "silver-based" or "silver-releasing" or "silver-impregnated" or "silver-containing" or "silver-donating" or "silver" in combination with "venous ulcer" or "leg ulcer" or "varicose ulcer" or "crural ulcer" or "stasis ulcer" or "VLU". Two reviewers performed a preliminary screening of the studies by reading the titles and abstracts. Full texts of articles that seemed to meet the inclusion criteria were obtained for further assessment. Additionally, the references of included studies were searched.

Participants
Patients diagnosed with venous ulcer, without location or grade limitation, were included. Studies were also included if the data of patients with venous ulcer could be extracted separately, or if a predominant (≥70%) proportion of the participants in both groups (cases and controls) had leg ulcers of venous aetiology.

Interventions
The experimental group was treated with various types of silver-containing dressings, whereas the control group was treated with other types of dressings or local preparations. Both groups had to have been treated with pressure therapy.

Outcomes
The primary outcome was complete healing of the ulcers. The secondary outcomes included absolute wound size changes (change in area in cm² since baseline), relative changes (percentage change in area relative to baseline), healing rate (e.g., cm²/week), and infection rate or reduction in infection. At least one of these outcomes had to be included in the trial.

Wound dressings
The classification of dressings usually depends on the key material in their construction. In the current study, all dressings containing silver were classified as the experimental group regardless of other characteristics.
Usually, the control dressing had characteristics similar to those of the test dressing, the only difference being the silver content. The control group was divided into 3 subgroups according to dressing characteristics: the traditional dressing group, the antibacterial dressing group and the other modern dressing group. Traditional dressing for the treatment of venous ulcers mainly refers to Vaseline gauze, which does not adhere to the wound but cannot promote the whole healing process. Antimicrobial dressings are composed of a gauze or low-adherent dressing impregnated with an ointment thought to have antimicrobial properties. [19] They are mostly used in chronic wounds to control wound infection. Modern dressings involve a series of dressings with special functions, including foam dressings, hydrocolloid dressings, alginate dressings and so on; their functions include, but are not limited to, absorbing and containing exudate, optimizing wound pH and relieving pain.

Data extraction
Two reviewers independently extracted the data according to the predesigned table, which included the general characteristics of the studies, key baseline participant data (age, gender, ulcer size, ulcer duration), number of participants, details of dressings or local preparations, duration of trials, primary and secondary outcomes, and withdrawal numbers.

Quality assessment
Two reviewers independently evaluated the risk of bias of the included RCTs using the assessment tool provided by the Cochrane Collaboration, [20] which covers selection bias, performance bias, detection bias, attrition bias, reporting bias and other biases. Each aspect was rated as "high risk," "low risk," or "unclear." Disagreements were discussed between the 2 reviewers, and a third reviewer assisted in reaching a consensus if necessary.

Statistical analysis
The meta-analysis was performed using Review Manager software (version 5.3). We used the risk difference (RD) and 95% confidence interval (CI) for dichotomous variables. Continuous variables were analysed using the weighted mean difference (WMD) or standardized mean difference (SMD) and their 95% CIs. The chi-squared test (Q test) was used to judge the heterogeneity of the studies: if P > .1 and I² < 50%, the data were considered homogeneous and a fixed-effects model was adopted; if P < .1 and I² ≥ 50%, a random-effects model was adopted. If P < .1 and the source of heterogeneity could not be determined, or if outcomes could not be combined because of inconsistent presentation, only descriptive analysis was performed. Sensitivity analysis was used to investigate the effect of fixed-effects versus random-effects models on heterogeneity. If a sufficient number of studies had been included, a funnel plot would have been used to investigate publication bias.
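The decision rule just described (Cochran's Q with I², then a fixed- or random-effects pooled estimate) can be sketched in a few lines. The per-study risk differences and variances below are illustrative stand-ins rather than the extracted trial data, and the DerSimonian-Laird estimator is one standard choice for the between-study variance; Review Manager's output should broadly match this logic.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical per-study risk differences and variances, standing in
# for the values extracted from the included RCTs.
rd = np.array([0.10, 0.02, 0.15, -0.01, 0.08])
var = np.array([0.004, 0.003, 0.006, 0.002, 0.005])

w = 1 / var                              # inverse-variance weights
rd_fixed = np.sum(w * rd) / np.sum(w)    # fixed-effects pooled estimate

# Cochran's Q and I^2 quantify between-study heterogeneity.
q = np.sum(w * (rd - rd_fixed) ** 2)
df = len(rd) - 1
p_q = chi2.sf(q, df)
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird between-study variance drives the random-effects
# model, used when P < .1 and I^2 >= 50% (the rule stated above).
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (var + tau2)
rd_random = np.sum(w_re * rd) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"Q = {q:.2f} (P = {p_q:.3f}), I^2 = {i2:.0f}%, tau^2 = {tau2:.4f}")
print(f"pooled RD = {rd_random:.3f} "
      f"(95% CI {rd_random - 1.96 * se:.3f} to {rd_random + 1.96 * se:.3f})")
```

The same machinery applies to the continuous outcomes by replacing the risk differences with (standardized) mean differences and their variances.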
Literature search
A total of 654 relevant studies were obtained by the preliminary search, and 2 related studies were added through reading previous articles. After eliminating duplicates, 342 papers remained. Then, 225 articles were excluded after reading titles and abstracts owing to apparent non-compliance with the inclusion criteria. Finally, 117 articles were selected for full-text review, of which 108 were excluded for the following reasons: non-randomized studies (n = 45), not evaluating silver dressings (n = 14), not combined with pressure therapy (n = 3), mixed with other chronic wounds or interventions (n = 34), insufficient end points (n = 5) and full text unavailable (n = 7). Therefore, 9 RCTs were included in the qualitative synthesis, of which the data in 1 RCT could not be integrated. Finally, 8 RCTs were included in this study [8,13,21,22,24-27] (Fig. 1).

Characteristics of the included trials
A total of 1057 participants in the 8 RCTs were included in this study. The RCTs were conducted in France (n = 2), [13,21] Britain (n = 1), [8] Poland (n = 1), [22] Australia (n = 1), [23] and China (n = 2), [25,26] and 1 [24] was a multinational trial in 5 Western countries. Details of the baseline characteristics of each study are provided in Table 1. The average age of participants ranged from 60 to 80 years, except for 1 study in which participant age ranged from 18 to 90 years. The baseline ulcer size varied from 6 cm² to 47 cm², and ulcer duration varied from a month to about 3 years. In the RCT of Kerihuel et al, [21] the ankle brachial pressure index (ABPI) of patients was above 0.7; twenty-seven (45%) of the patients were already being treated with compression at inclusion and 22 (36.7%) had edema. Sixteen (53.3%) and 19 (63.3%) patients given the test and control dressings, respectively, had a history of ulceration. The study by Krasowski et al [22] required an ABPI above 0.8 and leg wounds of 2 to 200 cm² that had not healed for at least 6 weeks. The ABPI requirement of Lazareth et al [13] was consistent with that of Krasowski et al; in their study, leg ulcers had been present for almost 11 months on average (median 9.0 months) and 65% were recurrent. On average, 51 ± 28% of the wound surfaces were covered with sloughy tissue (yellow appearance on a colourimetric scale) and 2.9% presented with healthy perilesional skin. In the research carried out by Michaels et al, [8] 28.2% of the ulcers were larger than 3 cm and 38.0% of ulcers had lasted longer than 12 weeks; 53.1% of patients reported previous episodes of leg ulceration. In a relatively large RCT by Miller et al, [23] the ABPI of patients was above 0.6 and the wound was 15 cm or less in diameter; in addition, patients had at least one sign of infection or critical colonization of the wound. In the research of Senet et al, [24] the ABPI of participants was above 0.8 and ulcers were between 2 cm and 13 cm in all directions, or the ulcers had been properly treated within the 4 weeks before recruitment but the ulcer size had been reduced by less than 20%. Zhang et al [25] reported that the difference in baseline data between the 2 groups was not statistically significant (P > .05); however, the study did not provide any information about the wounds at baseline. Zhou et al [26] selected patients with venous leg ulcers (VLU) who first presented to the outpatient treatment centre of the hospital; the ulcers had lasted for 1 to 3 months, and the average wound area was 46.58 ± 0.68 cm² in the observation group and 47.13 ± 0.43 cm² in the control group. All studies reported the role of silver dressings in VLU wound healing. Five studies [8,23-26] compared the effects of silver-containing dressings on complete wound healing.
Two [13,21] and 3 [13,21,24] studies reported the effect on absolute and relative wound size changes, respectively. Five studies [13,22-24,26] analyzed the effect on wound healing rate. Four studies had a sample size of 60 [21,26] or 80, [22,25] while the other 4 studies had participant numbers ranging from 102 to 281. [8,13,23,24] The duration of each trial ranged from 3 [26] to 12 [8,23] weeks, and the dropout rate ranged from 0 [25,26] to 16.7% [13] (Table 2).

Wound dressings
In the study of Kerihuel et al, [21] hydrocolloid dressings were used in the control group; these are usually a breathable membrane or foam pad made of a water-absorbent colloidal matrix. In the studies of Krasowski et al [22] and Miller et al, [23] antibacterial dressings were used as controls; the main antibacterial substances were octenidine and iodine, respectively. The lipidocolloid dressings used in Lazareth's research [13] were composed of a polyester textile mesh impregnated with hydrocolloid particles and Vaseline, and the non-silver low-adherence dressings used in Michaels's study [8] usually consist of cotton pads placed directly in contact with the wound. The study of Senet et al [24] used Biatain dressings, which are made of hydrophilic polyurethane hydrocellular foam covered by a plain polyurethane Biatain top film. In addition, 2 studies [25,26] used traditional dressings as controls. Because of the small number of included studies, this meta-analysis was not grouped according to the dressing characteristics of the control group.

Risk of bias
The risk of bias across the 8 included RCTs is shown in Figures 2 and 3. All studies had a low risk of bias regarding incomplete outcome data and selective reporting. Three studies, [8,24,25] which reported random sequence generation in detail, had a low risk of bias; the risk of bias in the remaining 5 studies was unclear. As for allocation concealment and blinding of outcome assessment, 5 [13,21-24] and 6 [8,13,21-24] studies had a low risk of bias, respectively. Only 2 studies [23,24] mentioned blinding of participants and personnel; the 6 other studies were considered to have a high risk because they did not address this point. In terms of other biases, 7 studies [13,21-26] had a low risk. Because only 8 articles were included in this study, no funnel plot analysis was conducted, so it was not possible to determine whether there was potential publication bias.

Analysis of complete wound healing
Five studies [8,23-26] reported complete wound healing. Statistical heterogeneity was present among the studies (P = .09, I² = 50%), so the random-effects model was used. The meta-analysis demonstrated that silver dressings had no meaningful effect on the proportion of ulcers completely healed, and the combined effect was not statistically significant (RD = 0.07, 95% CI [-0.00, 0.15], P = .06, Fig. 4).

Analysis of absolute wound size changes
Two studies [13,21] reported an absolute reduction in ulcer size. However, because the outcome was presented differently in each, only descriptive analysis was carried out. In the study by Kerihuel et al, [21] the median reduction in ulcer area in the silver dressing group was -4.5 (-30.9, -22.5) cm² at the fourth week, greater than that in the control group, -3.5 (-53.3, -18.5) cm².
Lazareth et al [13] showed that the ulcer area in the experimental group decreased by 6.5 ± 13.4 cm² at the fourth week, more than in the control group (1.3 ± 9.0 cm²); the difference was statistically significant (P = .023).

Analysis of relative wound size changes (percentage)
Three studies [13,21,24] reported relative reductions in ulcer size. One of the studies [21] differed in the presentation of the outcome, so we performed a descriptive analysis for it. There was no statistical heterogeneity (P = .28, I² = 13%) between the 2 RCTs entered into the meta-analysis, so the fixed-effects model was used. The meta-analysis showed that silver dressings improved the relative reduction in ulcer size, and the combined effect was statistically significant (MD = 10.75, 95% CI [1.61, 19.89], P = .02, Fig. 5). When the same data were reanalyzed using a random-effects model, the results remained statistically significant (MD = 11.13, 95% CI [0.94, 21.31], P = .03). In the study by Kerihuel et al, the median relative reduction in ulcer area in the silver dressing group was -35.6 (-100, -182.1)% at the fourth week, lower than that of the control group, which was -40.9 (-100, -308.3)%.

Analysis of healing rate
Five studies [13,22-24,26] reported the healing rate (per day) of ulcers. There was significant heterogeneity among the studies (P < .01, I² = 92%). After analysis, we found that the source of heterogeneity may have been related to the dressings used in the control groups: among the 5 included studies, 1 study [22] used an octenidine dressing with strong antimicrobial ability in the control group, while the other 4 used dressings without antimicrobial activity or with only general antimicrobial activity. Therefore, these 4 studies were combined in the meta-analysis, and the 1 using octenidine was analyzed descriptively. There was no statistical heterogeneity (P = .28, I² = 21%) among the 4 RCTs entered into the meta-analysis, and the fixed-effects model was used. The meta-analysis suggested that silver dressings could improve the healing rate of ulcers, and the combined effect was statistically significant (MD = 0.23, 95% CI [0.07, 0.39], P = .004, Fig. 6). When the same data were reanalyzed using a random-effects model, the results remained statistically significant (MD = 0.24, 95% CI [0.06, 0.43], P = .009).

Analysis of infection rate or reduction in infection
Four studies [13,21,22,24] reported information about wound infection. However, because of the differing presentation of the outcome, only descriptive analysis was carried out. In the study by Kerihuel et al, [21] there was 1 wound infection in each of the silver dressing group and the control group. Krasowski et al [22] found that on the 28th day of the trial the microbiological results did not differ meaningfully between the groups; the difference was not statistically significant (P = .08). Lazareth et al [13] indicated that no infection occurred in the silver-treated group versus 1 infection in the control group within 4 weeks. Senet et al [24] reported that the frequency of patients reporting at least 3 of 5 predefined local inflammatory signs (pain, odour, erythema, oedema and exudate) was equal in both groups after 6 weeks of treatment.

Discussion
Chronic venous ulcer of the lower extremity is a common chronic disease prone to recurrence, accompanied by varying degrees of chronic pain, which seriously affects patients' sleep and quality of life.
Though the application of silver dressings in the treatment of VLU has become increasingly popular in recent years, the specific effect of these dressings on wound healing is still uncertain or controversial. [27] This may explain why this study focuses only on wound-healing parameters. Overall, the quality of the 8 RCTs included in this study was relatively good. Most of the studies used intention-to-treat analysis and explained the detailed reasons for each person's withdrawal. Though only 2 studies [23,24] explicitly mentioned the use of double blinding, most [8,13,21-24] of the outcomes were measured using blinded methods. However, most studies had the problem of a short intervention time: though several studies [13,22,24] were conducted for a relatively long time (8-10 weeks), silver dressings were used only in the first 4 weeks, making long-term follow-up data unavailable. Finally, we found that different RCTs had different or even contrasting results for the same outcome, which made it almost impossible to reach a strong recommendation without meta-analysis.

In this meta-analysis, there was no significant difference in complete wound healing between the experimental and control groups (P = .06). This may be related to the duration of intervention. Because of the high cost of silver dressings and the difficulty of long-term follow-up, RCTs evaluating silver dressings usually lasted several weeks rather than the couple of months usually needed for chronic wound healing. [16] In the current research, 5 original studies [8,23-26] reported the proportion of completely healed ulcer wounds, in 3 [24-26] of which silver dressings were applied for no more than 6 weeks. Therefore, we believe that in order to observe a difference in complete wound healing, the follow-up duration must be long enough. For example, in a 9-week RCT of silver-containing dressings in the management of infected venous ulcers by Dimakakos et al, [28] statistical differences in complete wound healing were observed. It is suggested that future studies lengthen the intervention time and increase the frequency of wound assessment in order to obtain higher-quality clinical data.

For the absolute reduction of wound area, although only descriptive analysis was performed because of the differing presentations of the outcome, the results of the two 4-week RCTs favored silver dressings. In the study of Lazareth et al, [13] after week 4 all patients in the silver dressing group switched to the non-silver-containing contact layer for 4 additional weeks of treatment; at week 8, the median absolute wound area reduction was still significantly different between the 2 groups (P = .002). With regard to relative wound area reduction and wound healing rate, our meta-analysis showed that silver-containing dressings could effectively reduce the wound area (P = .03) and accelerate wound healing (P = .004). In the study by Senet et al, [24] patients were treated for 6 weeks with either Biatain or Biatain-Ag followed by 4 weeks of treatment with Biatain; relative area reduction and healing rate showed significant differences between the experimental and control groups in the subgroup of patients with older and larger ulcers (P < .05). At the 10th week of follow-up, the difference in relative wound area reduction between the 2 groups was even more marked than after 6 weeks of treatment.
This indicates that the effect of silver appears to continue for at least 4 weeks after treatment. Similarly, Miller et al pointed out that silver dressings were associated with faster wound healing rates in the first 2 weeks. A systematic review reported the same evidence, [29] and no differences were found on long-term follow-up. These findings suggest that when patients have large leg ulcers or a history of recurrent ulcers and a rapid reduction in wound size is desired, silver dressings may be the best choice. Of note, the results of the experiment conducted by Krasowski et al were quite different from the others. This was mainly due to the octenidine dressing used in that trial, which may have stronger antimicrobial activity and lower cytotoxicity than silver dressings. [30] Therefore, owing to clinical heterogeneity, this study was excluded from the meta-analysis.

As for the infection rate or reduction in infection of the ulcers, descriptive analysis of 4 studies [13,21,22,24] showed that silver dressings had no advantage in controlling wound infection; on the contrary, they were even less effective antimicrobially than the octenidine dressing, which may be related to the latter's unique antibacterial properties. [22] As infection is an important factor in chronic wound healing, more clinical studies are needed to quantify this outcome and explore differences between various antimicrobial dressings in the treatment of chronic wounds.

Previous systematic reviews and meta-analyses have not always supported the role of silver-containing dressings in the management of chronic wounds. [15,31,32] However, consistent with the current results, several studies have shown that silver dressings have clear advantages in accelerating wound healing and reducing wound area in certain circumstances; [16,17,33,34] even so, few RCTs have found statistical differences in complete wound healing, owing to the lack of high-quality long-term follow-up data. Carter et al's study, [16] which included not only VLUs but also other types of leg wounds, showed that silver treatments and silver dressings can significantly reduce wound size; however, no significant advantages were found in complete wound healing or healing rates. A recent Cochrane systematic meta-analysis [19] stated that silver dressings may increase the probability of VLU healing compared with non-adherent dressings; however, when compared with foam dressings and hydrocolloid dressings, it is unclear whether the intervention increases the probability of healing. Unlike other studies, this study focuses on the effects of various silver dressings on VLU wounds compared with all other non-silver dressings. Our results strengthen the proposition that silver-containing dressings can improve the healing of chronic wounds, especially chronic VLU wounds. In addition, silver dressings have good acceptability and tolerance and can reduce pain and wound exudate. [33,35] Some studies have pointed out that silver dressings can improve patients' health-related quality of life and are cost-effective in wound treatment, [17,36,37] whereas other studies have reported no differences compared with other dressings. [8,38] These conflicting conclusions may be due to the fact that the wound types and dressings included in each study were different.
Therefore, it is necessary to evaluate specific chronic wounds in order to obtain more accurate results, and to conduct more clinical trials comparing the effects of different silver dressings in wound management. Though we conducted a comprehensive search of the literature on the treatment of VLU with silver-containing dressings, the current study still has some limitations. First, because of the limited number of high-quality studies retrieved, effective subgroup analyses (by silver dressing type, or antibacterial versus non-antimicrobial control dressings) could not be performed; hence, we cannot draw conclusions about which silver dressing is the most effective for VLU or whether silver dressings are more beneficial than other antibacterial dressings in the management of chronic VLU. Second, of the 8 included studies, 4 were conducted for 4 weeks and 1 for only 3 weeks. Third, although the meta-analysis showed that silver-containing dressings could significantly reduce the wound area and accelerate the healing rate of VLU, more RCTs are needed to support this result. In addition, Egger et al [39] have emphasized that if double blinding is not adopted or allocation concealment is insufficient, results may be overestimated by 15% and 30%, respectively, which means that the therapeutic effect of silver dressings on chronic VLU may have been exaggerated in this evaluation. Nevertheless, our study provides a more accurate basis for patients with venous ulcers to choose silver dressings, and gives some direction for future research.

In conclusion, the results of this meta-analysis showed that the function of silver dressings in VLU is similar to that in other chronic wounds. Though no differences were observed in the rate of complete wound healing, probably because of the lack of long-term follow-up data, there was sufficient evidence that silver-containing dressings can accelerate the healing rate of chronic VLU and improve healing in a short time. Future research should focus on extending the intervention time and enlarging the sample size, emphasize differences between various silver dressings, and clarify whether silver-containing dressings have unique advantages in chronic wound management compared with other antibacterial dressings.
Probable detection of an eruptive filament from a superflare on a solar-type star

Solar flares are often accompanied by filament/prominence eruptions (~10⁴ K and ~10¹⁰-10¹¹ cm⁻³), sometimes leading to coronal mass ejections (CMEs) that directly affect the Earth's environment [1,2]. 'Superflares' are found on some active solar-type (G-type main-sequence) stars [3-5], but the filament eruption-coronal mass ejection association has not been established. Here we show that our optical spectroscopic observation of the young solar-type star EK Draconis reveals evidence for a stellar filament eruption associated with a superflare. This superflare emitted a radiated energy of 2.0 × 10³³ erg, and a blueshifted hydrogen absorption component with a high velocity of −510 km s⁻¹ was observed shortly afterwards. The temporal changes in the spectra strongly resemble those of solar filament eruptions. Comparing this eruption with solar filament eruptions in terms of length scale and velocity strongly suggests that a stellar coronal mass ejection occurred. The erupted filament mass of 1.1 × 10¹⁸ g is ten times larger than those of the largest solar coronal mass ejections. The massive filament eruption and the associated coronal mass ejection provide the opportunity to evaluate how such events affect the environment of young exoplanets/the young Earth [6] and stellar mass/angular momentum evolution [7].

An energetic eruptive filament on EK Draconis most probably launched a coronal mass ejection with a mass ten times larger than the largest solar coronal mass ejection. Studying such ejections provides insight into stellar angular momentum loss and the habitability of orbiting planets.

Both ground-based spectroscopic observations simultaneously recorded the same spectral change (Fig. 1c-e and Extended Data Figs. 2a and 3a), demonstrating that low-temperature, high-density neutral plasma above the stellar disk moved at high speed toward the observer before some parts finally started to fall back to the surface. In addition, the deceleration is not monotonic: it was 0.34 ± 0.04 km s⁻² in the initial phase, dropping to 0.016 ± 0.008 km s⁻² in the later phase (Fig. 1c,d and Extended Data Fig. 3b). This is interpreted in terms of changes in the height of the ejected mass. The observed deceleration is in good agreement with that expected from the surface gravity of approximately 0.30 ± 0.05 km s⁻² (ref. [9]), although the initial value is slightly larger.

How much do the stellar spectral changes obtained here actually resemble those of solar filament eruptions? Blueshifted Hα absorption profiles are often observed in solar filament eruptions [1,14]. As in Fig. 2, we generated spatially integrated Hα spectra of a solar flare/filament eruption that occurred on the solar disk using the SMART (Solar Magnetic Activity Research Telescope) data [15] (Extended Data Fig. 4 and Supplementary Video 1). We converted these to full-disk pre-flare-subtracted spectra by multiplying by the partial-region/full-disk ratio (that is, virtual Sun-as-a-star spectra). We found that a blueshifted absorption component at approximately 100 km s⁻¹ was predominant soon after the solar flare, and the spatially integrated Hα EW showed enhanced absorption (Fig. 2a). These blueshifted profiles are unequivocally due to the filament eruption. Later, the blueshifted component decelerated and gradually turned into slow, redshifted absorption (Fig. 2b,c). The Hα EW returned to the pre-flare level in approximately 40 min (Fig. 2a).
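As an illustration of this partial-to-full-disk conversion (formalized in Methods, equations (1) and (2)), the sketch below shows how a virtual Sun-as-a-star spectrum could be computed from a time series of Hα spectroheliograms. The array layout, the names and the single continuum index are our assumptions, not the actual SDDI pipeline.

```python
import numpy as np

def sun_as_a_star(cube, cont_idx, region, t0):
    """Virtual Sun-as-a-star, pre-flare-subtracted spectra.

    cube     : intensities, shape (time, wavelength, y, x)
    cont_idx : wavelength index of the continuum point (6,570.8 A)
    region   : boolean mask (y, x) covering the eruption
    t0       : index of a pre-flare reference time
    """
    # L(lambda, t, A): intensity integrated over a spatial region.
    full = cube.sum(axis=(2, 3))              # (time, wavelength)
    local = cube[:, :, region].sum(axis=2)    # (time, wavelength)

    # Equation (1): local pre-flare-subtracted spectrum, normalized
    # by the local continuum level.
    ds_local = (local - local[t0]) / local[:, [cont_idx]]

    # Equation (2): scale by the local-to-full-disk continuum ratio.
    return ds_local * local[:, [cont_idx]] / full[:, [cont_idx]]
```

Integrating the result over ±10 Å around the line centre then gives a Sun-as-a-star ΔHα EW light curve like that in Fig. 2a.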
Although the energy scales and velocities are different, the solar data strongly resemble the spectral changes in the superflare on EK Dra (see Supplementary Information for another event). This similarity suggests that the stellar phenomenon is the same as a simply magnified picture of the solar filament eruption. A filament eruption is the only explanation for the blueshifted absorption component on EK Dra by solar analogy [1]. The hypothesis that the blueshifted absorption on EK Dra might come from up-/downflows in flare kernels must be rejected because flare kernels never show Hα absorption [16,17]. Also, downflows in cooled magnetic loops (known as post-flare loops) [14] show redshifted absorption, so they cannot explain the blueshifted absorption (however, the redshifted absorption on EK Dra in the later phase might be caused by post-flare loops [14]). Rotational visibility of prominences or spots is also inadequate to explain it, since the rotation speed of EK Dra is only 16.4 ± 0.1 km s⁻¹ (ref. [9]). Thus, we concluded that we detected a stellar filament eruption on the solar-type star. Some observational signatures of stellar filament eruptions or CMEs have been reported previously for cooler K-M dwarfs [18-22] and evolved giant stars [23] (see Methods and refs. [6,24] for reviews). The observation of a giant star shows a blueshifted X-ray emission line at 90 km s⁻¹ in the post-flare phase, and a hotter CME has been proposed as a possible explanation [23]. Recently, X-ray/extreme-UV dimmings have been reported as indirect evidence of stellar CMEs on K-M dwarfs [22].

Fig. 1 (caption, partial): The 1σ value of the pre-flare light curve (−150 min to 0 min) is shown in blue. b, Light curves of the Hα EW observed using the medium-dispersion spectrograph MALLS (Medium and Low-Dispersion Long-Slit Spectrograph) at the Nayuta telescope (grey circles) and the low-dispersion spectrograph KOOLS-IFU (Kyoto Okayama Optical Low-Dispersion Spectrograph with optical-fibre Integral Field Unit) installed at the Seimei telescope (red triangles) during the same observing period as in a. The Hα emission was integrated within ±10 Å of the Hα line centre (6,562.8 Å) after dividing by the continuum level, and the pre-flare level was subtracted; positive and negative values represent emission and absorption, respectively, relative to the pre-flare level. The 1σ values of the pre-flare light curves are plotted in red and black for the Seimei and Nayuta data, respectively. c,d, Two-dimensional Hα spectra obtained using the Seimei telescope (c) and the Nayuta telescope (d). Red and blue correspond to emission and absorption, respectively; the dashed lines indicate the stellar surface gravity (g*) and half of the surface gravity (0.5 g*); c and d share the upper colour bar. e, Temporal evolution of the pre-flare-subtracted Hα spectra observed using the Seimei telescope (red) and the Nayuta telescope (black), with the spectra shifted by constant values for clarity. The spectra are binned in time and normalized by the stellar continuum level; the vertical dotted line indicates the Hα line centre, the horizontal dotted lines indicate the zero level for each spectrum, and a 1σ error bar around the line core, based on the residual scattering in the line wing, is also shown.
In M-dwarf flares, many blueshifted Balmer/UV line emission components have been reported [18-21,24], which are interpreted as filament eruptions. Some M-dwarf flares share properties with the eruption on EK Dra: the blueshifted emissions have high velocities of hundreds of kilometres per second, and some exhibit velocity changes and appear after the impulsive phase [20,21]. For M-dwarf events, the number of studies reporting highly time-resolved velocity variations of blueshifted components is still small (~5 min cadence), and a simultaneous white-light flare has never been detected. Our detection of a stellar filament eruption is reliable because we provide solar counterparts, highly time-resolved spectra (~50 s cadence) and a simultaneous TESS white-light flare.

What properties does the filament eruption on EK Dra have? The maximum observed velocity of the blueshifted component was ~−510 km s⁻¹, with a width of 220 km s⁻¹. This is larger than the typical velocities of solar filament eruptions associated with CMEs (10-400 km s⁻¹) [2], although it is a little smaller than the escape velocity at the surface of EK Dra (~670 km s⁻¹). The cool plasma reached at least ~1.0 stellar radius from the stellar surface (or the initial height), as derived by integrating the velocity over time (or ~3.2 stellar radii from the stellar surface on the basis of the deceleration rates). In this case, a projection angle of at most 45° can be allowed if we assume that the event occurred at the disk centre; at this projection angle the velocity can be up to ~−720 km s⁻¹, so there is a possibility that the velocities of some components of the EK Dra eruption exceeded the escape velocity. However, it should be noted that there are weak redshifted components with velocities of a few 10 km s⁻¹ in the late phase, indicating that some material fell back to the star; this is often observed in solar filament eruptions with CMEs [25].

The filament area is estimated to be 1.6 × 10²¹ cm² (5.6% of the stellar disk), and the erupted mass is calculated to be 1.1 (+4.2/−0.9) × 10¹⁸ g on the basis of the absorption components. The mass is more than ten times larger than those of the largest solar CMEs [26,27] (note that the mass can be somewhat under- or overestimated; Methods). This mass estimate is in reasonable agreement with those predicted from empirical [26,27] and theoretical [28] solar scaling relations between CME mass and flare energy, within the error bars (~9.4 (+3.2/−2.4) × 10¹⁶ g and 3.1 (+1.6/−1.1) × 10¹⁷ g for refs. [27] and [26], respectively) (Fig. 3a). This suggests that the stellar filament eruption can share a common underlying mechanism with smaller-scale filament eruptions/CMEs (that is, magnetic energy release [1,28]), although the absolute values of most physical quantities are very different. Moreover, the kinetic energy is calculated to be 3.5 (+14.0/−3.0) × 10³² erg, which is 16% of the radiated energy in white light. The magnetic energy stored around the starspots on EK Dra is at least 8.0 × 10³⁵ erg, which is enough to produce superflares and filament eruptions with energies of ~10³³ erg. In addition, this kinetic energy is slightly smaller than that extrapolated from the solar CME scaling law (4.8 (+1.1/−0.9) × 10³³ erg; ref. [27]) (Fig. 3b), which is similar to the filament eruption/CME candidates on other stars [24].
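As a quick order-of-magnitude check, the quoted kinetic energy follows directly from the erupted mass and the mean bulk velocity of 258 km s⁻¹ derived in Methods; the two-line CGS calculation below is ours, and the spread in the paper's value comes from the mass uncertainty.

```python
mass = 1.1e18   # erupted filament mass in g (central value from Methods)
v = 258e5       # mean bulk velocity: 258 km/s expressed in cm/s

e_kin = 0.5 * mass * v**2          # CGS units give the result in erg
print(f"E_kin ~ {e_kin:.1e} erg")  # ~3.7e32 erg, close to the quoted 3.5e32 erg
```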
In previous studies, it has been argued that the kinetic energy can be reduced by overlying magnetic fields [24,29]. The deceleration of our events was a few tens of per cent larger than the stellar surface gravity (Extended Data Fig. 3b), and the strong magnetic fields previously reported on EK Dra [9] may support this explanation. However, the small kinetic energy can also be understood through a solar analogy: the velocities of (lower-lying) filament eruptions are usually four to eight times lower than those of the corresponding (higher-lying) CMEs [2], and therefore the kinetic energies of filament eruptions are typically smaller (green symbols in Fig. 3b).

Did a CME occur in this event? The line-of-sight velocity of ~510 km s⁻¹ was lower than the escape velocity and some mass fell back, which may indicate a so-called 'failed' filament eruption [29]. However, this does not necessarily mean that a CME did not occur, again by solar analogy. In fact, erupted filaments often fall back to the Sun even when CMEs happen. For example, a well-studied solar event on 7 June 2011 involved a 200-600 km s⁻¹ filament eruption in which much of the filamentary material fell back to the Sun, but some mass clearly escaped as a CME with velocities of ~1,000 km s⁻¹ (ref. [25] and Supplementary Information); the event on EK Dra may correspond to this solar event. In addition, ref. [30] showed that whether a solar filament eruption leads to a CME can be distinguished simply by the parameter (V_r_max/100 km s⁻¹)(L/100 Mm)^0.96, where V_r_max is the maximum radial velocity and L is the length scale (Fig. 4). When the parameter exceeds ~0.8, the probability that a filament eruption leads to a CME is more than 90% [30]. The value of this parameter for the eruption on EK Dra is ~18, meaning that our detection of a fast and sizable stellar filament eruption is indirect evidence that mass escaped into interplanetary space as a CME.

Finally, we summarize future directions of our findings (see Supplementary Information for details). It is speculated that filament eruptions/CMEs associated with superflares can severely affect planetary atmospheres [6]. Our findings can therefore provide a proxy for possible enormous filament eruptions on young solar-type stars and the Sun, which would enable us to evaluate the effects on the ancient, young Solar System planets and the Earth, respectively. Further, it is also speculated that stellar mass loss due to filament eruptions/CMEs can affect the evolutionary theory of stellar mass, angular momentum and luminosity [7,26] more importantly than stellar winds. At present, the frequency and statistical properties of CMEs on solar-type stars are unknown, but important insights into these factors will be obtained by increasing the number of samples in the future.

Fig. 3 (caption): The red square represents the superflare on EK Dra, the black crosses denote solar CME data, the green triangles signify solar prominence/filament eruptions and surges taken from previous studies, and the green plus sign is the solar filament eruption/surge displayed in Fig. 2 and Supplementary Fig. 9 (see 'Velocity, mass and kinetic energy: solar data'). The kinetic energy of the eruption on EK Dra is calculated to be 3.5 (+14.0/−3.0) × 10³² erg, which is outside the error range of the predicted value of 4.8 (+1.1/−0.9) × 10³³ erg (ref. [27]). The error bars are derived as the model errors (see 'Velocity, mass and kinetic energy: stellar data').
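The quoted value of ~18 for the CME discrimination parameter can be reproduced with the numbers given in the text and Methods; here the maximum observed line-of-sight velocity stands in for V_r_max, which is an approximation on our part.

```python
v_rmax = 510.0            # maximum observed velocity (km/s)
length = 3.9e10 / 1e8     # length scale: 3.9e10 cm expressed in Mm (390 Mm)

param = (v_rmax / 100.0) * (length / 100.0) ** 0.96
print(f"{param:.1f}")     # ~18.8, matching the ~18 quoted and far above 0.8
```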
Methods

TESS light-curve analysis. TESS observed EK Dra (TIC 159613900) in its sectors 14-16 (18 July-6 October 2019) and 21-23 (21 January-15 April 2020). The TESS light curve from the 2 min time-cadence photometry was processed by the Science Processing Operations Center pipeline, a descendant of the Kepler mission pipeline based at the NASA Ames Research Center [12,31]. Extended Data Fig. 1 shows the light curve of EK Dra from BJD 2458945 (JD 2458944.997, 5 April 2020 11:56 UT; Sector 23), and the stellar superflare detected by TESS, the Seimei telescope and the Nayuta telescope in Fig. 1 is indicated with the red arrow in this figure. The quasiperiodic brightness variation is thought to be caused by the rotation of EK Dra with an asymmetrically spotted hemisphere [3,5]; the rotation period is reported to be about 2.8 d (ref. [9]). Although the superflare occurred near the local brightness maximum, some of the starspots are expected to be visible to the observer [32-35]. In Extended Data Fig. 1, other flares are also indicated with black arrows, defined as more than two consecutive observational points whose flaring amplitude is more than three times the TESS photometric error [3,36]. The white-light flare energy was calculated by assuming 10,000 K blackbody spectra [36,37] (see 'Flare energy'). The pixel-level data analysis is described in 'TESS pixel-level data analysis'. The estimated occurrence frequency of superflares (>10³³ erg) in the TESS band was about once per 2 d, which means that about 12 nights of monitoring observations are necessary on average to detect one superflare from a ground-based telescope under a clear-sky ratio of 50%. This implies that our datasets are highly unique.

Spectroscopic data analysis. Here we present the utilization of low-resolution spectroscopic data from KOOLS-IFU [38] on the 3.8 m Seimei telescope [13] at Okayama Observatory of Kyoto University and MALLS [19,39] on the 2 m Nayuta telescope at Nishi-Harima Astronomical Observatory of the University of Hyogo. KOOLS-IFU is an optical spectrograph with a spectral resolution of R (= λ/Δλ) ~ 2,000 covering a wavelength range from 5,800 to 8,000 Å; it is equipped with Ne gas emission lines for wavelength calibration and instrument characterization. The exposure time was set to 30 s for this night. The sky spectrum was subtracted using the sky fibres for each spectrum, and the data reduction follows the prescription in ref. [40]. During this observation, the signal-to-noise ratio (S/N) for one frame was typically 172 ± 6. The observations using the Seimei telescope ended just after 133.7 min (Fig. 1b-d). MALLS is an optical spectrograph with R ~ 10,000 at the Hα line covering a wavelength range from 6,350 to 6,800 Å; it is also equipped with Fe, Ne and Ar gas emission lines for wavelength calibration and instrument characterization. The sky spectrum was subtracted using a nearby region along the slit direction for each observation. The exposure time was set to 3 min for this night. The MALLS data reduction follows the prescription in ref. [19]; the S/N for one frame was typically 86 ± 8 during this observation. For the MALLS data, wavelength corrections were also performed for each spectrum using the Earth's atmospheric absorption lines. We corrected the wavelength for the radial velocity of −20.7 km s⁻¹ of EK Dra on the basis of Gaia Data Release 2 (ref. [41]). Continuum levels are defined by a linear fit over the Hα line-wing windows (6,517.8-6,537.8 and 6,587.8-6,607.8 Å), and the equivalent width is measured as EW = ∫(1 − F_λ/F_0) dλ, where F_0 is the continuum intensity on either side of the absorption feature and F_λ is the intensity across the wavelength range of interest.
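A minimal sketch of this continuum fit and EW measurement follows; `wl` and `flux` are hypothetical stand-ins for one observed spectrum, and the ±10 Å band matches the narrowband EW defined below.

```python
import numpy as np

def equivalent_width(wl, flux):
    """EW = integral of (1 - F_lambda/F_0) dlambda, with the continuum
    F_0 taken from a linear fit over the two line-wing windows."""
    wing = (((wl > 6517.8) & (wl < 6537.8)) |
            ((wl > 6587.8) & (wl < 6607.8)))
    coeffs = np.polyfit(wl[wing], flux[wing], 1)   # linear continuum fit
    f0 = np.polyval(coeffs, wl)

    band = (wl > 6552.8) & (wl < 6572.8)           # Halpha +/- 10 A
    return np.trapz(1.0 - flux[band] / f0[band], wl[band])
```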
The original spectra are shown in 'Stability of pre-flare spectra'. Extended Data Fig. 2 shows the pre-flare-subtracted Hα spectra during and after the superflare on EK Dra with higher time cadence than Fig. 1e. The narrowband Hα EW (Hα − 10 Å to Hα + 10 Å) is used for the measurements of the radiated energy and duration of the Hα flare because of its high S/N, and the broadband Hα EW (Hα − 20 Å to Hα + 10 Å) is used for the measurements of the amount of absorption (that is, mass and kinetic energy).

Solar data analysis. In the main text, we showed the data of a C5.1-class solar flare (that is, the peak GOES soft X-ray flux F_GOES is 5.1 × 10−6 W m−2; hereafter 'Event 1', see Table 1) and the associated filament eruption around 07:56 UT, 7 July 2016, observed using the SDDI 15 (Table 1). This paper used a 70 min time series of the SDDI images taken from 07:30 UT on 7 July 2016 (Supplementary Video 1). As in Extended Data Fig. 4, the C5.1-class flare occurred around an active region, named NOAA 12561, on the solar disk, and was accompanied by a typical filament eruption 15,42. The spectra from the event are integrated over a spatial region that is large enough to cover the visible phenomena (the magenta region in Extended Data Fig. 4a,b). The spectra are reconstructed by using the template solar Hα spectrum convolved with the SDDI instrumental profile. Here, we define L(λ, t, A) as the luminosity at a wavelength λ and time t integrated over the region A (that is, L(λ, t, A) = ∫_A I(t) dA; I(t) is intensity). We now define A_local as the integration region (magenta region in Extended Data Fig. 4a,b), and A_full-disk as the solar full disk. We first obtain the local (partial-image) pre-flare-subtracted spectra ΔS_local, which are normalized by the local (partial-image) total continuum level (L(6,570.8 Å, t, A_local)):

ΔS_local(λ, t) = [L(λ, t, A_local) − L(λ, t_0, A_local)] / L(6,570.8 Å, t, A_local),   (1)

where t_0 is a given time in the pre-flare period. Then, the (virtual) full-disk pre-flare-subtracted spectra ΔS_full-disk are obtained by multiplying by the ratio of the partial-image continuum to the full-disk continuum (total continuum ratio):

ΔS_full-disk(λ, t) = ΔS_local(λ, t) × L(6,570.8 Å, t, A_local) / L(6,570.8 Å, t, A_full-disk),   (2)

and we obtain a virtual pre-flare-subtracted spectrum of this phenomenon as if we observed the Sun as a star. The EW of the Hα line is also calculated using ΔS_full-disk, and we obtain the virtual Sun-as-a-star ΔHα EW (that is, differential Hα flux normalized by the full-disk continuum level).

Stellar velocity, mass and kinetic energy data. For the stellar filament eruption, the velocity is derived by fitting the absorption spectra obtained using the Seimei telescope with the normal distribution N(λ; μ, σ^2), where μ is the mean wavelength and σ^2 is the variance. In Extended Data Fig. 3a, we plotted the temporal evolution of the velocity (v = c(μ − λ_0)/λ_0, where λ_0 is 6,562.8 Å and c is the speed of light) for the fitted absorption feature with the width σ. We only plotted the data whose absorption features are clear enough to fit the shape, applying thresholds of fitted absorption amplitude >0.01 and fitted velocity dispersion <500 km s−1 and >100 km s−1.
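The Gaussian fitting and velocity conversion just described can be sketched as follows; this is a minimal scipy-based illustration incorporating the quality thresholds from the text, and is not the authors' actual fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5     # speed of light [km/s]
LAM0 = 6562.8       # Halpha rest wavelength [Angstrom]

def absorption(lam, amp, mu, sigma):
    # Blueshifted absorption modelled as a negative Gaussian in the
    # pre-flare-subtracted, continuum-normalized spectrum.
    return -amp * np.exp(-((lam - mu) ** 2) / (2.0 * sigma ** 2))

def fit_velocity(lam, dspec):
    """Return (v_shift, sigma_v) in km/s, or None when the fit fails
    the quality thresholds (amplitude > 0.01, 100 < sigma_v < 500 km/s)."""
    (amp, mu, sigma), _ = curve_fit(absorption, lam, dspec,
                                    p0=(0.05, LAM0 - 5.0, 3.0))
    v_shift = C_KMS * (mu - LAM0) / LAM0   # line-of-sight velocity
    sigma_v = C_KMS * abs(sigma) / LAM0    # velocity dispersion
    if amp > 0.01 and 100.0 < sigma_v < 500.0:
        return v_shift, sigma_v
    return None
```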
We can expect that the filament is flying in our direction roughly perpendicular to the stellar surface to some extent, so there would not be a large difference between the radial velocity and the line-of-sight velocity. The radial velocity can be larger than the line-of-sight velocity if we assume a projection effect, while it will be at most about √2 times smaller if the eruption is tilted at 45° from the radial direction; neither case changes our discussion.

[Fig. 4 caption] The solid line indicates the threshold that can roughly distinguish filament eruptions with and without CMEs, derived in ref. 30. The threshold can be expressed as (V_r_max/100 km s−1)(L/100 Mm)^0.96 = 0.8, which was determined using the Linear Support Vector Classification algorithm (see ref. 30 for the detailed method).

The fitting thresholds were determined by trial and error, and we find that many missed detections of absorption features occur when we select threshold values other than these. The amplitude value of 0.01 corresponds to the detection limit given the typical S/N ~ 170 of the Seimei telescope/KOOLS-IFU, and the lower limit of 100 km s−1 is set to avoid detecting sharp noisy signals. About 27% of data points were discarded owing to these thresholds between the initial points (22 min) and the final points (110 min), especially in the later decaying phase. Here, the maximum observed velocity and its error are calculated as 510 ± 120 km s−1 with a width of 220 ± 90 km s−1 from the mean values of μ and σ of the first five points (t = 22-26 min in Fig. 1), respectively. The mean value of the velocity when the absorption becomes strong (t = 25-50 min in Fig. 1) is estimated as 258 km s−1. The plasma mass is simply calculated from the total Hα EW. We used the simple Becker cloud model 43 with an optical depth at the line centre of the ejected plasma τ_0 of 5 (slightly more optically thick than solar filament eruptions; compare ref. 44), a two-dimensional aspect ratio of 1 (that is, cubic), a local plasma dispersion velocity W of 20 km s−1 and a source function S of 0.1 on the basis of solar observations 45. The observed half-width of 220 km s−1 of the stellar blueshifted component is larger by one order of magnitude than the solar value, but here we use the solar value as a template. The dispersion velocity of 220 km s−1 is considered to be the upper limit of the local velocity dispersion because the ejected mass would have a complex two-dimensional velocity distribution, which can cause a larger W in the integrated spectra. First, the modelled EW of enhanced absorption is calculated by using the Becker cloud model, when the plasma velocity v_shift is −258 km s−1, as

EW_model = ∫ (I_0λ − S)(1 − e^(−τ_λ)) / I_0,Cont. dλ,   (3)

where I_0λ is the background intensity, I_0,Cont. is the continuum intensity and τ_λ is the optical-depth profile (peaking at τ_0 at the Doppler-shifted line centre, with width set by W). This is the EW value for an extreme case in which the full disk of the star is completely covered with absorbing, cool ejected plasma. By comparing the modelled EW (equation (3)) with the lowest observed stellar EW value of −0.16 Å (integrated from Hα − 20 Å to Hα + 10 Å; Supplementary Fig. 4c), the cool-plasma filling factor is calculated to be 5.9% of the stellar disk (that is, observed EW/modelled EW; area = 1.6 × 10^21 cm^2). Using the length scale of the ejected plasma, 3.9 × 10^10 cm (= area^0.5), the hydrogen column density is derived as 4.0 × 10^20 cm−2 from the assumed optical depth based on the plasma model 46. In the model of ref. 46, hydrogen/electron density is calculated by assuming an ionization equilibrium for a population of hydrogen atoms due to a balance between recombination and radiative photoionization through the Balmer/Lyman continuum.
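To make equation (3) concrete, here is a small numerical sketch under the stated model parameters (τ_0 = 5, S = 0.1, W = 20 km s−1, v_shift = −258 km s−1). Two assumptions are made for illustration: the optical-depth profile is taken as a Gaussian, and the background profile I_0λ/I_0,Cont. is set to a flat placeholder of 1, whereas the actual calculation uses the observed pre-flare Hα profile; the printed number is therefore only indicative.

```python
import numpy as np

C_KMS = 2.998e5
LAM0 = 6562.8                                   # Angstrom
tau0, S, W, v_shift = 5.0, 0.1, 20.0, -258.0    # model parameters from the text

lam = np.linspace(LAM0 - 30.0, LAM0 + 30.0, 6001)
lam_c = LAM0 * (1.0 + v_shift / C_KMS)          # Doppler-shifted line centre
sig = LAM0 * W / C_KMS                          # Gaussian width set by W
tau = tau0 * np.exp(-((lam - lam_c) ** 2) / (2.0 * sig ** 2))

i0 = np.ones_like(lam)                          # placeholder background profile
ew_model = np.trapz((i0 - S) * (1.0 - np.exp(-tau)), lam)
print(f"modelled EW ~ {ew_model:.2f} A (full disk covered)")
```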
It should be noted that the ionization equilibrium of filaments on active stars may be somewhat different from that inferred from solar observations because of their strong UV radiation, which may affect the evaluation of the mass of the ejecta. By multiplying the hydrogen column density by the filament area, we then obtain a plasma mass of 1.1 × 10^18 g. If the two-dimensional aspect ratio becomes 0.1, similar to a jet-like feature (x width:y width:z depth = 1:0.1:0.1), the estimated mass becomes larger by a factor of 1.78. If the optical depth ranges from 0.8 to 10 (ref. 44), the source function takes values of 0.02 or 0.5 and the dispersion velocity is 10 or 220 km s−1 (ref. 45), then the estimated masses change by a factor of between 0.15 and 4.9. In Fig. 3a, we used a mass of 1.1 (+4.2/−0.9) × 10^18 g for an optical depth of 5, and the uncertainties of the model (0.15-4.9) are used as the error bars, since the model-based errors are expected to be much larger than the observational errors. It should be noted that this mass estimate could be either a significant overestimate of the mass of an affiliated CME, because most of the filament fell back to the star, or a significant underestimate, because most of the CME may actually be hot coronal material rather than cool filament. The plasma kinetic energy is then calculated as 3.5 (+14.0/−3.0) × 10^32 erg by using the velocity of 258 km s−1. The observed maximum velocity was 510 km s−1 in the early phase, so the kinetic energy can be larger by a factor of 4, although the absorption component was weak at that time.
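The mass and kinetic-energy arithmetic above is straightforward to verify; the following sketch simply chains the numbers quoted in this section (column density, area, bulk velocity) and is not a substitute for the cloud-model analysis itself.

```python
M_H = 1.67e-24      # hydrogen atom mass [g]
N_H = 4.0e20        # hydrogen column density [cm^-2]
area = 1.6e21       # cool-plasma area on the disk [cm^2]
v_mean = 258e5      # mean bulk velocity [cm/s]
v_max = 510e5       # early-phase maximum velocity [cm/s]

mass = N_H * M_H * area              # ~1.1e18 g
e_kin = 0.5 * mass * v_mean ** 2     # ~3.5e32 erg
print(f"mass ~ {mass:.1e} g, E_kin ~ {e_kin:.1e} erg")
print(f"early-phase factor: {(v_max / v_mean) ** 2:.1f}x")  # ~4
```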
A CME signature was reported from a blueshifted emission component of the cool X-ray O VIII line (4 MK) in the late phase of a stellar flare on the evolved giant star HR 9024 23. Although the time evolution of the blueshifted velocity was not obtained there, the authors detected a blueshifted emission component with a velocity of 90 km s−1 (escape velocity 220 km s−1) and interpreted it as a CME. Blueshifted plasma components at a few MK are also emitted by the upward flow in confined flare loops (called 'chromospheric evaporation') in the case of solar flares, but the authors excluded this possibility because the other, hotter lines did not show a blueshifted component in the post-flare phase. Although the spectral type of HR 9024 (an evolved giant star) is very different from that of EK Dra and the velocity (90 km s−1) is smaller than in our observation (510 km s−1), the two observations share the trend that the mass-ejection signature is dominant in the post-flare phase. Blueshifted emission components of chromospheric lines have been reported in association with Balmer-line flares, mostly on active M/K dwarfs 18-21,48-57 (see refs. 24,44 for a summary). Time-varying blueshifted hydrogen emission components have also been reported with high time cadence on M dwarfs (for example, refs. 19,21). A similar case is reported for a UV flare on an M dwarf 20,70. These may be evidence of stellar prominence eruptions/CMEs. It seems quite possible that the blueshifted emission lines on M dwarfs are closely analogous to the Hα absorption signatures studied in this Letter. The fundamental difference between G-dwarf and M-dwarf blueshift signatures is that for hotter G dwarfs Hα in an erupting filament will only be detectable in absorption, whereas for the cooler M dwarfs even the quiescent Hα line is in emission, so an erupting filament might be observed in emission as well (compare ref. 44). Blue-wing enhancements of M-dwarf flares are characterized by high velocities of several hundred kilometres per second (sometimes more) 18,53,55, which cannot be explained by the chromospheric evaporation flow associated with the chromospheric-line blueshifts observed in solar flares 16,44,71-74. The high velocities of M-dwarf flares are similar to that detected on EK Dra in this study (~510 km s−1). In addition, some (though not all) of the blueshift events on M dwarfs appear after the impulsive phase 20,21, which shares properties with the filament eruption events on EK Dra and the Sun in this study. Therefore, at present, the blueshifted emission lines on M-type stars are most probably prominence eruptions. In some cases of binary stars, eclipses of the white dwarf component have been interpreted as obscuration by stellar mass ejected from the late-type companion star 61,85. Other than this, pre-flare dips have been reported in stellar flares, suggesting potential prominence eruptions/CMEs 86,87. Radio observations have recently searched for type II radio bursts associated with shocks in front of CMEs as possible indirect evidence of CMEs, but no significant signature has been obtained so far 47,62-65,67-69. Recently, a stellar type IV burst event from the M-type star Proxima Centauri was reported and may be evidence for a stellar CME 56.

Data availability. Source data are provided with this paper. In addition, all raw spectroscopic data are available either in the associated observatory archive (https://smoka.nao.ac.jp/index.jsp for KOOLS-IFU data in Fig. 1 (available after January 2022); https://www.hida.kyoto-u.ac.jp/SMART/T1.html for some of the SDDI data in Fig. 2) or upon request from the corresponding author (for MALLS data in Fig. 1 and full raw data of SDDI). The TESS light curve is available at the MAST archive (https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html).
A case report of successful treatment of necrotizing fasciitis using negative pressure wound therapy

Abstract. Rationale: Necrotizing fasciitis is a destructive tissue infection with rapid progression and high mortality. It is therefore necessary that high-performance dressings be introduced as treatment options. Patient concerns: A female patient, 44 years of age, was admitted to the hospital unit complaining of a lesion in the gluteal region with drainage of a large quantity of purulent secretion, followed by necrosis. Diagnoses: The diagnosis of necrotizing fasciitis was made on the basis of the computed tomography findings in association with the patient's clinical condition. Interventions: Initially, successive debridements of the lower limbs were carried out, along with a primary dressing with enzymatic debriding action, until negative pressure wound therapy was indicated, for a period of 2 weeks in the right lower limb and 5 weeks in the left lower limb, with changes every 72 h. A dressing with saline gauze was used at the end of this therapy until hospital discharge. Outcomes: After the use of negative pressure wound therapy, we observed the presence of granulation tissue, superficialization and reduction of lesion extension. The patient presented good tolerance and absence of complications. Lessons: Negative pressure wound therapy constituted a good option for the treatment of necrotizing fasciitis, despite the scarcity of protocols published on the subject.

Introduction

Necrotizing infections of soft tissues are characterized by acute, diffuse, edematous, suppurative, and disseminated inflammation. An example is necrotizing fasciitis (NF), described as a severe infection characterized by progressive purulent necrosis of the fascia and subcutaneous tissue. [1] In this context, high-performance dressings need to be introduced as treatment options, contributing to the reduction of exudate, odor and pain, reduction of the number of dressing changes and improvement of patients' social interactions, as well as early hospital discharge and return to daily activities. Negative pressure wound therapy was introduced commercially after studies carried out by Argenta and Morykwas in 1997. [2] This therapy is an important adjuvant method in the treatment of wounds, accelerating the process of repair and wound bed preparation until definitive coverage by one of several methods of tissue reconstruction. [3] Thus, this report aims to describe the use of negative pressure wound therapy in a case of NF.

Case report

A female patient, 44 years of age, married, a housewife, resident in Campo Grande-MS, with a diagnosis of systemic lupus erythematosus (SLE) for 24 years and systemic arterial hypertension for 10 years, was in regular treatment with prednisone 20 mg/day, captopril 50 mg/day and acetylsalicylic acid once a day. She was admitted to the hospital unit on September 5, 2015, complaining of a purulent lesion in the right gluteus that had started in July 2015, with symptoms of edema, heat, and pain. The patient related that cyclophosphamide pulse therapy had been carried out on August 24, 2015, due to a clinical condition of vasculitis, and that on September 1, 2015, there was skin disruption in the gluteal region with drainage of a large quantity of purulent secretion, followed by necrosis. During admission, she also reported pain in the right knee with difficulty performing extension, edema, and skin hyperpigmentation.
She reported prior use of ciprofloxacin 500 mg every 12 h for 7 days, though without improvement of the abscess. Use of the patient's images was authorized in written informed consent, and the Research Ethics Committee linked to the Federal University of Mato Grosso do Sul approved the publication of this case report. During the hospital stay, she denied alcoholism, smoking, drug allergy, and family morbid history. The patient presented regular general and nutritional conditions; she was conscious and oriented, and mucous membranes were hydrated and pallid +1/+4. She was acyanotic and anicteric; heart rate was 128 beats per minute, respiratory rate 20 breaths per minute, and axillary temperature 39.5°C, with oxygen saturation at 96%. There were no alterations on cardiopulmonary auscultation. The abdomen was flat and bowel sounds were present; it was flaccid, painless to palpation and without masses or visceromegalies. She presented crusted lesions without secretion in the posterior region of the forearm; abscesses in the right and left posterior coxofemoral regions, with a necrotic crust and purulent secretion draining through an orifice; and abscesses in the gluteus and in the right and left perineal regions, with areas of coagulation necrosis and fetid odor (Fig. 1). Pain was intermittent and the area sensitive to touch. There was also the presence of necrotic tissue and fibrin spots, as well as dyschromic alteration of the perilesional skin. She underwent computed tomography of the lower limbs on September 6, 2015, which did not identify soft-tissue lesions or bone involvement. In association with the clinical condition, NF was diagnosed, and the patient underwent emergency surgical debridement for removal of necrotic tissue in the right lower limb and empirical antibiotic therapy with meropenem 1 g every 8 h and vancomycin 500 mg every 12 h, both for 18 days. Material from the right gluteal lesion was collected during the procedure, with a positive culture result for Staphylococcus aureus. Figure 2 presents the aspect of the lesion after urgent surgical debridement. A conventional dressing was used on the post-debridement lesion and was applied aseptically with papain solution 10% and a silver polyethylene mesh, with changes every 48 h and daily changes of the secondary dressing. On September 17, 2015, surgical debridement of the necrotic tissue of the medial and posterior thigh region and right gluteus was carried out. On September 29, 2015, the patient presented lesions in both posterior thigh and gluteal regions, granulation tissue in the right lower limb and necrotic tissue in the left lower limb (coagulation and liquefaction), as presented in Figure 3. A conventional dressing was instituted, with cleaning of the lesions using physiologic solution 0.9% under conditions of asepsis and antisepsis, as well as topical chlorhexidine digluconate solution 2% in the perilesional area, instrumental debridement of necrotic tissue, application of papain solution 10% in the left lower limb, and an occlusive dressing with medium-chain triglycerides on the right lower limb. Dressings were carried out under analgesia, with tramadol hydrochloride 50 mg administered intravenously 30 min before each dressing. Afterwards, there were 3 surgical approaches, on October 3, 9, and 20, 2015. On October 20, a surgical incision was carried out in the gluteus and extended to the distal region of the thigh (posterior aspect), as shown in Figure 4.
Occlusive dressings with cleaning were applied between the debridements. The dimensions of the wound before negative pressure wound therapy installation were 35 centimeters (cm) in length and 15 cm in width in the right lower limb, and 37 cm in length and 8 cm in width in the left lower limb. Cleaning of the lesion with physiological solution 0.9% and degerming chlorhexidine, as well as friction with gauze, soap removal, and drying of the area, was performed before the installation of negative pressure wound therapy. The negative pressure wound method was applied by placing a hydrophobic polyurethane sponge with silver on the wound bed, covering its whole extension. It was sealed with a transparent film, thus obtaining a hermetic seal, and connected to the suction pump in continuous mode at a pressure of 125 mm Hg. The dressing was changed every 72 h with aseptic technique. Interruption of therapy in the left lower limb occurred on October 30, 2015, for removal of suture stitches, and on November 4, 2015, therapy was suspended in the right lower limb, with the application of a conventional daily dressing with non-adherent gauze. On November 6, 2015, therapy in the left lower limb was reinstalled in intermittent mode at 125 mm Hg, with suspension on November 30, 2015, keeping the conventional dressing method with humid saline gauze until hospital discharge on December 23, 2015. The dimensions of the wound after negative pressure wound therapy suspension were 18 cm in length and 6 cm in width in the right lower limb, and 10 cm in length and 12 cm in width in the left lower limb (Fig. 6). The observed improvements were both wounds with granulation tissue throughout their length, with healing by first and second intention; absence of exudate; epithelial, irregular and preserved edges; intact periphery; and absence of phlogistic signs. A home visit was performed 24 days after hospital discharge, when complete healing of the lesions in both lower limbs was verified.

Discussion

NF is a severe and potentially fatal infection of soft tissues characterized by rapid, progressive necrosis of the fascia and subcutaneous tissue along fascial planes. The infection site is mainly the lower extremities, followed by the abdomen and perineum. [4] Incidence is 0.4 to 1 person per 100,000 per year, and NF is responsible for elevated mortality and morbidity rates. Mortality associated with NF ranges from 11% to 36%. [5] The few cutaneous symptoms reported by the patient at the onset of NF occur because the infection is deep and generally disproportionate to the skin lesion. For that reason, clinical judgement is the most important element for diagnosis. The etiology is not yet fully understood and is not identified in many cases. However, it might result from a previous history of trauma and certain conditions, such as immunosuppression, diabetes mellitus, malignancy, drug abuse, and kidney disease. [6] The patient did not report the history of prior trauma common in most cases, but the presence of previous vasculitis and the use of immunosuppressant agents might have been factors that contributed to the development of the infectious state. Despite the greater propensity of patients with SLE to develop common and opportunistic infections, NF is rare in this group of patients, with few cases reported in the literature, [7][8][9] but there are cases of NF in other rheumatic diseases, such as polymyositis, systemic sclerosis, rheumatoid arthritis, dermatomyositis, and ankylosing spondylitis.
[10,11] Primary management of NF involves urgent surgical debridement of the affected tissues. The aim of surgical intervention is to remove all necrotic tissue, including muscle, fascia, and skin, in order to preserve the viable skin and achieve hemostasis. [12] Treatment of NF includes the intravenous administration of broad-spectrum empirical antibiotics, taking into account the microbiological classification. Type I, with the greatest prevalence, corresponds to polymicrobial NF (presence of 2 or more agents, which can be gram-positive, anaerobic or gram-negative) and affects mainly patients with comorbidities. Type II occurs less frequently and affects mainly patients who are young or without comorbidities. It is caused by beta-hemolytic Streptococcus group A or S aureus, which may also occur in association. Type III is extremely rare, its evolution is the most aggressive (a septic condition with multiple organ failure in less than 24 h), and it is caused by Vibrio vulnificus, associated with contact with sea water and marine animals. [9] Vancomycin combined with a carbapenem, in this case meropenem, was the empirical choice of medication due to the purulent secretion and the wound's phlogistic signs, and both were continued after the culture results. Postoperative wound management and adequate nutritional support are vital for the patient's survival. [12] When associated with surgical debridement, the conventional dressing had an enzymatic debriding action. Tissue invasion associated with necrosis, severe septic systemic repercussions and the patient's multiple baseline comorbidities attest to the gravity of the condition. Given the constant difficulty of achieving good results in the treatment of NF lesions, the utilization of negative pressure wound therapy was proposed as an auxiliary method of treatment. [13] The use of negative pressure wound therapy in the treatment of NF has been efficient. [13,14] Complex wounds of other etiologies are also successfully treated with this technique, such as burns, open fractures, fasciotomies, diabetic foot wounds, and pressure ulcers. [3,14,15] In 4 cases of NF, treatment with negative pressure wound therapy was applied immediately after debridement to avoid the future formation of pseudo-eschars and necrotic membranes that could require further debridements. After this technique was adopted, the patients' general condition and wound condition dramatically improved. [15] Furthermore, a prospective study of 35 patients with Fournier's syndrome treated with negative pressure wound therapy demonstrated a significant decrease in mortality when compared to conventional dressings. [16] The application of negative pressure wound therapy provides uniform negative pressure to the wound bed, and its mechanism of action eliminates extravascular edema and improves the microcirculatory blood supply. Besides this, it gives rise to microdeformations of the cellular cytoskeleton that are responsible for initiating a potent stimulus to cellular proliferation and angiogenesis. It also stimulates the formation of granulation tissue and reduces the wound size and bacterial load. However, in extensive wounds, the foam may cause persistent infections. Other disadvantages of the device include the high cost of material, inconvenience for ambulation, labor-intensive dressing changes, difficulty in maintaining a hermetic seal, and the pain and discomfort caused by suction.
Even so, the total cost of negative pressure wound therapy can be 3 times lower in comparison with traditional wound treatment for postoperative patients in long-term acute care. [3,17] The therapy may be applied with different clinical objectives: as a bridge until definitive surgical closure, or to progress wound closure by second intention. [3,17] However, just as important as beginning the therapy is the decision about ending it. The therapy allows keratinocyte movement to become organized and re-epithelialization to proceed more rapidly, increasing healing speed and encouraging the growth of healthy granulation tissue. [17] When this stage is reached, it is important to discuss the possibility of definitive closure procedures, for example, cutaneous graft, local skin flap, or approximation of the wound edges. The ideal wound for repair is one in which we observe granulation tissue with little or no fibrin, and with non-existent or low output of a serous aspect. Small wounds are likely to progress by second intention, but an extensive lesion would need bed preparation for a graft, which should be carefully indicated. [18,19] We chose progression by second intention for several reasons, including lack of access to surgical closure, contraindication of surgery for the patient, or contraindication of surgical wound closure. Absence of an adequate flap or presence of complex comorbidities makes the patient an unfit candidate for reconstruction. In these cases, negative pressure wound therapy may be the ideal solution, mainly in lesions that involve extensive bodily areas. [17] Many clinical circumstances do not allow wound closure in the first surgical procedure, including the patient's serious condition, significant wound infection, the need for additional debridement procedures, and the need for edema reduction, among other factors. [17][18][19] In the described case, a graft was contraindicated due to the clinical condition, with relevant autoimmune characteristics and malnutrition, associated with the absence of an adequate flap. Thus, therapy was extended until superficialization of the wound tunnel, which may also be pursued when there is exposure of bone tissue and tendon, providing these structures with granulation tissue formation. The hermetically closed dressing, changed every 72 hours, in contrast with wound treatment by conventional technique, means the covering is not altered daily, keeping the wound in an isolated environment, impeding contamination and offering better comfort to the patient. This therapy is an excellent management tool for post-surgical wounds and is able to accelerate healing time, reducing the number of dressing changes. Studies are not conclusive as to the level of negative pressure that should be applied in the dressing, but early granulation tissue formation is evident under a pressure of 125 mm Hg when compared to pressures of 25 mm Hg and 500 mm Hg. Vacuum dressing at 125 mm Hg has the greatest use in clinical practice, [5,17] just as it had in the described case, with good results obtained for healing and wound granulation. Because of the cost, though, it is necessary that use of the therapy be standardized, contemplating mainly complex wounds of difficult management, in order to help granulation tissue formation and shorten the hospital stay.
Negative pressure wound therapy has proved to be a safe and effective alternative, with satisfactory scar evolution, superficialization of the wound tunnel, formation of granulation tissue and epithelialization, and approximation of the edges, ensuring perilesional skin integrity with less manipulation of the patient during dressing changes.
DNA Fiber Assay for the Analysis of DNA Replication Progression in Human Pluripotent Stem Cells

Human pluripotent stem cells (PSC) acquire recurrent chromosomal instabilities during prolonged in vitro culture that threaten to preclude their use in cell-based regenerative medicine. The rapid proliferation of pluripotent cells leads to constitutive replication stress, hindering the progression of DNA replication forks and in some cases leading to replication-fork collapse. Failure to overcome replication stress can result in incomplete genome duplication, which, if left to persist into the subsequent mitosis, can result in structural and numerical chromosomal instability. We have recently applied the DNA fiber assay to the study of replication stress in human PSC and found that, in comparison to somatic cell states, these cells display features of DNA replication stress that include slower replication fork speeds, evidence of stalled forks, and replication initiation from dormant replication origins. These findings have expanded on previous work demonstrating that extensive DNA damage in human PSC is replication associated. In this capacity, the DNA fiber assay has enabled the development of an advanced nucleoside-enriched culture medium that increases replication fork progression and decreases DNA damage and mitotic errors in human PSC cultures. The DNA fiber assay allows for the study of replication fork dynamics at single-molecule resolution. The assay relies on cells incorporating nucleotide analogs into nascent DNA during replication, which are then measured to monitor several replication parameters. Here we provide an optimized protocol for the fiber assay intended for use with human PSC, and describe the methods employed to analyze replication fork parameters. © 2020 Wiley Periodicals LLC.
INTRODUCTION

Human pluripotent stem cells (PSC) possess the ability to endlessly renew and differentiate into any cell type of the body, making them a promising resource for cell-based regenerative medicine (Takahashi et al., 2007; Thomson et al., 1998). To capitalize on this potential, researchers may need to expand PSC for long periods of time in a genetically stable and undifferentiated state. However, recurrent genetic changes have been reported to arise during prolonged in vitro culture (Draper et al., 2004; Olariu et al., 2010). These changes occur through mutation and subsequent selection of variant cells in which the genetic change provides a growth advantage (Avery et al., 2013; Blum, Bar-Nur, Golan-Lev, & Benvenisty, 2009). Although extensive resources have been applied to understanding the mechanism of selection, comparatively little is known about the mechanism of mutation in these cells. DNA damage and genome stress, including replication stress, can lead to genetic instability like that often observed in cancer (Burrell et al., 2013). Previous studies have highlighted that human PSC are prone to replication stress and DNA damage (Halliwell et al., 2020; Simara et al., 2017; Vallabhaneni et al., 2018), which may also drive the high frequency of mitotic errors that has been reported elsewhere (Lamm et al., 2016; Zhang et al., 2019). Yet, despite this, the underlying mutation rate in these cells is low (Thompson et al., 2020). This apparent contradiction can be reconciled by observations that cell cycle checkpoints such as CHK1, responsible for relay signaling in response to replication stress, are relaxed in human PSC, causing these cells to die in response to genomic damage (Desmarais, Unger, Damjanov, Meuth, & Andrews, 2016). These characteristics may reflect the requirements of early embryogenesis, where a relentless need to proliferate while maintaining genetic stability is critical to ensure successful development. Nevertheless, this system is not perfect, and mutations still arise both in vivo and in vitro. To better understand the origins of genome instability in human PSC, the appropriate assays must be optimized for use in these systems. The DNA fiber assay has become the gold standard for directly monitoring DNA replication fork dynamics, and is thus an important tool for understanding DNA replication stress. However, its application has been primarily focused on understanding replication in cancer cell lines (Nieminuszczy, Schwab, & Niedzwiedz, 2016). This article describes the basic protocols required to perform the DNA fiber assay in human PSC: the sequential pulse labeling of actively replicating DNA (Basic Protocol 1), spreading of labeled DNA fibers onto glass slides (Basic Protocol 2), immunolabeling of nascent DNA fibers (Basic Protocol 3) for visualization by confocal microscopy (Support Protocol 1), and data analysis and replication event characterization (Support Protocol 2) (Fig. 1). Using our protocol, DNA fiber length, replication fork speed, inter-replication origin distance, and fork symmetry can be measured, and replication events including stalled/terminated forks, new origins, bidirectional forks, and fork terminations can be quantified.
The protocols in this article have been successfully used to monitor replication dynamics in several human iPSC and ES cell lines cultured in feeder-free conditions using mTeSR, Nutristem, or E8 on a matrix of recombinant vitronectin-coated plates (Halliwell et al., 2020).

DNA FIBER LABELING

This protocol describes the pulse labeling of DNA and harvesting of the labeled human PSC intended for use in the DNA fiber assay. Human PSC should be seeded at least 48 hr before starting the experiment. Pulse labeling is performed sequentially with two thymidine analogs, chlorodeoxyuridine (CldU) and iododeoxyuridine (IdU), for 20 min each (Fig. 1A). DNA labeling exploits the ability of in vitro-cultured cells to incorporate nucleotide analogs into newly replicating DNA strands. To ensure consistent and robust labeling, the cells should be at ~50% confluency at the start of the experiment: this will ensure the cells are still in the growth phase and replication is not suppressed. Further, the labeling time should be strictly monitored. A deviation in timing will introduce inaccuracy and confound experimental results. Once labeled, the cells should be dissociated into single cells using TrypLE, to minimize stress and allow for accurate cell counting. Dissociated cells should be kept on ice to inhibit further enzymatic activity before spreading (Basic Protocol 2). This protocol should produce a homogeneous single-cell suspension of pulse-labeled cells in ice-cold PBS for DNA spreading (Basic Protocol 2).

2. Aspirate the cell culture medium from the T-12.5 flask of human PSC.
3. Add 1 ml TrypLE to the T-12.5 flask and incubate for 3 min at 37°C in a 5% CO2 incubator.
4. Add 4 ml of human PSC culture medium to the T-12.5 flask to detach the cells. Transfer the cell suspension into a 15-ml tube.
5. Count the total cell number and centrifuge the cells 3 min at 300 × g, room temperature, then resuspend the pellet in 1 ml human PSC culture medium.
6. Seed 80,000 cells per well of the 6-well plate prepared in step 1. Culture for 72 hr (minimum of 48 hr, although cell numbers will need to be adjusted accordingly) with daily batch feeding, removing the Y-27632 after 24 hr.
11. Add IdU stock solution to the seeded cells for a final concentration of 250 µM (Fig. 1A). The concentration of IdU is 10-fold higher than the concentration of CldU. This is to ensure that IdU displaces CldU during the second pulse-labeling step. The addition of the second analog must be performed promptly following the 20-min incubation with the first analog.
13. Aspirate the medium.
14. Wash twice, each time with 2 ml ice-cold PBS, aspirating after each wash. Steps 13 to 14 should be performed rapidly to avoid over-labeling of nascent DNA.
15. Add 1 ml pre-warmed TrypLE solution to each well. Coat the wells thoroughly and aspirate off the excess solution.
17. Aspirate off the excess TrypLE and add 0.5 ml ice-cold PBS to release the cells from the 6-well plate. Transfer the cell suspension to a 15-ml conical tube.
18. Count the total cell number and dilute with ice-cold PBS to 400,000 cells per ml.
19. Keep on ice and spread (Basic Protocol 2) within 30 min.
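As a hedged illustration of the labeling arithmetic: the protocol specifies only the final concentrations (IdU at 250 µM, tenfold above CldU, implying CldU at 25 µM); the stock concentrations in this sketch are assumptions chosen for round numbers, not values from the protocol.

```python
def stock_volume_ul(final_uM: float, stock_mM: float, medium_ml: float) -> float:
    """Volume of analog stock (in µl) giving final_uM in medium_ml of
    culture medium (C1*V1 = C2*V2, with unit conversions folded in)."""
    return final_uM * medium_ml / stock_mM

MEDIUM_ML = 2.0   # per well of a 6-well plate
print(f"CldU (25 uM final): {stock_volume_ul(25, 2.5, MEDIUM_ML):.0f} ul of assumed 2.5 mM stock")
print(f"IdU (250 uM final): {stock_volume_ul(250, 25, MEDIUM_ML):.0f} ul of assumed 25 mM stock")
```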
DNA SPREADING

This protocol describes the technique of DNA spreading (Parra & Windle, 1993). Following DNA labeling (Basic Protocol 1), a small droplet of cell suspension is lysed on a glass slide (Fig. 1B), and the slide is then tilted to allow spreading. The droplet of cell lysate solution generates cohesive tension with the slide as it runs its length, stretching the DNA into fibers that can later be stained and visualized by immunofluorescence (Basic Protocol 3 and Support Protocol 1). The major factor requiring optimization in this protocol is the type of glass slide used. Before initiating this protocol, different brands and batches of slides should be tested to ensure the droplet runs slowly down the length of the glass slide when tilted at 25°-40°. If this protocol is conducted properly, a droplet of cell lysate solution should run the length of a glass slide in 3 to 5 min. Once dried, the path of the droplet should leave behind a cloudy precipitate. Depending on the temperature and humidity, it may be necessary to alter the volume of the spreading buffer or the angle at which the slide is tilted to ensure that the droplet runs at a slow, constant rate.

Materials
Super premium microscope slides (BDH Laboratory Supplies); other slides are also suitable, although each batch should be tested to optimize spreading
The lid of a multi-well plate (or something similar) to hold the tilted slides in place

Cell lysis
1. Working with one sample at a time, pipette 2 µl of ice-cold cell suspension at the top of each of 3-5 microscope slides, depending on the number of replicates to be performed (Fig. 1B). Be sure that the droplet does not touch the frosted end of the slide, as this will inhibit its ability to spread. It is a good idea to mark the position of the cell suspension with pencil, to aid in finding the fibers on the microscope. This step will require optimization. Look for the edges of the droplet beginning to dry and for the droplet to become tacky. The time that this takes may vary depending on room temperature and humidity. However, do not allow the droplet to dry completely.
3. Add ~7 µl of spreading buffer and stir with a pipette tip (Fig. 1B). To ensure that the droplet spreads down the slide slowly (steps 5 to 6), the volume of spreading buffer may need to be adjusted. IMPORTANT NOTE: Stir with the pipette tip; do NOT pipette up and down. Incubate for 2 min.

Spreading
5. Tilt each slide at an angle of 25° to 40° and hold in place on the lid of a multi-well plate.
6. Let the droplet run down the slide slowly and at a constant speed. The droplet should reach the bottom edge after 3 to 5 min (Fig. 1C). It is important to ensure that the droplet spreads slowly and at a constant speed, so that the fibers do not clump and become uncountable during the analysis steps.
7. Lay the slides flat and allow them to air dry completely (approximately 15 min).
8. Go back to step 1 and prepare the next sample.

Fixation
9. Using a 1-ml pipette, gently add 1 ml methanol/acetic acid fixative to the bottom right-hand corner of the slide and allow it to spread over the entirety of the slide (Fig. 1D).
11. Tilt the slide to allow the excess fixative to run off, and allow the slide to air dry completely. At this point, the slides can be stored at 4°C overnight.

IMMUNOSTAINING

This protocol describes the immunolabeling of DNA fibers spread onto glass slides. The protocol begins with an acid treatment to denature the DNA fibers, followed by blocking of nonspecific binding and labeling of the CldU- and IdU-labeled fibers with rat and mouse anti-BrdU primary antibodies, respectively. Detection of the labeled DNA fibers is possible by immunostaining with fluorescently labeled secondary antibodies (Fig. 1E).
Following this immunostaining protocol, it will be possible to detect the labeled DNA fibers by confocal microscopy. The labeled DNA should be detectable as bi-labeled fibers in the 555 and 488 Alexa Fluor visible spectrum. Further details and expected results from this protocol can be found in the Understanding Results section.

Materials
Glass staining troughs with slide racks (e.g., Sigma, BR472200)
Coplin staining jar (e.g., Sigma, S5766)
Coverslips, 60 × 22 mm (ThermoFisher Scientific, BB02200600A113MNT0)

Wash, denature, and block
1. Wash slides twice with double-distilled H2O in a staining trough. All wash steps should be performed by placing slides in a slide rack, submerging in the wash solution within a staining trough, and agitating back and forth twice.
2. Rinse the slides once in 2.5 M hydrochloric acid in a staining trough. All rinse steps should be performed by dipping the slides, held in a slide rack, in and out of the rinse solution within a staining trough.
3. Denature in 2.5 M hydrochloric acid in a staining trough for 1 hr. Wash the slides twice with blocking solution in a Coplin jar. To wash, place the slides inside the Coplin jar, then add blocking solution; after the first wash, pour out the buffer and replace it with fresh buffer.
6. Incubate the slides in fresh blocking solution in the Coplin jar for 1 hr.

Primary antibody immunolabeling
7. Make a fresh solution of primary antibodies by mixing rat monoclonal anti-BrdU antibody (1:500) and mouse monoclonal anti-BrdU antibody (1:500) in blocking solution. Antibodies should be titered, as the optimal concentration will depend on the source and nature of the antibody.
8. Using a 1-ml pipette, gently add 750 µl of the antibody solution to the bottom right-hand corner of the slide and allow it to spread over the entirety of the slide. To stop the solution from evaporating, this step should be performed on a damp paper towel covered with a plastic lid. Alternatively, a slide staining tray can be used (e.g., Sigma, Z670146).
10. Rinse three times with 1× PBS in a staining trough. Remove the slides and stand them upright on a paper towel to remove any excess PBS.
11. Using a 1-ml pipette, gently add 1 ml of 4% PFA to the bottom right-hand corner of the slide and allow it to spread over the entirety of the slide.
13. Rinse three times with 1× PBS in a staining trough.
14. Wash three times with blocking solution in a Coplin jar.

Secondary antibody immunolabeling
All steps should be performed in the dark to reduce photobleaching of the fluorophore labels. Use a dark staining trough and Coplin jar, or wrap in aluminum foil to block the light. Antibodies should be titered, as the optimal concentration will depend on the source and nature of the antibody.
16. Using a 1-ml pipette, gently add 750 µl of the secondary antibody mix to the bottom right-hand corner of the slide, and allow it to spread over the entirety of the slide (Fig. 1E).

MICROSCOPY/DATA ACQUISITION

In support of the basic protocols above, we provide microscopy parameters and data-acquisition instructions to facilitate accurate data analysis (Support Protocol 2). It is important to take pictures of fibers from across the entire slide where fibers do not overlap, in order to ensure that robust measurements will be obtained. In total, 150-200 fibers should be acquired per experimental condition to enable reliable estimation of replication parameters. Three or more independent repeats should be performed.
Microscopy parameters
The parameters in Table 1 have provided robust measurements when using an Olympus FV1000 confocal microscope.

DATA ANALYSIS

This protocol describes the data analysis and replication-event characterization that can be achieved from DNA fiber assays as described above. In particular, we describe how to measure DNA fiber length, replication fork speed, fork symmetry, and inter-replication origin distance, and how to detect and quantify different replication events.

Figure 2: Representative image showing a single field of DNA fibers. The image was taken using the described microscopy parameters (see Support Protocol 1 and Table 1). The fibers within the field show minimal overlap and can be easily measured. Analysis of fields with a high density of DNA fibers can confound measurements, as they are likely to overlap, hindering their measurement from end to end. The DNA fibers pictured were labeled sequentially with CldU and IdU for 20 min each. Scale bar = 50 µm.

In each case, a total of 150-200 fibers should be analyzed per condition for a reliable estimation of fiber length, and to account for the range of fiber lengths that can be attributed to the varied dynamics of replication forks. Only clear fibers that do not overlap should be included (Fig. 2). It is also important to image fibers from across the entire slide to ensure robust measurement of replication fork dynamics. Data should be plotted as scatter dot plots or in box-and-whiskers plots. Statistical significance between two groups is normally assessed using the Mann-Whitney U test (unpaired and nonparametric).

Determining DNA fiber length (µm)
The following steps determine the total length covered by the replication fork within the 40 min of pulse labeling.
1. Open each TIFF image from Support Protocol 1 in ImageJ.
2. In the Analyze menu, select Measure.
3. For each fiber, measure the length of the CldU (red) followed by the IdU (green).
4. Find the sum of the red and green lengths (Fig. 3A).

Determining replication fork speed of a progressing fork
The following steps determine the average speed of the replication fork over the two sequential 20-min pulse labelings.
6. Find the average length of each of CldU and IdU.
8. To convert from µm/min to kb/min, a commonly used conversion factor of 2.59 kb per µm (Jackson & Pombo, 1998; Fig. 3A) can be applied.

Fork symmetry
The following steps analyze the synchronized progression of sister forks emanating from a single origin. Fork asymmetry suggests replication-fork stalling events. This analysis is performed on fibers that have contiguous IdU-CldU-IdU (green-red-green) signals.
10. Measure the length of the two IdU (green) tracts emanating from a single CldU (red) origin (Fig. 3B).
11. Calculate the ratio between the two green tracts.

Inter-origin distance
The following steps determine the distance between origins in two consecutive DNA tracts. Care should be taken when measuring the inter-origin distance using the DNA spreading technique, selecting only consecutive DNA tracts that are certain to be on the same DNA fiber. Ideally, DNA should be counterstained with a fluorescent dye, such as YOYO-1.

Replication events
The frequency of replication events, detailed in Figure 3, can be quantified and normalized to the total number of fibers present. These measurements can provide insight into the replication response to loss of a protein or to replication stress.
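A minimal Python sketch of the fork-speed and fork-symmetry calculations described above, using the 20-min pulses and the 2.59 kb/µm conversion factor from the text; the example tract lengths are illustrative only.

```python
KB_PER_UM = 2.59    # conversion factor (Jackson & Pombo, 1998)
PULSE_MIN = 20.0    # duration of each pulse label [min]

def fork_speed_kb_per_min(cldu_um: float, idu_um: float) -> float:
    """Average fork speed over the two 20-min pulses for one fiber:
    mean tract length (µm) converted to kb/min."""
    mean_um = (cldu_um + idu_um) / 2.0
    return mean_um * KB_PER_UM / PULSE_MIN

def fork_symmetry(left_idu_um: float, right_idu_um: float) -> float:
    """Ratio of the two IdU tracts of a bidirectional fork;
    values far from 1 indicate a fork-stalling event."""
    return max(left_idu_um, right_idu_um) / min(left_idu_um, right_idu_um)

# Example: tracts of 5.0 and 5.8 µm give ~0.70 kb/min, consistent with
# the 0.5-0.75 kb/min range reported under Understanding Results.
print(round(fork_speed_kb_per_min(5.0, 5.8), 2))
```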
Blocking solution
Thoroughly dissolve 5 g BSA and 0.5 ml of Tween 20 in 500 ml of 1× Dulbecco's PBS (see recipe). Make fresh and chill to 4°C prior to use.

Figure 3: Replication dynamics analysis and replication event characterization. Representative images of DNA replication events, with the directionality of the CldU and IdU labeling presented. A description of each replication event is shown and, where relevant, the formula required for the calculation of the specified replication dynamic. (A) Ongoing replication fork. Analysis of these forks can be used to determine the fiber length and replication fork speed using the formulae shown. (B) Example of a bidirectional replication fork. Analysis of these fibers allows for the quantification of fork symmetry; loss of symmetry indicates that replication fork stalling has occurred. (C) Double replication origins allow for the distance between replication initiation sites to be measured. Inter-origin distance is a measure of the density of origin firing, which is often a symptom of replication stress in response to oncogene activation. (D) A representative image of replication fork termination: two forks travelling in opposite directions, which meet and terminate fork progression. (E) The image shows the firing of a replication fork from a new origin of replication, which has occurred during the second pulse labeling with IdU. The presence of IdU-only tracts is a sign of origin firing, as the fork was not progressing during the period of the first labeling. (F) CldU-only labeled fibers indicate the stalling or collapse of replication forks prior to the addition of the second, IdU label. Quantification of CldU-only fibers, relative to the total number of fibers, can be used to indicate replication stress, and may be a prerequisite to genome instability. (G) Hydroxyurea arrests DNA replication in human PSC. The image shows the severely slowed or stalled progression of replication forks of hydroxyurea-treated human PSC. Scale bars = 10 µm.

Hydrochloric acid, 2.5 M
Slowly add 83.2 ml hydrochloric acid (Sigma, 320331) to 316.8 ml double-distilled H2O. Prepare fresh for each use.

Methanol/acetic acid fixative
Mix methanol (Sigma, 179337) and acetic acid (Sigma, 695092) at a 3:1 ratio. For example, to make 50 ml of fixative, mix 37.5 ml of methanol with 12.5 ml acetic acid. Store at room temperature in an air-tight container for up to 1 year.

Paraformaldehyde solution, 8%
Add 40 g of paraformaldehyde (Sigma, P6148) to 400 ml double-distilled H2O. Heat the solution to 60°C and stir at a medium speed under a fume hood. Add ~10 drops of 1 N NaOH to dissolve the PFA granules. Continue to mix for 2 hr. Allow the solution to cool, and add double-distilled H2O to a total volume of 500 ml. Aliquot and store at −20°C for up to 1 year.

Rho-associated, coiled-coil containing protein kinase 1 inhibitor (Y-27632)
Dissolve Y-27632 dihydrochloride (TOCRIS, 1254) in DMSO (Sigma, 472301) for a 10 mM stock solution. Store for up to 1 year at −20°C.

Spreading buffer
Combine solutions 1 and 2 and add 0.5 g SDS. Make the solution up to 100 ml with double-distilled H2O and vortex to dissolve. Store at room temperature for up to 1 year.

Vitronectin-coated 6-well plate
Thaw 120 µl vitronectin recombinant protein (ThermoFisher Scientific, A14700) at room temperature and combine with 11.88 ml of 1× Dulbecco's PBS (see recipe). Mix well and add 2 ml to each well of a 6-well cell culture plate. Incubate at room temperature for 1 hr. For longer-term storage, keep at 4°C and use within 2 weeks.
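A quick sanity check of the 2.5 M hydrochloric acid recipe above. The recipe gives only volumes (83.2 ml acid into 400 ml total), so the concentrated-stock molarity of ~12 M (37% w/w HCl) used here is an assumption for illustration.

```python
# C1 * V1 = C2 * (V1 + V2): dilution of concentrated HCl
stock_M = 12.0                      # assumed molarity of concentrated HCl
v_acid_ml, v_water_ml = 83.2, 316.8
final_M = stock_M * v_acid_ml / (v_acid_ml + v_water_ml)
print(f"~{final_M:.2f} M")          # ~2.50 M, matching the recipe label
```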
COMMENTARY

Background Information
Human PSC hold great promise in the field of regenerative medicine. Yet, in order to reach the full potential of these cells, we must first capitalize on their ability to rapidly and endlessly renew to generate large numbers of genetically stable and undifferentiated cells. However, it has become apparent that human PSC acquire genetic changes during long-term culture, which raises concerns over the safety of stem cell-derived products that are destined for the clinic (Draper et al., 2004; Olariu et al., 2010). The recurrent nature of certain karyotypic changes, such as amplifications of chromosomes 1q, 12p, 17q, and 20q, has highlighted that certain mutations provide a growth advantage to the variant cell, which becomes selected for in a culture over time (Amps et al., 2011; Baker et al., 2016; Olariu et al., 2010). Although the mechanism of selection is now well defined, relatively little is known about the underlying mutational mechanisms, although the observations of replication stress and genomic damage in human PSC are similar to the oncogene-induced model of genetic instability in cancer development and progression (Halazonetis, Gorgoulis, & Bartek, 2008). The self-renewal of human PSC is characterized by an abbreviated G1 phase that bypasses the Rb/E2F checkpoint and is driven by high expression of cyclin D2 and constitutive expression of cyclin E (Becker et al., 2006; Filipczyk, Laslett, Mummery, & Pera, 2007). Maintaining this rapid proliferation over extensive culture periods may expose these cells to replication stress, characteristics of which, such as reduced replication rates, have been defined in human PSC using the DNA fiber assay (Halliwell et al., 2020). Furthermore, we have recently identified regions of microhomology at the breakpoints of chromosome 20 tandem amplifications, which implicates template-switching mechanisms at stalled or collapsed forks as responsible for these mutations (J.A. Halliwell, D. Baker, P.W. Andrews, I. Barbaric, unpub. observ.). Collectively, these studies have determined that genetic stability in human PSC is overtly linked to DNA replication. This article provides the experimental details and protocols required to perform the DNA fiber assay, optimized for studying replication stress in human PSC. The DNA fiber assay has become the gold-standard assay for direct monitoring of DNA replication-fork dynamics. The protocols described here are based on the previously described DNA fiber assay (Merrick, Jackson, & Diffley, 2004), which itself was modified from the original DNA fiber labeling (DIRVISH) technique (Jackson & Pombo, 1998). Two distinguishable modified nucleotides, CldU and IdU, are added sequentially to the cell culture medium to pulse label nascent DNA strands. The labeled nascent DNA is then released from the cells by lysis and stretched onto glass slides. By tilting the glass slide, the droplet of DNA solution runs down its length under the force of gravity, stretching the fibers as it runs (Parra & Windle, 1993). Stretching the DNA by spreading is fast and requires little material or preparation. It is therefore affordable and accessible to most research laboratories. However, the DNA fiber assay described here has relatively low resolution and does not detect ssDNA discontinuities, which means that analysis of the results must be done carefully to ensure that broken fibers and crossed fibers are not included. The addition of replication-stalling and DNA-damaging agents prior to and during the DNA-labeling procedure can facilitate the study of fork progression following stalling and when encountering DNA lesions. Human PSC are particularly sensitive to genotoxic agents, radiation, or agents used to induce replication block, even at doses that have little effect on cancer or somatic cells (Desmarais et al., 2016; Hyka-Nouspikel et al., 2012; Luo et al., 2012; Simara et al., 2017). However, these experiments have been reviewed extensively elsewhere (Quinet, Carvajal-Maldonado, Lemacon, & Vindigni, 2017). The procedure described in this article has been optimized for use with human PSC and has been successfully applied to several human iPSC and human ESC cell lines (Halliwell et al., 2020). We have utilized this assay to improve culture conditions: supplementing cultures with nucleosides alleviates replication stress and decreases the frequency of mitotic errors, highlighting that these events are linked in human PSC (Halliwell et al., 2020). The DNA fiber assay has revealed approaches that can be used to reduce the appearance of genetic instability in human PSC, which is necessary for the safe application of human PSC in regenerative medicine.

Critical Parameters
We have observed little difference in fiber assay results whether human PSC are cultured in mTeSR, E8, or Nutristem cell culture medium. Currently, feeder layer-dependent cell culture practices have not been tested. The whole assay can be done in a single day, although we have included a stop point following the fixation step in Basic Protocol 2, which allows the protocol to be run over a period of 2 days. We advise allowing the human PSC to recover for at least 48 hr following plating. This will permit 24 hr in culture without Rho-associated, coiled-coil containing protein kinase 1 inhibitor (Y-27632). It will also allow recovery from stressful re-plating, and allow the cells to re-enter the logarithmic growth phase. It is critical, where the results of experiments are to be compared, that the density of cells initially seeded and the confluency at the point of starting the experiment be consistent. Higher confluency may cause cell proliferation and DNA replication to slow. With regard to labeling of DNA, it is possible to increase or decrease the pulse labeling times, but this will result in longer and shorter DNA fibers, respectively. Again, the labeling time must be constant across comparable experiments. It is also important that the labeling time be accurately measured: a deviation of 1 min will change the labeling by 5%, and will confound the results of the experiment. To ensure timely labeling, it is advised that the reagents be prepared well ahead of starting the experiment. Also, when adding the second nucleotide analog (IdU), the plate should be removed from the incubator 1 min before the end of the first incubation period, to give a time buffer when preparing the second label. Once harvested, the cells should be suspended in ice-cold PBS and kept on ice. DNA spreading should then be performed within 30 min, to minimize cell death.
However, the DNA fiber assay described here has relatively low resolution and does not detect ssDNA discontinuities, which means that analysis of the results must be done carefully to ensure that broken fibers and crossed fibers are not included. The addition of replication-stalling and DNA-damaging agents prior to and during the DNA-labeling procedure can facilitate the study of fork progression following stalling, and when encountering DNA lesions. Human PSC are particularly sensitive to genotoxic agents, radiation, or agents used to induce replication block, even at doses that have little effect on cancer or somatic cells (Desmarais et al., 2016; Hyka-Nouspikel et al., 2012; Luo et al., 2012; Simara et al., 2017). However, these experiments have been reviewed extensively elsewhere (Quinet, Carvajal-Maldonado, Lemacon, & Vindigni, 2017). The procedure described in this article has been optimized for use with human PSC, and has been successfully applied to several human iPSC and human ESC cell lines (Halliwell et al., 2020). We have utilized this assay to improve culture conditions: supplementing cultures with nucleosides alleviates replication stress and decreases the frequency of mitotic errors, highlighting that these events are linked in human PSC (Halliwell et al., 2020). The DNA fiber assay has revealed approaches that can be used to reduce the appearance of genetic instability in human PSC, which is necessary for the safe application of human PSC in regenerative medicine.

Critical Parameters

We have observed little difference in fiber assay results whether human PSC are cultured in mTeSR, E8, or Nutristem cell culture medium. Currently, feeder layer−dependent cell culture practices have not been tested. The whole assay can be done in a single day, although we have included a stop point following the fixation step in Basic Protocol 2, which allows the protocol to be run over a period of 2 days. We advise allowing the human PSC to recover for at least 48 hr following plating. This will permit 24 hr in culture without Rho-associated, coiled-coil containing protein kinase 1 inhibitor (Y-27632). It will also allow recovery from the stress of re-plating, and for the cells to re-enter logarithmic growth phase. It is critical, where the results of experiments are to be compared, that the density of cells initially seeded and the confluency at the point of starting the experiment be consistent. Higher confluency may cause cell proliferation and DNA replication to slow. With regard to labeling of DNA, it is possible to increase or decrease the pulse labeling times, but this will result in longer and shorter DNA fibers, respectively. Again, the labeling time must be constant across comparable experiments. It is also important that the labeling time be accurately measured. Deviations of even 1 min will change labeling by 5%, and will confound the results of the experiment. To ensure timely labeling, it is advised that the reagents be prepared well ahead of starting the experiment. Also, when adding the second nucleotide analog (IdU), the plate should be removed from the incubator 1 min before the end of the first incubation period, to give a time buffer when preparing the second label. Once harvested, the cells should be suspended in ice-cold PBS and kept on ice. DNA spreading should then be performed within 30 min, to minimize cell death.
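Since a 1-min deviation corresponds to 5%, the intended pulse here is evidently 20 min. A tiny Python sketch of how a timing error propagates into the expected tract length; the 20-min pulse and the mid-range fork speed are stated assumptions, not protocol requirements:

```python
# Effect of labeling-time error on expected tract length.
# ASSUMPTION: a 20-min pulse, consistent with "1 min deviation = 5%",
# and a mid-range fork speed of 0.65 kb/min (reported range 0.5-0.75).
pulse_min = 20.0
fork_speed_kb_min = 0.65
for error_min in (0.0, 0.5, 1.0, 2.0):
    tract_kb = fork_speed_kb_min * (pulse_min + error_min)
    pct = 100.0 * error_min / pulse_min
    print(f"+{error_min:.1f} min -> {tract_kb:.2f} kb ({pct:.0f}% longer)")
```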
When spreading DNA fibers, the user should select slides where the droplet spreads down the length of the slide in 3 to 5 min at a constant rate. In our experience, batches of slides can differ from one another, and each batch should be optimized.

Troubleshooting

Table 2 describes problems that can arise with various steps in the assay, along with their possible causes and solutions.

Statistical Analysis

It is important to take images of fields with minimal fiber crossovers across the whole slide to ensure a robust measurement of the progressing replication forks. A minimum of 150-200 individual fibers must be measured to account for the heterogeneity between progressing forks. The conversion factor for stretched DNA fibers is 2.59 kb/µm (Jackson & Pombo, 1998). Consideration should be given to the presentation of data. Histograms and scatter plots are appropriate for capturing differences in replication fork progression. Statistical significance can be calculated using an unpaired Mann-Whitney U-test.

Understanding Results

The DNA fibers produced can be variable but, in our experience, under normal growth conditions, they should contain equal lengths of CldU and IdU staining. When measured, we find the replication fork speed to vary between 0.5 and 0.75 kb/min. A reduced fiber length or fork speed can indicate replication stress. Monitoring the frequency of replication events can provide valuable information regarding the replication processes going on during the culture of human PSC (Fig. 3). A strict control of replication origin density is required to maintain chromosomal stability (Prioleau & MacAlpine, 2016). Excessive replication-origin firing can deplete the proteins and metabolites required for efficient DNA replication (Sørensen & Syljuåsen, 2012). A decrease in inter-origin distance or a higher density of IdU-only labeled fibers is a measure of increased origin density, suggesting greater numbers of simultaneously firing origins of replication. Fork stalling is a prerequisite for DNA breakage, which can become a substrate for genetic instability (Toledo, Neelsen, & Lukas, 2017). Measuring the frequency of CldU-only fibers or the ratio of the IdU labels on a bi-directional fork can be used to measure fork stalling. A shorter IdU fiber on one side of a bi-directional fork implies a fork-stalling event, and will result in a greater ratio between the IdU tracts.

Time Considerations

Basic Protocol 1: Cells should be grown for a minimum of 48 hr following seeding; we have found 72 hr to be optimal. Steps 1 to 6: ∼40 min will be needed for cell seeding; ∼20 min per day will be needed to refresh the medium. Steps 7 to 18: ∼1 hr will be needed to pulse label and harvest the cells.

Basic Protocol 2: Steps 1 to 4: preparation of cell lysate for DNA spreading will require ∼15 min. Steps 5 to 7: DNA spreading will require ∼20 min for spreading one set of three slides simultaneously.

Support Protocols 1 and 2: Several hours will be required for image acquisition and data analysis, but this will be variable depending on the quality of the DNA fibers and the biological question being asked.
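Putting the numbers above together, fork speed follows from tract length via the 2.59 kb/µm conversion and the pulse duration, and conditions can be compared with the recommended unpaired Mann-Whitney U-test. A minimal Python sketch; the tract lengths are hypothetical stand-ins for measurements (e.g., from ImageJ), the 20-min pulse is an assumption, and a real analysis would use 150-200 fibers per condition:

```python
# Replication fork speed from measured tract lengths, plus the recommended
# unpaired Mann-Whitney U-test between two conditions.
from scipy.stats import mannwhitneyu

KB_PER_UM = 2.59   # conversion for stretched fibers (Jackson & Pombo, 1998)
LABEL_MIN = 20.0   # ASSUMPTION: a 20-min pulse; use your actual labeling time

control_um = [5.2, 4.8, 5.6, 5.0, 4.5]   # hypothetical tract lengths (µm)
treated_um = [2.1, 2.6, 1.9, 2.4, 2.2]

def speeds(lengths_um):
    """Convert tract lengths (µm) to fork speeds (kb/min)."""
    return [l * KB_PER_UM / LABEL_MIN for l in lengths_um]

ctrl, trt = speeds(control_um), speeds(treated_um)
print(f"control mean: {sum(ctrl)/len(ctrl):.2f} kb/min")   # ~0.65, within 0.5-0.75
print(f"treated mean: {sum(trt)/len(trt):.2f} kb/min")

u, p = mannwhitneyu(ctrl, trt, alternative="two-sided")
print(f"Mann-Whitney U = {u}, P = {p:.4f}")
```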
Helicobacter pylori infection and esophageal adenocarcinoma: a review and a personal view Esophageal adenocarcinoma (EAC) is etiologically associated with gastroesophageal reflux disease (GERD). There is evidence to support the sequence GERD, Barrett's esophagus (BE), dysplasia, and finally EAC, with Helicobacter pylori (H. pylori) being implicated in each step to EAC. On the other side of this relation stands the hypothesis of the protective role of H. pylori against EAC. Based on this controversy, our aim was to review the literature, specifically original clinical studies and meta-analyses linking H. pylori infection with EAC, but also to provide our personal and others' relative views on this topic. From a total of 827 articles retrieved, 10 original clinical studies and 6 meta-analyses met the inclusion criteria. Original studies provided inconclusive data on an inverse or a neutral association between H. pylori infection and EAC, whereas meta-analyses of observational studies favor an inverse association. Despite these data, we consider that the positive association between H. pylori infection and GERD or BE, but not EAC, is seemingly a paradox. Likewise, the oncogenic effect of H. pylori infection on gastric and colon cancer, but not on EAC, also seems to be a paradox. In this regard, well-designed prospective cohort studies with a powered sample size are required, in which potential confounders should be taken into consideration from the design stage onwards. Introduction Helicobacter pylori (H. pylori) is a common bacterium and infects almost half of the global population [1], being strongly associated with upper gastrointestinal morbidity. Its prevalence is still high in most countries; there were approximately 4.4 billion individuals with H. pylori infection worldwide in 2015, and H. pylori remains highly prevalent in certain ethnic populations and in migrants moving from high-prevalence countries [1]. The primary pathogenic role of H. pylori in peptic ulcer formation is supported by robust evidence [2], and H. pylori was recognized as a true class I carcinogen for gastric cancer by the International Agency for Research on Cancer [3] and the World Health Organization in 1994. On top of this, numerous studies claim to have implicated H. pylori in a long list of systemic disorders, including cardio-cerebrovascular [4,5], degenerative [6][7][8], and metabolic syndrome (MetS)-related conditions [4,9]. Likewise, the accumulated oncology literature suggests an etiological relation of H. pylori with extra-gastric neoplasms, such as pancreatic [10], colorectal [11][12][13], and esophageal cancers, at least in some subpopulations [14]. Esophageal cancer is among the most frequent neoplasms, a main cause of cancer-related deaths worldwide and a clinically challenging disease requiring a multidisciplinary approach [15]. Esophageal cancer is divided into two histological types: esophageal squamous cell carcinoma (ESCC), associated mostly with environmental risk factors (e.g., smoking and alcohol consumption), and esophageal adenocarcinoma (EAC), located close to the gastroesophageal junction and etiologically coupled with gastroesophageal reflux disease (GERD). In westernized populations, the incidence of EAC has increased sharply, displacing ESCC, which accounted for most esophageal cancer incidence 50 years ago [16,17]. Current evidence for the protective or harmful effect of H. pylori on EAC is conflicting.
On this basis, we aimed to review the literature, specifically original clinical studies and meta-analyses linking H. pylori infection with EAC, but also to provide our personal and others' relative views on this topic.

Materials and methods

A literature search was carried out in the PubMed database using the following query, developed from a combination of MeSH and non-MeSH terms: [(Helicobacter pylori) OR (Hp) OR (H. pylori)] AND [(esophageal neoplasm) OR (esophageal carcinoma) OR (esophageal cancer) OR (esophageal adenocarcinoma)]. Additional studies were identified by hand search from references of the eligible articles and commentaries on the current topic ("hand searching"). The search was completed on June 25, 2017. The selection process was performed independently by two researchers (CZ and JK). Eligibility was based on the following inclusion criteria: clinical studies or meta-analyses reporting on the association between H. pylori and EAC; and histological confirmation of EAC. Exclusion criteria were: studies in languages other than English; conference abstracts; reviews; commentaries; editorials; and experimental studies. Subsequently, a quality evaluation of the eligible original studies was conducted. For the purposes of the quality assessment, the Methodological Index for Non-Randomized Studies (MINORS) was used. MINORS is a validated and established index for evaluating the methodological quality of non-randomized studies. This index involves 12 criteria, 8 of which have been designed for non-comparative studies, whereas the other 4 criteria apply to comparative studies. These criteria are scored on a scale developed by Slim et al [18]: 0 (not reported), 1 (reported but inadequate), and 2 (reported and adequate). The maximum score for comparative studies is 24 and for the 8-item index is 16, while the minimum score is 0. The aforementioned two reviewers (CZ and JK) independently evaluated each study according to the MINORS index, and any scoring differences were discussed until consensus was reached. With regard to the 12-item index, a score greater than 16 is indicative of well-designed studies [19,20]. No threshold is currently proposed for the 8-item index. We aimed to evaluate the randomized controlled trials (RCTs) with the Cochrane tool, but no RCT was retrieved.

Selection process

The initial search in PubMed resulted in the retrieval of 607 articles. Through manual searching, 220 articles were added, bringing the total to 827. After the initial screening on the basis of their title and/or abstract, 784 articles were excluded and the full text of 43 articles was evaluated for eligibility. Finally, 10 original studies and 6 meta-analyses were selected. A flowchart illustrating the selection process is presented in Fig. 1. No RCTs fulfilling the eligibility criteria were identified. This was not unexpected, since the preferred study design in evidence-based medicine for investigating prognostic and risk factors is the cohort study, followed by the case-control study (www.cebm.net/ocebm-levels-of-evidence).

Methodological quality

The MINORS 8-item index was applied to all selected original studies, and the results of the MINORS scoring are presented in Table 1. MINORS scores ranged from 3 to 11. The major limitations on the methodology of the selected studies were a retrospective design and a non-calculated or small sample size. Cohen's kappa coefficient, measuring the inter-rater agreement (CZ and JK) for each MINORS item, ranged between 0.84 and 0.92 (all P<0.01).
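For illustration, Cohen's kappa for two raters scoring MINORS items (0/1/2) reduces to observed versus chance-expected agreement. A minimal Python sketch with hypothetical ratings, not the study's actual scores:

```python
# Cohen's kappa for two raters' MINORS item scores (0, 1, or 2).
# The ratings below are hypothetical illustrations only.
from collections import Counter

rater1 = [2, 1, 2, 0, 2, 1, 2, 2, 1, 0]
rater2 = [2, 1, 2, 0, 1, 1, 2, 2, 1, 0]

n = len(rater1)
p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n

# Chance-expected agreement from each rater's marginal score frequencies
c1, c2 = Counter(rater1), Counter(rater2)
p_expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2

kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"kappa = {kappa:.2f}")  # 0.84 for these illustrative ratings
```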
Summary of included meta-analyses

The first meta-analysis of the association between H. pylori infection and EAC was published in 2007 [31]. Four further meta-analyses of observational studies were retrieved on the same topic [32][33][34][35]. All the meta-analyses reported lower rates of EAC in H. pylori-positive compared with H. pylori-negative individuals; similar EAC rates were reported in some comparisons, as shown in one meta-analysis [33]. As expected, there was overlap of the included studies in all meta-analyses. Although there was heterogeneity in some of the meta-analyses, meta-regression to assess the source of heterogeneity was not performed in any of them.

Personal view

Although some studies and all meta-analyses reported an inverse association between H. pylori infection and EAC, interpreted as a protective effect by some authors, our personal view and those of others do not agree. Our position starts from a simple question: would a physician propose deliberately infecting high-risk populations (e.g., the obese, cigarette smokers, and consumers of large quantities of red or processed meat) with H. pylori so as to protect them from EAC? In our opinion, a randomized controlled trial would never be conducted to answer this question, since it would transgress ethical boundaries. In this regard, Prof. David Y. Graham maintained that H. pylori is not and never was "protective" against anything [36]. Fig. 2 summarizes the main results of the review, as well as the main points of our view. The principal hypothesis posed by most authors of the aforementioned meta-analyses and a critical review on esophageal cancer epidemiology [37] is that H. pylori infection, with concomitant atrophy of the gastric corpus and loss of parietal cells, results in a reduction in reflux acidity and, consequently, in reflux esophagitis, Barrett's esophagus (BE), and EAC development. There is evidence supporting the sequence GERD → BE → dysplasia → EAC and the implication of H. pylori separately in each single step to EAC, at least in certain subpopulations. BE is a complication of long-standing GERD and a well-known precursor lesion of EAC [38,39]; GERD plays an essential role in the pathophysiology and the clinical identification of BE, which represents the only known complication derived from GERD [38,39]. The effect of H. pylori on BE varies according to geographic location. We showed that H. pylori infection is common in Greek patients with GERD, even in those without endoscopically proven reflux disease [40], and H. pylori eradication results in adequate control of GERD symptoms and improves esophagitis [41]. Consistent findings were reported by Schwizer et al [42], who also observed improvement in GERD symptoms after H. pylori treatment. Interestingly, other authors, previous supporters of the hypothesis that H. pylori "protects" against GERD, relented, claiming that H. pylori therapy does not cause or protect against GERD, and recommending H. pylori eradication in GERD [43]. Moreover, there are epidemiologic studies supporting our and others' data: a large-scale study (approximately 21,000 cases) reported that the decline in H. pylori infection parallels the reduction in peptic ulcer prevalence, and that the rise in GERD and/or reappearance of GERD following H. pylori therapy is rare. Contrary to expectations, patients hospitalized with duodenal ulcers (approximately 61,500 cases), apparently attributed to H.
pylori infection, had a 70% increased risk of EAC [44]. Malaysians, who for a long time have had a low prevalence of H. pylori infection, also show a low incidence of GERD, BE and distal esophageal cancers, signifying that H. pylori infection is not protective against the abovementioned conditions and that its absence may be beneficial [45]. The prevalence of EAC with persistent H. pylori infection is higher than that of EAC after eradication therapy [36,38]. This evidence further supports the view that H. pylori is not "protective" against anything, including GERD [36] and possibly its complications, BE and EAC. Apart from H. pylori, a number of other environmental agents (e.g., the upper gastrointestinal microbiota) seem to play a role in GERD and BE pathogenesis; the presence of esophageal nitrate-reducing Campylobacter species in BE patients might suggest a connection with BE induction, maintenance, or exacerbation [41]. Beyond epidemiologic data, H. pylori might be involved in GERD pathophysiology via diverse mechanisms, such as: a) induction of mediators, cytokines and nitric oxide, which might disturb the lower esophageal sphincter (LES); b) direct injury of the esophageal mucosa by bacterial products; c) augmented release of prostaglandins that sensitizes afferent nerves and decreases LES pressure; and d) increased acidity due to gastrin induction that aggravates GERD [40]. At the molecular level, gastrin, induced by H. pylori infection, is an oncogenic growth factor that promotes upper and lower gastrointestinal tract oncogenesis. Specifically, gastrin appears to play an important role in neoplastic progression in BE. Gastrin stimulates proliferation via Janus kinase (JAK)2 and Akt-dependent nuclear factor-kappa B (NF-κB) activation in Barrett's EAC cells, displays an anti-apoptotic effect via upregulation of Bcl-2 and survivin, and induces mitogenic and oncogenic cyclo-oxygenase (COX)-2 expression [38,39]. In this regard, H. pylori infection activates NF-κB, an oxidant-sensitive transcription regulator of inducible expression of inflammatory genes, including COX-2, which regulates gastrointestinal neoplasm cell growth and proliferation. Specifically, H. pylori infection promotes the expression of NF-κB and COX-2 in esophageal epithelial cells, playing a role in the inflammatory process associated with BE and esophageal oncogenesis [38]. Upon colonizing the esophagus, H. pylori increases the severity of esophageal inflammation and the BE prevalence [38], as can be derived from the following data: a) H. pylori infection prevalence is high in BE; b) neither H. pylori infection nor H. pylori infection by CagA-positive strains decreases the risk of BE in some populations that have a high incidence of H. pylori infection; c) H. pylori infection might induce specific molecular changes (genetic instability, E-cadherin methylation, monoclonal antibody Das-1) linked with BE pathophysiology; and d) H. pylori promotes Ki-67 expression, and greater esophageal Ki-67 expression was reported in BE patients compared with GERD controls. A progressive Ki-67 proliferation fraction was observed in the normal esophageal epithelium → BE → dysplasia → EAC sequence [38,39]. Insulin resistance (IR), the key MetS component [46], is connected with GERD, BE and EAC [4]. Since relevant data indicate a relationship between H. pylori and IR [46] and other parameters of MetS [4], H. pylori-related MetS may contribute to the GERD → BE → EAC sequence in some ethnic populations [9].
Recent data show that lower serum adiponectin levels are associated with BE progression, while experimentally adiponectin induces an antitumor effect in Barrett's cell lines and prevents growth-factor signaling [4]. H. pylori therapy leads to an increase in levels of serum total adiponectin and its isoforms, thereby displaying a possible protective effect against malignant progression of BE [4]. Several studies support an association between BE and colonic neoplasms, including adenomas and adenocarcinomas [47][48][49]. It is conceivable that BE and colorectal neoplasms share a common, as yet unidentified factor promoting the oncogenesis of BE-associated EAC and colorectal neoplasms. A potential association of both pathological conditions may be attributed to genetic predisposition or common environmental risk factors. H. pylori infection might promote both diseases [50]. Both H. pylori infection and BE are linked with an increased risk of the development of colorectal adenoma (CRA) and colorectal cancer (CRC) [12,[50][51][52]. H. pylori infection appears to contribute to the GERD → BE → EAC and CRA → CRC sequences, at least in certain populations, and its eradication may abrogate these oncogenic properties [12,51,52]. Specifically, active H. pylori infection appears to be involved in the pathogenesis of the normal colon epithelium → CRA → CRC sequence [12]. Excessive nicotinamide adenine dinucleotide phosphate (NADPH) oxidase activity and the production of reactive oxygen species (ROS) may promote oncogenic signaling, driving colorectal oncogenesis [13]. Likewise, in the H. pylori-related GERD → BE → EAC sequence, NADPH activation and NADPH-derived ROS may cause DNA damage, thereby contributing to the progression from BE to EAC [13]. In conclusion, existing epidemiologic studies provided inconclusive data on an inverse or a neutral association between H. pylori infection and EAC, whereas meta-analyses of observational studies favor an inverse association. A particular drawback of most original studies is confounding factors, i.e., multiple factors that were not taken into consideration in the study design or the analysis of data, but may possibly contribute to the pathogenesis of EAC. This might have affected the results of the meta-analyses, since they included original studies that did not adequately adjust for potential confounders. Furthermore, the source of heterogeneity, when it was observed, was not evaluated in the meta-analyses. In this regard, well-designed prospective cohort studies with a powered sample size are required, in which potential confounders should be taken into account. This may resolve the paradox of the positive association of H. pylori infection with GERD or BE, but not with EAC, as well as the paradox of the oncogenic effect of H. pylori infection on gastric cancer and CRC, but not on EAC. Metabolomics may also prove helpful in this direction in the near future, as H. pylori-related metabolites may provide further data.
Monitoring multidimensional aspects of quality of life after cancer immunotherapy: protocol for the international multicentre, observational QUALITOP cohort study

Introduction: Immunotherapies, such as immune checkpoint inhibitors and chimeric antigen receptor T-cell therapy, have significantly improved the clinical outcomes of various malignancies. However, they also cause immune-related adverse events (irAEs) that can be challenging to predict, prevent and treat. Although they likely interact with health-related quality of life (HRQoL), most existing evidence on this topic has come from clinical trials with eligibility criteria that may not accurately reflect real-world settings. The QUALITOP project will study HRQoL in relation to irAEs and its determinants in a real-world study of patients treated with immunotherapy.

Methods and analysis: This international, observational, multicentre study takes place in France, the Netherlands, Portugal and Spain. We aim to include about 1800 adult patients with cancer treated with immunotherapy in a specifically recruited prospective cohort, and to additionally obtain data from historical real-world databases (ie, databiobanks) and medical administrative registries (ie, national cancer registries) in which relevant data regarding other adult patients with cancer treated with immunotherapy has already been stored. In the prospective cohort, clinical health status, HRQoL and psychosocial well-being will be monitored until 18 months after treatment initiation through questionnaires (at baseline and 3, 6, 12 and 18 months thereafter), and by data extraction from electronic patient files. Using advanced statistical methods, including causal inference methods, artificial intelligence algorithms and simulation modelling, we will use data from the QUALITOP cohort to improve the understanding of the complex relationships among treatment regimens, patient characteristics, irAEs and HRQoL.

Ethics and dissemination: All aspects of the QUALITOP project will be conducted in accordance with the Declaration of Helsinki and with ethical approval from a suitable local ethics committee, and all patients will provide signed informed consent. In addition to standard dissemination efforts in the scientific literature, the data and outcomes will contribute to a smart digital platform and medical data lake. These will (1) help increase knowledge about the impact of immunotherapy, (2) facilitate improved interactions between patients, clinicians and the general population and (3) contribute to personalised medicine.

Trial registration number: NCT05626764.

they pilot tested or validated?

4. Data analysis plan is also rather rudimentary. The paper would be much stronger if the authors included one or two examples of specific hypotheses and then described the data elements and the specific analytic approaches used to test those hypotheses.

5. It is hard to judge the potential impact of the planned study without any information on the anticipated cohort size and statistical power. In the STROBE statement, item #10 is marked N/A. The authors are encouraged to provide at least a preliminary estimate of how many people are expected to be recruited.

One minor point: The bolded text in the introduction looks strange. This is typically done for grant applications. The reader may be left wondering if some sections of the paper were copied and pasted from an earlier proposal.

Reviewer 1: Dr. P.
Zarogoulidis, Aristotle Univ Thessaloniki

Comments to the Author: An excellent manuscript in its field, I have no corrections.

We thank Dr. Zarogoulidis for taking the time to review our manuscript. Your compliments are much appreciated.

Reviewer 2: Dr. Michael Goodman, Emory University Rollins School of Public Health

Comments to the Author: Thank you for the opportunity to review this paper. Although the idea of creating a multicenter cohort of immunotherapy patients is praiseworthy, the description of the proposed study methods is somewhat vague. Below, I am offering a few suggestions that in my opinion will strengthen this paper.

We thank Dr. Goodman for taking the time to review our manuscript. We have provided our responses to the comments and suggestions raised below.

1. I am afraid that a reader who only has time to review the abstract will come away without a clear understanding of the study design and its methods. For example, the abstract lacks a clear explanation of what is meant by "a tailored questionnaire completed at regular intervals", and the actual methods of data analysis are not mentioned.

Thank you for your comment. We have updated the methods section to better explain the design of the study. Since a broad variety of analyses will be performed, we try to briefly clarify but could not detail more.

Abstract, lines 68-80: This international, observational, multi-centre study takes place in France, the Netherlands, Portugal and Spain. We aim to include about 1800 adult cancer patients treated with immunotherapy in a specifically recruited prospective cohort, and to additionally obtain data from historical real-world databases (i.e. databiobanks) and medical administrative registries (i.e. national cancer registries) in which relevant data regarding other adult cancer patients treated with immunotherapy has already been stored. In the prospective cohort, clinical health status, HRQoL and psychosocial well-being will be monitored until 18 months after treatment initiation through questionnaires (at baseline and 3, 6, 12 and 18 months thereafter), and by data extraction from electronic patient files. Using advanced statistical methods, including causal inference methods, artificial intelligence algorithms and simulation modelling, we will use data from the QUALITOP cohort to improve the understanding of the complex relationships between treatment regimens, patient characteristics, irAEs and HRQoL.

2. Section on Patient Selection also lacks specifics. Who will be recruiting the participants and through which methods?

Thank you for pointing this out. We have included the following statement under the header "Patient selection":

b. The methods of clinical data collection are also not clear. Will these data come from electronic health records collected by specialized computer programs, or will this be done by human data abstractors, or perhaps a combination of the two methods?

The data collection relies on manual extraction from electronic health records. This has been clarified in the methods section.

Page 10, lines 207-209: "Clinical data will be manually extracted from electronic patient files for both cohorts."

Page 11, line 222: "Clinical data will be manually extracted from electronic patient files for each routine visit in the first 6 months of treatment and at fixed timepoints in the following year (9, 12 and 18 months)."

c.
The sources of the ad hoc sections of the questionnaire (

We included the following information regarding the questionnaire development in the manuscript:

Page 14, lines 263-268: "Ad-hoc items are used for domains for which no suitable validated questions/questionnaires were available. The items are based on expert opinions and prior experience with research in similar patient populations. Especially for domains 5 ("Medication and treatment") and 6 ("Opinions on cancer treatment and care"), clinicians' knowledge and experience with immunotherapy treatment was of key importance in developing and evaluating the ad hoc items."

4. Data analysis plan is also rather rudimentary. The paper would be much stronger if the authors included one or two examples of specific hypotheses and then described the data elements and the specific analytic approaches used to test those hypotheses.

We acknowledge that the analytic strategies are not described in detail in the manuscript, owing to the diversity of hypotheses and methods that the QUALITOP project encompasses. For instance, the different parts of the project pertain to the three types of statistical goals: description, explanation and prediction. As such, we could not provide a complete description of the hypotheses and statistical approaches used. However, below is a specific example of the use of joint modelling to study the associations between ICI treatment and changes in health-related QoL, using the OncoLifeS study (described in Table 1). The data come from the databiobank OncoLifeS, which is a prospective cohort study conducted at the University Medical Centre Groningen, the Netherlands. We will include patients diagnosed with lung cancer, older than 18 years and receiving treatment with ICI for any given duration, who completed at least one questionnaire on QoL shortly before (<6 weeks), during or after their ICI treatment. The aim will be to study the associations between ICI treatment and changes in health-related QoL in the two years after treatment initiation among patients diagnosed with lung cancer. First, the evolution of the different components of QoL over time for individual patients will be described using spaghetti plots and boxplots. Then, joint models will be used to estimate the effect of ICI treatment on QoL over the two years following the initiation of ICI treatment.

5. It is hard to judge the potential impact of the planned study without any information on the anticipated cohort size and statistical power. In the STROBE statement, item #10 is marked N/A. The authors are encouraged to provide at least a preliminary estimate of how many people are expected to be recruited.

We understand your concern and have modified the abstract and the manuscript accordingly.

One minor point: The bolded text in the introduction looks strange. This is typically done for grant applications. The reader may be left wondering if some sections of the paper were copied and pasted from an earlier proposal.

Thank you for pointing this out. However, we do not see any bolded text in the version of the manuscript we submitted. This may have happened in the online submission system; we will pay attention to this during resubmission.
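As a simplified, non-authoritative illustration of the longitudinal half of the analysis sketched in the response to comment 4 (a full joint model would additionally couple a survival submodel), a linear mixed-effects model fitted to simulated QoL trajectories might look as follows; all variable names and values are hypothetical stand-ins for OncoLifeS records:

```python
# Simplified illustration of modelling QoL trajectories after ICI initiation.
# A random-intercept linear mixed-effects model on simulated data stands in
# for the longitudinal part of a joint model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
visits = [0, 3, 6, 12, 18]                      # months since ICI initiation
rows = []
for pid in range(100):                          # 100 simulated patients
    baseline = rng.normal(65, 10)               # hypothetical QoL score
    slope = rng.normal(-0.4, 0.3)               # per-month change
    for t in visits:
        rows.append({"patient_id": pid, "month": t,
                     "qol": baseline + slope * t + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Random intercept per patient; fixed effect of time since treatment start
result = smf.mixedlm("qol ~ month", df, groups=df["patient_id"]).fit()
print(result.summary())
```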
Type Ia supernovae in the star formation deserts of spiral host galaxies Using a sample of nearby spiral galaxies hosting 185 supernovae (SNe) Ia, we perform a comparative analysis of the locations and light curve decline rates $(\Delta m_{15})$ of normal and peculiar SNe Ia in the star formation deserts (SFDs) and beyond. To accomplish this, we present a simple visual classification approach based on the UV/H$\alpha$ images of the discs of host galaxies. We demonstrate that, from the perspective of the dynamical timescale of the SFD, where the star formation (SF) is suppressed by the bar evolution, the $\Delta m_{15}$ of SN Ia and progenitor age can be related. The SFD phenomenon gives an excellent possibility to separate a subpopulation of SN Ia progenitors with ages older than a few Gyr. We show, for the first time, that the SFDs contain mostly faster declining SNe Ia $(\Delta m_{15}>1.25)$. For the galaxies without SFDs, the region within the bar radius, and the outer disc, contain mostly slower declining SNe Ia. To better constrain the delay times of SNe Ia, we encourage new studies (e.g. integral field observations) using the SFD phenomenon on larger and more robust datasets of SNe Ia and their host galaxies.

INTRODUCTION

It is believed that the progenitor of a Type Ia supernova (SN Ia) is a carbon-oxygen white dwarf (WD) in a close binary, whose properties and explosion channels are still under debate (e.g. Livio & Mazzali 2018). SNe Ia show an important relation between the luminosity at B-band maximum and their light curve (LC) decline rate ∆m15: faster declining SNe are fainter (Phillips 1993). The ∆m15 is the difference in magnitudes between maximum light and 15 d after maximum, and is considered a practically extinction-independent parameter (e.g. Hakobyan et al. 2020, hereafter H20). Much work has been done to determine the nature of SN Ia progenitors by studying the relations between the properties of SNe Ia and the characteristics of the galaxies in which they are discovered (e.g. Gallagher et al. 2005; Gupta et al. 2011; Rigault et al. 2013; Uddin et al. 2017; Kang et al. 2020). In particular, the SN Ia LC decline rate can be linked to the global age of the host galaxy (e.g. Shen et al. 2017), which is usually considered a rough proxy for the SN Ia delay time (i.e. the time interval between the progenitor formation and its subsequent explosion). Recently, in H20, we showed that the correlation between the ∆m15 of normal SNe Ia and the hosts' global age appears to be due to the superposition of at least two distinct populations of faster and slower declining SNe Ia from older and younger stellar populations, respectively. For the most common peculiar SNe Ia, we showed that 91bg-like (subluminous and fast declining) events probably come only from the old population, while 91T-like (overluminous and slow declining) SNe originate only from the young population of galaxies. Such results have also been obtained from more accurate age estimations of SNe Ia host populations, using the local properties of SN sites (e.g. Rigault et al. 2013; Panther et al. 2019; Rose et al. 2019). Eventually, the SN LC properties, delay time distribution (DTD), and relations with other host characteristics allow the SN Ia progenitor scenarios to be constrained (see discussion in H20). In this Letter, for the first time, we link the ∆m15 of SN Ia with the progenitor age from the perspective of the star formation desert (SFD) phenomenon.
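For concreteness, given a sampled B-band light curve, ∆m15 can be extracted by locating maximum light and interpolating the magnitude 15 d later. A minimal Python sketch with a hypothetical light curve (not data from H20):

```python
# Delta m15 from a sampled B-band light curve: the magnitude difference
# between maximum light and 15 d later. The light-curve points are
# hypothetical illustrations.
import numpy as np

t = np.array([-10, -5, 0, 5, 10, 15, 20, 25])    # days relative to B maximum
mB = np.array([15.8, 15.2, 15.0, 15.3, 15.8, 16.3, 16.7, 17.0])

i_peak = np.argmin(mB)                   # brightest point (smallest magnitude)
t_peak, m_peak = t[i_peak], mB[i_peak]
m_15 = np.interp(t_peak + 15, t, mB)     # linear interpolation at +15 d
delta_m15 = m_15 - m_peak
print(f"Delta m15 = {delta_m15:.2f} mag")  # ~1.3 here: a faster declining SN Ia
```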
In short, the SFD, observed in some spiral galaxies (e.g. James & Percival 2015), is a region swept up by a strong bar, with almost no recent star formation (SF) on both sides of the bar. There is increasing evidence from observations and simulations that the SFD consists of old stars, and that the quenching of SF in this region was due to the bar formation (e.g. Donohoe-Keyes et al. 2019; George et al. 2020), which dynamically removed gas from the SFD over a timescale of ∼2 Gyr (e.g. Donohoe-Keyes et al. 2019). The bar can show SF through its length, or SF can be found only at the bar ends, or the entire bar might not show SF (e.g. Díaz-García et al. 2020). Some bars might even dissolve during the evolution (e.g. Shen & Sellwood 2004), leaving the central SFD in the galactic disc. On the other hand, it can be considered that the SFD is practically not contaminated by the radial migration of young stars from the outer disc (e.g. Minchev et al. 2018). Therefore, from the dynamical age constraint of the SFD (≳2 Gyr), we consider that the DTD of its SNe Ia is truncated on the younger side, starting from a few Gyr, in comparison with those outside the SFD, where mostly young/prompt SNe Ia occur (delay time of ∼500 Myr; Raskin et al. 2009). Given this, and if the progenitor's age is the main driver of the decline rate, the SNe Ia discovered in the SFDs should have faster declining LCs. In this study, we simply demonstrate the validity of this assumption according to the picture briefly described above, which provides an excellent new opportunity to constrain the nature of SN Ia progenitors.

SAMPLE SELECTION AND REDUCTION

We selected the sample for our study from the well-defined sample of H20, which includes data on the spectroscopic subclasses of nearby (≲150 Mpc) SNe Ia (normal, 91T-, 91bg-like, etc.) and their B-band LC decline rates (∆m15), as well as homogeneous data on the host galaxies (distance, corrected ugriz magnitudes, morphological type, bar detection, etc.). The SFDs are observed in some barred Sa-Scd galaxies (e.g. James & Percival 2015); therefore, we restricted the morphologies of SN hosts to the mentioned types, with barred and unbarred counterparts. We also ignored hosts with strong morphological disturbances, which may add undesirable projection effects and complicate the assignment of an SN Ia to the SFD. As shown in Wang et al. (1997), Anderson et al. (2015), and Hakobyan et al. (2016), the vast majority of SNe Ia in spiral galaxies belong to the disc, rather than the bulge (spherical) component. Here, we checked this observational fact for the most common SN Ia subclasses separately. If SNe Ia belong mostly to the disc component, where the SFD can be located, one would expect that the distributions of projected and R25-normalized galactocentric distances of SNe Ia along the major (|U|/R25) and minor (|V|/R25) axes would be different, being distributed closer to the major axis (i.e. smaller |V|/R25 in comparison with |U|/R25; see Hakobyan et al. 2016 for more details). Table 1 shows the results of the two-sample Kolmogorov-Smirnov (KS) and Anderson-Darling (AD) tests on the comparison of the major- versus minor-axis distributions of 238 SNe Ia (based on a subsample from H20). The P-values of the tests suggest that the SNe Ia distribution along the major axis is inconsistent with that along the minor axis in Sa-Scd host galaxies with different inclinations, showing that the SN Ia subclasses in these hosts originate mostly from the disc population.
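A rough sketch of such a major- versus minor-axis comparison, using synthetic |U|/R25 and |V|/R25 samples and a permutation estimate of the KS P-value; the exact MC scheme of H20 may differ (it uses 10^5 iterations):

```python
# Two-sample KS/AD comparison of SN offsets along the major (|U|/R25) and
# minor (|V|/R25) axes, with a permutation-based KS P-value.
# The samples below are synthetic stand-ins, not the H20 data.
import numpy as np
from scipy.stats import ks_2samp, anderson_ksamp

rng = np.random.default_rng(1)
u = np.abs(rng.normal(0.0, 0.45, 238))   # stand-in for |U|/R25
v = np.abs(rng.normal(0.0, 0.25, 238))   # stand-in for |V|/R25 (closer to major axis)

d_obs = ks_2samp(u, v).statistic
pooled = np.concatenate([u, v])
n_iter, count = 10_000, 0
for _ in range(n_iter):
    perm = rng.permutation(pooled)
    d = ks_2samp(perm[:len(u)], perm[len(u):]).statistic
    count += d >= d_obs
print(f"KS D = {d_obs:.3f}, MC P = {count / n_iter:.4f}")
print(anderson_ksamp([u, v]))            # AD test as a cross-check
```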
It should be noted that, because of absorption and projection effects in the discs, the SFDs are observed only in spiral galaxies with low/moderate inclinations (e.g. James & Percival 2015). Therefore, we also limited our host galaxy sample to inclinations i < 70°. In total, there are 185 normal, 91T- and 91bg-like SNe Ia meeting the above criteria, of which 79 and 106 events have barred and unbarred hosts, respectively. These SNe Ia were discovered in 180 host galaxies, five of which host two events each. For these host galaxies, we used archival Galaxy Evolution Explorer (GALEX) far- and near-UV (Martin et al. 2005), Swift UV (Roming et al. 2005), and available Hα images (e.g. Sánchez-Menguiano et al. 2018) to visually classify the morphology of their ionized discs into four SF classes: i) SF is distributed along the entire length of the unbarred disc, from the centre to the edge (97 SNe Ia hosts); ii) as in the first case, but for a barred disc, with SF along the bar and without an SFD (36 objects); iii) SF along the bar, or SF only at the bar ends, with an SFD (43 objects); iv) SF is distributed along the unbarred disc, except for the central SFD (9 objects). In all cases, circumnuclear SF is also possible. The r-band and UV images representing the classes of galaxies can be found in Fig. 1. Note that the cosmic surface brightness dimming is insignificant for our galaxy sample, since the hosts' z ≲ 0.036 (mean z = 0.017 ± 0.009). Based on the optical g-band images, we measured the bar radii of host galaxies using ellipse fitting to the bar isophotes with maximum ellipticity (see Díaz-García et al. 2016 and references therein for more details on the bar radius measurement method). Then we deprojected each bar radius for host inclination and normalized it to the disc radius, i.e. r_bar = R_bar/R25. For unbarred hosts with a central SFD (class iv in Fig. 1), we used the UV images to roughly estimate the radii of the SFDs (r_SFD = R_SFD/R25), where almost no UV flux is detected. Note that for our sample the mean r_SFD ≈ mean r_bar = 0.30. For further simplicity, we define a demarcation radius as r_dem = r_bar for class ii and iii discs, and r_dem = r_SFD for class iv discs. For SNe Ia, we deprojected and normalized their galactocentric distances as well, i.e. r_SN = R_SN/R25 (see Hakobyan et al. 2016). Based on the host disc classification and the definition of the demarcation radius, we grouped SNe according to their locations as follows: 97 SNe Ia are found in the discs of galaxies without a bar or SFD; 61 SNe are in the outer discs of hosts that have either a bar or an SFD; 13 SNe are found in bar or star-forming regions inside r_dem; and 14 SNe Ia are in SFDs. Table 2 displays the distribution of the SN Ia subclasses according to their locations in the SFD or beyond.

Notes to Table 1: The P^MC_KS (P^MC_AD) is the two-sample KS (AD) test probability that the distributions are drawn from the same parent sample, using a Monte Carlo (MC) simulation with 10^5 iterations, as explained in H20. The respective mean values and standard errors are listed. Statistically significant differences (P ≤ 0.05) between the distributions are highlighted in bold.
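The deprojection and normalization described above amount to rotating the SN's sky offsets into the galaxy frame and stretching the minor-axis component by 1/cos i. A Python sketch under common conventions (position angle measured from north through east); the exact convention of Hakobyan et al. (2016) should be checked before production use, and the input values are hypothetical:

```python
# Deprojected, R25-normalized galactocentric distance of an SN.
import numpy as np

def r_sn(d_ra_arcsec, d_dec_arcsec, pa_deg, incl_deg, r25_arcsec):
    """Offsets (east, north) from the nucleus -> deprojected r_SN = R_SN/R25."""
    pa, incl = np.radians(pa_deg), np.radians(incl_deg)
    # Rotate sky offsets into the galaxy frame: U along the major axis,
    # V along the minor axis (V is foreshortened by cos i on the sky).
    u = d_ra_arcsec * np.sin(pa) + d_dec_arcsec * np.cos(pa)
    v = -d_ra_arcsec * np.cos(pa) + d_dec_arcsec * np.sin(pa)
    r = np.hypot(u, v / np.cos(incl))    # deprojected galactocentric distance
    return r / r25_arcsec                # normalized to the disc radius

print(r_sn(d_ra_arcsec=20.0, d_dec_arcsec=-15.0,
           pa_deg=35.0, incl_deg=50.0, r25_arcsec=90.0))  # hypothetical SN
```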
RESULTS AND DISCUSSION

With the aim of linking the ∆m15 of SN Ia with the progenitor age, we study the decline rates of SNe that exploded in SFDs and in other regions of hosts. In addition, we compare the SN galactocentric distances between the spectroscopic subclasses, and check the possible correlations between the ∆m15 and galactocentric distances.

SNe Ia in the SFDs and beyond

To link the LC properties of SN Ia with the progenitor age from the perspective of the dynamical age constraint of the SFD, in Table 3 we compare the ∆m15 distribution of normal SNe Ia in the SFD with that in the bar/SF (see also the upper panel of Fig. 2 and Table 2). [Notes to Table 3: Since r_dem = 0.30 for class ii-iv discs, we define inner and outer class i discs when r_SN < 0.30 and ≥ 0.30, respectively. The explanations of the P-values are the same as in Table 1.] The KS and AD tests show that these distributions are significantly different. Normal SNe Ia that are in the SFD, dominated by the old population (≳2 Gyr; Donohoe-Keyes et al. 2019), have, on average, faster declining LCs compared with those located in the bar/SF, where UV/Hα fluxes are observed (i.e. age ≲ a few 100 Myr; Kennicutt 1998). Table 3 also shows that the ∆m15 distribution of normal SNe Ia in the outer disc population is consistent with that in the bar/SF and inconsistent with that in the SFD (see also Fig. 2). Interestingly, any inconsistency vanishes when we combine the bar/SF and SFD subsamples and compare the LC decline rates with those of the outer disc population (Table 3). This suggests that the discs of Sa-Scd hosts are indeed outnumbered by normal SNe Ia with slower declining LCs (e.g. ∆m15 < 1.25, outside r_dem in Fig. 2), whose progenitor ages peak below 1 Gyr, corresponding to the young/prompt SNe Ia (e.g. Childress et al. 2014). In addition, even for discs of class i (without a demarcation radius), the KS and AD tests in Table 3 show that the ∆m15 distributions are consistent for normal SNe Ia in the inner and outer discs, excluding a radial dependency of ∆m15 (see also Section 3.2). For class i discs, the ∆m15 values are sufficiently consistent with the same values in the corresponding radial intervals for hosts having a demarcation radius (see Table 3). Thus, the SFD phenomenon gives an excellent possibility to separate a subpopulation of normal SNe Ia with old progenitors from the general population of the host galactic disc, which contains both young and old progenitors. On average, the LCs of this SN Ia subpopulation decline faster, and their DTD is most likely truncated on the younger side, starting from several Gyr (≳2 Gyr). These results qualitatively agree with the theoretical predictions. In particular, for sub-Chandrasekhar mass (M_Ch ≈ 1.4 M⊙) explosion models in double WD systems, the luminosity of an SN Ia is directly related to the exploding WD's mass, which decreases with age (e.g. Sim et al. 2010; Blondin et al. 2017; Shen et al. 2017, 2021). This is because the WD's mass is directly linked to the main-sequence (MS) mass of the progenitor star, which is in turn related to the MS lifetime. Therefore, older stellar populations would host less luminous SNe Ia, i.e. faster declining events (e.g. Shen et al. 2017). Note that we prefer sub-M_Ch explosion models, because different mechanisms of the M_Ch explosions do not reproduce the observed distribution in the luminosity-decline rate relation for various SN Ia subclasses (e.g. Livio & Mazzali 2018 and references therein). Despite the small-number statistics of peculiar 91T- and 91bg-like SNe, Table 2 shows that the old SFDs of Sa-Scd galaxies host, along with faster declining normal SNe Ia, two 91bg-like (fast declining) events, while the bar/SF regions host, along with slower declining normal events, one 91T-like (slow declining) SN. The outer disc population hosts all the SN Ia subclasses (see Table 2).
The latter is also true for the entire class i disc. These results can be explained from the perspective of the SFD's properties, in addition to the previously known relations between SNe Ia and the global (or SN local) properties of their hosts. In particular, the discovery of 91bg-like events (progenitor age greater than several Gyr, e.g. Crocker et al. 2017; Panther et al. 2019; Barkhudaryan et al. 2019; H20) and a population of faster declining normal SNe Ia in the SFDs can be explained within the scenario of SF suppression by the bar, where the SFDs of galaxies show a sharp truncation in their SF histories and contain mostly old stellar populations of several Gyr (≳2 Gyr; Donohoe-Keyes et al. 2019). On the other hand, the discovery of 91T-like SNe (progenitor age less than a Gyr, e.g. Han & Podsiadlowski 2004; Ruiter et al. 2013; Fisher & Jumper 2015) and a population of slower declining normal events in the bar/SF (Tables 2-3) can be explained in the context of the SF suppression scenario, where the recently formed bar, within the ∼1.5 Gyr timescale, has not yet completely removed the gas and quenched ongoing SF inside the demarcation radius (Donohoe-Keyes et al. 2019). Recall that in the bar/SF regions, UV fluxes are observed that trace the SF up to a few 100 Myr (Kennicutt 1998). The outer disc of Sa-Scd galaxies (or the entire class i disc) contains stellar populations of all ages (e.g. González Delgado et al. 2015). Therefore, the appearance of all the SN Ia subclasses in this region is not unexpected (Table 2). Note that the results in Table 3 remain statistically unchanged when we combine (following Shen et al. 2017) normal, 91T-, and 91bg-like SNe together. To test different galaxy properties that could affect the results in Tables 2-3, we compare the distributions of morphologies, masses, colours, and ages (available in H20) of class ii-iv hosts with and without SFDs. The KS and AD tests show that the global parameters of the hosts are not statistically different (P > 0.1), and thus could not be the main drivers behind our results. On the other hand, the bar/SF regions have higher surface brightness and dust content in comparison with the SFDs, and therefore the discovery of intrinsically faint (faster declining) SNe in the bar/SF can be complicated, biasing the statistical results in Table 3. However, this does not affect the result that the SFD's SNe Ia are mostly faster declining (fainter) events. Let us now briefly address the possible effects of the progenitor metallicity, which theoretically might cause a variation in the SN Ia LC properties. The mean radial metallicity profile of Sa-Scd galaxies declines from solar at the galactic centre to ∼0.3 dex below solar at the disc end (e.g. González Delgado et al. 2015). On the other hand, the simulation by Di Matteo et al. (2013) shows that the metallicity on both sides of the bar, i.e. in the SFD, is only ∼0.15 dex below solar. For any progenitor model, such metallicity variations can account for less than 0.2 mag in SN Ia maximum brightness and about 0.1 mag in ∆m15 (e.g. Timmes, Brown & Truran 2003; Kasen, Röpke & Woosley 2009), which is not enough to be the main reason for the observed differences in ∆m15 values in the SFD and beyond (Fig. 2 and Table 3). Thus, our results support earlier suggestions that the progenitor age is most probably the decisive factor shaping the observed distribution of SN Ia decline rates (e.g. Gallagher et al. 2005).
Nevertheless, we would like to stress that the discussed effect of metallicity is heavily based on a very limited number of models. Therefore, further modelling of the impact of metallicity on the LC properties of SNe Ia would help to place our findings in context.

The radial distribution of SNe Ia

In spiral discs, a radial gradient of the physical properties of the stellar population (e.g. the age gradient; González Delgado et al. 2015) might be a useful tool and has been used in the past to probe possible dependencies of SN Ia decline rates on galactocentric distance (e.g. Gallagher et al. 2005; Galbany et al. 2012; Uddin et al. 2017). However, in these studies, the authors were unable to find a significant correlation between the decline rate and r_SN, which is also true for our sample (Table 4). Moreover, the radial distributions of peculiar (extreme decliner) and normal SNe Ia in Sa-Scd galaxies are consistent with one another (P > 0.2) (e.g. Pavlyuk & Tsvetkov 2016). Note that these results remain statistically insignificant (P > 0.1) when we perform the same tests after separating the hosts into barred/unbarred and early/late types. The ∆m15 is not correlated with r_SN (P > 0.4) for the class i discs either, where no bar/SFD phenomena are observed. In this context, it should be taken into account that significant correlations between SN Ia decline rates (stretch parameters) and the global ages of hosts have been observed when the ages range from about 1 to ∼10 Gyr (e.g. Gupta et al. 2011; Pan et al. 2014; Campbell et al. 2016; H20). In the stacked discs of Sa-Scd galaxies, however, the azimuthally averaged age of the stellar population ranges roughly from 8.5 Gyr at the disc edge to 10 Gyr at the centre (e.g. González Delgado et al. 2015). Most likely, this narrow average age distribution across the mean (stacked) host disc does not allow a significant correlation between ∆m15 and r_SN to be seen in Table 4. It is clear that such a mean disc contains overlaid components of old and young stars at any radius. On the other hand, as shown in della Valle & Livio (1994) and Aramyan et al. (2016), a considerable fraction of SNe Ia in spiral galaxies is (observationally) linked to the young/star-forming disc population, rather than to the population of the old disc or bulge. These SNe Ia exhibit an average delay time of 200-500 Myr (prompt events, e.g. Raskin et al. 2009) and should have slower declining LCs (smaller ∆m15 values, e.g. Shen et al. 2017). For this reason, the SN Ia host disc is outnumbered by slower declining events outside the SFD (Fig. 2). Given these results, we check the ∆m15 versus r_SN/r_dem correlation for normal SNe Ia in the SFD+outer disc, bar/SF+outer disc, and combined samples. Table 4 shows that this correlation is statistically significant for the first sample, while it is not significant for the second and combined samples. Thus, the old SFD population (≳2 Gyr), which contains mostly faster declining SNe Ia (larger ∆m15), in combination with the younger outer disc, which is outnumbered by SNe Ia with slower declining LCs (smaller ∆m15), causes the observed trend in the SFD+outer disc (Fig. 2 and Table 4).
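The correlation checks reported in Table 4 can be reproduced with a rank correlation such as Spearman's. A minimal Python sketch on hypothetical (∆m15, r_SN/r_dem) pairs, not the paper's data:

```python
# Rank correlation between decline rate and normalized galactocentric
# distance, as in the Table 4 checks. The data pairs are hypothetical.
from scipy.stats import spearmanr

dm15 = [1.45, 1.60, 1.10, 0.95, 1.35, 1.05, 1.70, 1.00]   # mag
r_over_rdem = [0.4, 0.2, 2.1, 2.8, 0.8, 2.4, 0.3, 1.9]    # r_SN / r_dem

rho, p = spearmanr(dm15, r_over_rdem)
print(f"Spearman rho = {rho:.2f}, P = {p:.3f}")
# A significant negative rho would mirror the SFD+outer disc trend:
# faster decliners (larger dm15) concentrated at small r_SN/r_dem.
```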
CONCLUSIONS

In this Letter, using a sample of nearby Sa-Scd galaxies hosting 185 SNe Ia and our visual classification of the ionized (UV and/or Hα) discs of the galaxies, we perform an analysis of the locations and LC decline rates (∆m15) of normal and peculiar SNe Ia in the SFDs and beyond. As in earlier studies, we confirm that in the stacked spiral disc, the ∆m15 of SNe Ia do not correlate with their galactocentric radii, and that such discs are outnumbered by slower declining/prompt events. For the first time, we demonstrate that, from the perspective of the dynamical timescale of the SFD, its old stellar population (≳2 Gyr) hosts mostly faster declining SNe Ia (∆m15 > 1.25). By linking the LC decline rate and progenitor age, we show that the SFD phenomenon gives an excellent possibility to constrain the nature of SNe Ia. We encourage further analysis (e.g. integral field observations) using the SFD phenomenon on larger datasets of SNe Ia and their host galaxies to better constrain SN Ia progenitor ages.

SUPPORTING INFORMATION

Supplementary data are available at MNRAS online. Table A1. The database of 185 individual SNe Ia and their 180 host galaxies. Please note: Oxford University Press is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.

APPENDIX A: ONLINE MATERIAL

The database of our analysis is available in the online supplementary material of the Letter. The first 10 rows of the database of 185 SNe Ia (SN name, location, deprojected and R25-normalized galactocentric distance) and their 180 hosts (galaxy name, morphological type, bar detection, disc class, and demarcation radius) are shown in Table A1. The full table is available in CSV format. Recall that more data on these SNe Ia and their host galaxies are available in H20 (e.g. SN spectroscopic subclass, ∆m15, galaxy distance). This paper has been typeset from a TeX/LaTeX file prepared by the author.
2021-03-05T02:41:24.237Z
2021-03-04T00:00:00.000
{ "year": 2021, "sha1": "18d777f039161ca6414e93befb0e8dc3d82e90ce", "oa_license": null, "oa_url": "https://academic.oup.com/mnrasl/article-pdf/505/1/L52/38338048/slab048.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "fc7b600c76739351c9af5742273dcdd67b5f0b8f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
3672812
pes2o/s2orc
v3-fos-license
NEAR REAL-TIME DETERMINATION OF EARTHQUAKE SOURCE PARAMETERS FOR TSUNAMI EARLY WARNING FROM GEODETIC OBSERVATIONS Identifying the tsunami source immediately after an earthquake is the most critical component of tsunami early warning, as not every earthquake generates a tsunami. After a major undersea earthquake, it is very important to determine whether or not it has actually triggered the deadly wave. Near real-time observations from near-field networks, such as strong motion and Global Positioning System (GPS) networks, allow rapid determination of the fault geometry. Here we present a complete processing chain of the Indian Tsunami Early Warning System (ITEWS), starting from the acquisition of raw geodetic data, through processing and inversion, to simulating the situation as it would be at the warning center during any major earthquake. We determine the earthquake moment magnitude and generate the centroid moment tensor solution using a novel approach; these are the key elements for tsunami early warning. Though the well established seismic monitoring network, numerical modeling and dissemination system are currently capable of providing tsunami warnings to most of the countries in and around the Indian Ocean, the study highlights the critical role of geodetic observations in the determination of the tsunami source for high-quality forecasting. INTRODUCTION 1.1 The Indian Tsunami Early Warning Centre The Indian mainland and islands are located in a zone of significant seismic activity, where many earthquakes and accompanying tsunamis have been observed and recorded. Two subduction zones, the Andaman-Nicobar-Sumatra island arc and the Makran region, have been identified as tsunamigenic zones in the Indian Ocean based on historical tsunamis, earthquakes, and their locations on fault lines. A great shallow-focus earthquake of magnitude Mw 9.2 occurred on 26 December 2004 on the Andaman-Sumatra subduction zone and generated a massive tsunami that caused extensive destruction all along the Indian mainland and the Andaman & Nicobar Islands. In response, India has successfully set up a tsunami warning centre for the Indian Ocean at the Indian National Centre for Ocean Information Services (INCOIS), Hyderabad. From its inception in October 2007 until April 2016, the Indian Tsunami Early Warning Centre (ITEWC) successfully monitored 502 earthquakes of M ≥ 6.5, of which 83 were in the Indian Ocean region (both on land and undersea). For all these major events in the Indian Ocean, timely advisories were generated based on estimated wave arrival times and wave heights, later revised based on water level observations, and the stakeholders were informed through subsequent bulletins. This approach avoided false alarms and unnecessary evacuations of people from the coastal areas. Though public awareness has increased dramatically since the 2004 tsunami, we still hear about death tolls caused by tsunamis. One of the most critical aspects of a tsunami warning system is the quick estimation of earthquake parameters with reasonable accuracy in the shortest possible time. The diametrically opposite behaviours, in terms of tsunamis, of recent large earthquakes (Tohoku-oki (2011) in the Pacific and Northern Sumatra (2012) in the Indian Ocean) demand the development of modern and robust tsunami early warning systems.
Monitoring the crustal deformation in real-time makes it feasible to achieve rapid estimation of actual earthquake scales, since the measured permanent displacement directly gives the true size of the earthquake through the seismic moment, which in turn can be used for tsunami warning. The real-time deformation monitoring technique is based on near-field Global Positioning System (GPS) measurements of co-seismic displacements. Using data from coastal GPS stations near the epicenter, the new method estimates the energy transferred by an undersea earthquake to the ocean to generate a tsunami (Sobolev et al., 2006). Recent analysis showed that by using GPS displacements, it is possible to calculate how far the stations moved because of the quake, which in turn helps in deriving an earthquake's true size, the moment magnitude. This magnitude is directly related to an earthquake's potential for generating tsunamis (Blewitt et al., 2009; Singh et al., 2012). This method allows the rapid estimation of seismic moment tensor solutions and the earthquake source determination in the shortest possible time compared to the traditional approaches. Background & General approach The conventional method of estimating the tsunamigenic potential of an earthquake involves two main steps: first, estimating the earthquake magnitude using seismic inversion methods; and second, observing the sea level changes using Bottom Pressure Recorders (BPRs) and tide gauges. The drawback of this method is that it takes minutes to hours to accurately estimate the magnitude, Mw, which is directly related to displacement. Moreover, earthquake magnitude is not always a reliable indicator of tsunami potential. For example, the 11 March 2011 Tohoku-oki earthquake of magnitude Mw 9.0 was initially underestimated by the Japan Meteorological Agency (JMA), which is inarguably one of the most advanced and experienced centres for earthquake early warnings as well as for tsunami early warnings. The earthquake detection centers elsewhere also initially estimated much lower magnitudes (Mw 7.9-8.0) for this event. The magnitude of the earthquake was initially underestimated by one to two magnitude units (M 7.2 after 8.6 s, revised to M 8.1 after 116.8 s) (Hoshiba et al., 2011), which in turn underestimated the expected tsunami wave height as 3.0-6.0 m. In reality, the sudden sea-floor displacement generated a massive tsunami that overtopped the tsunami protection walls and broke through as far as 10.0 km inland along the coast. Though JMA could issue the warnings within 3 minutes, these were unfortunately based on a gross underestimation of the magnitude (M 8.6 instead of M 9.0). An accurate estimate of the size of the earthquake could have resulted in an accurate estimate of the tsunami wave height; the actual wave height reported from the areas adjacent to Tohoku was 39.7 m at Miyako.
On the other hand, in the case of the Northern Sumatra earthquake of magnitude Mw 8.5 on 11 April 2012, only a small ocean-wide tsunami (~30 cm at Sabang, Indonesia) was generated, in contrast to the initially estimated wave heights of 6.0-8.0 m. Later, when more data became available, it was realized that it was a strike-slip earthquake, even though the magnitude was high. A strike-slip earthquake generates very little or no vertical ground motion and hence avoids the sudden disturbance of the water column essential for the generation of a tsunami. Similar was the case with the earthquake (M 8.6) of 28 March 2005 in Nias, Indonesia. That event, too, did not generate a sizable tsunami as expected. Rather, it generated a relatively small tsunami that caused very little damage (Konca et al., 2007). To overcome such difficulties and to understand the fault geometry that governs tsunamis, it is essential to estimate seismic moment tensor solutions. However, the moment tensor solution estimate requires a large amount of data that becomes available only after a certain amount of time. A long wait for sufficient data to make a decision at the warning centre is unfeasible, as the warning centre is expected to provide warnings at the earliest. Often, the procedure to predict tsunami wave height and travel time assumes the worst cases, represented by pure dip-slip reverse fault mechanisms, and might overestimate cases that deviate from such scenarios, especially the strike-slip cases. ESTIMATION OF SOURCE PARAMETERS USING GEODETIC OBSERVATIONS
Data The GPS stations operated by the GPS Earth Observation Network (GEONET) of Japan recorded the deformation caused by the 11 March 2011 Tohoku earthquake. To generate static solutions we used the GAMIT/GLOBK software package (Herring et al., 2005; King and Bock, 2005). We also used the processed data from the Advanced Rapid Imaging and Analysis (ARIA) project at the NASA Jet Propulsion Laboratory and Caltech (Owen et al., 2011), which indicated large-scale ESE seaward displacements as large as 5.2 m horizontally and 1.1 m vertically downward. These data indicated a very large coseismic rupture offshore and were subsequently validated by later available seafloor geodetic observations using the GPS/acoustic combination technique at five sites, which measured between 5 and 24 m of east-southeast horizontal motion and between -0.8 and 3 m of uplift (Sato et al., 2011). The displacements measured in these data sets are used for comparison with our final results. Methodology In our approach, we solve for fault slip with no a priori information on the fault geometry. We start the procedure by computing Green's functions to relate the deformation at depth to that at the surface. We continuously invert the coseismic displacements for moment tensors at grid points representing virtual sources distributed over a region. At each station, the data u are represented as the convolution of the Green's function tensor G, describing the displacement propagation between the source and the receiver, and the moment tensor components m of the source. We extract the Green's functions using EDGRN/EDCMP (Wang et al., 2003) and set up the kernel matrix G for the inversion: u_i^k = Σ_j G_ij^k m_j (1), where i = x, y, z; j = 1, 2, 3, 4, 5; and k = 1, 2, …, N stations. The least squares solution is obtained by m = (G^T G)^-1 G^T u (2), where the (G^T G)^-1 G^T matrix for each point source can be computed in advance, knowing a priori the grid distribution and the set of pre-selected GPS stations at that location. However, we apply weighting based on the distance from the source to the receiver. An additional weighting is applied based on the displacement at each station. In a real-time scenario, the inversion is performed only when we have significant coseismic displacements from at least four GPS stations, to avoid false alarms, and the inversion is restricted to the deviatoric part, so that the moment tensor M is composed of only five independent components. The inversion method solves for the hypocentre, strike, dip, rake and magnitude of the event using the coseismic displacements, as outlined in Melgar et al. (2012). The type of faulting and the geographical extent of rupture are determined by a finite fault slip model (Figure 2) (Crowell et al., 2012) while calculating the finite extent CMT solution. The seismic moment M_o is computed using the scaled Frobenius norm of the moment tensor (Silver et al., 1982), and the moment magnitude M_w is estimated using the Hanks & Kanamori (1979) relationship: M_o = (Σ_ij M_ij^2 / 2)^(1/2), M_w = (2/3) log10(M_o) - 6.07 for M_o in N m (3). Figure 1: Observed vs synthetic displacements for a test case. The most important implication of the new methodology is the speed with which we can obtain the basic earthquake source parameters compared with traditional seismic methods. However, it is beneficial to have prior knowledge of the geometry of the subduction zone in a real-time scenario.
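As a concrete illustration of the inversion step just described, the following is a minimal sketch, not the operational ITEWS code; the station count, weights, and the packing of the five deviatoric components into a tensor are assumptions for illustration.

```python
# Sketch of the grid-point moment-tensor inversion described above.
import numpy as np

def invert_point_source(G, u, w):
    """Weighted least squares m = (G^T W G)^-1 G^T W u for one virtual source."""
    W = np.diag(w)                       # distance/displacement-based weights
    GtW = G.T @ W
    return np.linalg.solve(GtW @ G, GtW @ u)   # 5 deviatoric MT components

def moment_magnitude(M):
    """M0 from the scaled Frobenius norm (Silver & Jordan, 1982) and
    Mw from Hanks & Kanamori (1979), with M0 in N*m."""
    M0 = np.sqrt(np.sum(M**2) / 2.0)
    Mw = (2.0 / 3.0) * np.log10(M0) - 6.07
    return M0, Mw

# Hypothetical demo: 10 stations (30 displacement components), one source.
rng = np.random.default_rng(1)
G = rng.normal(size=(30, 5))                      # stand-in for EDGRN kernels
m_true = np.array([1e19, -5e18, 2e18, 3e18, -1e18])   # N*m
u = G @ m_true
m = invert_point_source(G, u, w=np.ones(30))
# One possible packing of the 5 components into a symmetric, trace-free
# tensor (conventions vary between codes):
M = np.array([[m[0], m[3], m[4]],
              [m[3], m[1], m[2]],
              [m[4], m[2], -(m[0] + m[1])]])
print("Mw = %.2f" % moment_magnitude(M)[1])
```

As noted above, the (G^T W G)^-1 G^T W factor for each virtual source can be precomputed since the grid and station set are fixed; when prior knowledge of the subduction-zone geometry is available, the grid itself can also be restricted, as described next.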
In such a case, the relevant segment of the slab is extracted from the regional model based on the moment release of the finite extent CMT. For major earthquakes, such as those of magnitude ≥ 8, we invert for moment tensors only at the grid points that are distributed in the slab. This significantly lowers the number of calculations and permits us to focus on the most hazardous region of the grid. It is then possible to consider earthquake scenarios in advance and to invert for them in real-time. CASE STUDY: THE 2011 TOHOKU-OKI EARTHQUAKE To test the approach, we apply the proposed methodology to the great earthquake of March 11, 2011 at Tohoku-Oki, Japan. The megathrust earthquake ruptured the entire width of the subduction zone offshore of northeastern Japan, producing many geophysical observations that have been rapidly analyzed to constrain the rupture process. Over 400 high-rate three-component GPS-estimated ground motion time series were made available courtesy of GPS-Solutions. The ground motion records were obtained from 1 Hz GEONET data, provided by the Geospatial Information Authority of Japan (GSI). For this great earthquake, the proposed method is very stable and provides robust centroid location and focal mechanism estimates. The obtained results are compared with the ARIA solutions to evaluate the final solution. Green's functions were computed from EDGRN on a 10 km horizontal and vertical grid using a four-layer velocity model. In this test case of the Tohoku-oki earthquake, a regional 3D slab model for the subduction zone is established a priori from Hayes et al. (2012). With this approach, the model is determined 157 s after the earthquake origin time (Figure 3) (Melgar et al., 2013b) and, in the timeline of warnings, provides the first estimate of the slip distribution. CONCLUSION As illustrated above, traditional methods of earthquake magnitude estimation based only on seismic data, and the resulting predictions of tsunami wave heights, can go wrong if the earthquake mechanism is not taken into account in addition to its magnitude. This is more serious for near-source regions like the Andaman and Sumatra coasts, as they lie very close to the subduction zone and the available time for warnings and response is too short. This was precisely the limitation faced during the 2011 Tohoku earthquake and the 2012 Northern Sumatra earthquake, which necessitated the development of new tools and techniques for determining the true size of an undersea earthquake and the actual displacement of the ground. Such techniques call for receiving and analyzing data from multiple sensors, like seismometers, GPS sensors, strong motion sensors, etc., in real time. Here we show that near-source real-time GPS measurements fill an important gap for early earthquake detection, characterization, and rapid response for major and great earthquakes where significant fault rupture occurs and tsunamigenic potential exists. The inverse method performs well for the Tohoku-Oki earthquake and has the benefit of requiring no a priori information on fault geometry, making it the ideal method in complex tectonic environments such as the Sunda subduction zone. We obtained sensible results for the example case in terms of slip, magnitude, and rake estimates, and an order of magnitude improvement in speed compared to existing seismic methods for monitoring large earthquakes in the near field, thereby allowing for more effective earthquake response and tsunami warning. Table 1: Comparison of the new approach and Global CMT. The results of the applied methodology and the Global CMT solution are compared in Table 1.
2018-03-04T00:18:35.400Z
2016-06-22T00:00:00.000
{ "year": 2016, "sha1": "eee88ee878f2623b198fcf928bb1ffe7266eb4b4", "oa_license": "CCBY", "oa_url": "https://isprs-archives.copernicus.org/articles/XLI-B8/117/2016/isprs-archives-XLI-B8-117-2016.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "eee88ee878f2623b198fcf928bb1ffe7266eb4b4", "s2fieldsofstudy": [ "Environmental Science", "Geology" ], "extfieldsofstudy": [ "Geology" ] }
247001041
pes2o/s2orc
v3-fos-license
Risk factors for epiretinal membrane in eyes with primary rhegmatogenous retinal detachment that received silicone oil tamponade Background/aims This study investigated the risk factors for epiretinal membrane (ERM) in eyes with primary rhegmatogenous retinal detachment (RRD) that received silicone oil (SO) tamponade. Methods This retrospective analysis included 1140 patients (1140 eyes) with RRD who underwent primary vitrectomy and SO tamponade. The prevalence of ERM was estimated and possible risk factors (eg, type 2 diabetes, proliferative vitreoretinopathy (PVR), SO tamponade time (SOTT), photocoagulation, vitreous haemorrhage, choroidal detachment, cryotherapy and retinal tear size) were analysed via multiple logistic regression. Results The prevalence of ERM was 12.3% (140/1140), and the accuracy of preoperative ERM diagnosis was 40.5%. Multivariate logistic regression analysis showed that risk factors for ERM in eyes with SO tamponade included preoperative PVR (OR=4.336, 95% CI 2.533 to 7.424, p<0.001), type 2 diabetes (OR=3.996, 95% CI 2.013 to 7.932, p<0.001), photocoagulation energy (OR=1.785, 95% CI 1.306 to 2.439, p<0.001) and SOTT (OR=1.523, 95% CI 1.261 to 1.840, p<0.001). No statistically significant associations were observed between the incidence of ERM and other risk factors. Preoperative PVR showed the strongest association with risk of ERM. The risk of ERM was positively associated with SOTT, photocoagulation energy and preoperative PVR grade. Conclusion In eyes with RRD that received SO tamponade, the prevalence of ERM was 12.3%, while the accuracy of preoperative ERM diagnosis was low. Preoperative PVR, type 2 diabetes, photocoagulation energy and SOTT were the main risk factors for ERM. INTRODUCTION Silicone oil (SO) is a tamponade agent used in retinal detachment repair to help detached retinas heal. Unlike long-acting gases, SO does not spontaneously reabsorb; therefore, it requires a second surgery for removal. However, the advantages of SO over long-acting gases include no air travel restriction and avoidance of the requirement for strict prone positioning. 1 Because of SO pressure, the detached retina can remain appropriately reattached for an extended duration after surgery; this approach is now widely used in various vitreoretinal surgeries. During SO tamponade, patients can experience hyperopia, as well as various pathological complications, such as epiretinal membrane (ERM), SO maculopathy, SO emulsification, SO migration, cataracts, glaucoma, corneal lesions, or re-detachment of the retina. [2][3][4][5][6] Corneal oedema is noted after SO removal in eyes with SO touch when the aqueous layer comes back into contact with the damaged corneal endothelium. 7 Sachdeva et al reported that SO, used as an adjunct to retinal detachment repair, is involved in the formation of proliferative vitreoretinopathy (PVR). 8 Because there is increasing evidence of possible detrimental effects caused by SO endotamponade, safety studies are required. [9][10][11][12] Importantly, we speculate that the onset of ERM is not solely caused by SO. Age, smoking and other factors have also been identified as risk factors for ERM. [13][14][15] Although there is no obvious explanation, we have encountered many patients with retinal detachment in whom SO tamponade used to help the retina heal was followed by the formation of a preretinal proliferative membrane.
However, operative and baseline characteristics (eg, age, diabetes and hypertension) can differ among SO tamponade procedures; thus, there is a need to analyse the operative and baseline conditions of patients who have eyes with SO tamponade and ERM to obtain reliable results. The formation of a proliferative membrane in front of the retina is generally not well understood. Here, we compared the baseline and operative conditions of patients who had eyes with SO tamponade, without and with ERM. The purpose of this study was to determine the risk factors for ERM in eyes with primary rhegmatogenous retinal detachment (RRD) that received SO tamponade. MATERIALS AND METHODS In this retrospective cohort study, we reviewed all medical records of patients with primary RRD who underwent vitrectomy and SO tamponade in our hospital from June 2017 to February 2020. Patients with varying degrees of PVR were also included. Exclusion criteria included history of trauma, history of severe eye infections or inflammatory disease, diabetic retinopathy, type 1 diabetes and severe data loss. Preoperative data were obtained from medical records, including name, age, gender, medical history, visual acuity, preoperative vitreous haemorrhage (VH), PVR grade, lens status, surgical procedure and surgical parameters, SO tamponade time (SOTT), best-corrected visual acuity before and after surgery, preoperative choroidal detachment (CD), preoperative and postoperative intraocular pressure, and perioperative complications. Postoperative data included visual acuity, intraocular pressure, morphology of the macular area and postoperative complications at 3 months after SO removal. The surgery was performed by two experienced vitreoretinal surgeons. We used a 23-gauge vitrectomy system to remove SO from the vitreous cavity. If an ERM was present, we used 23-gauge forceps to remove the ERM. Furthermore, if the ERM involved the macular area, we also removed the internal limiting membrane with the aid of indocyanine green staining. In accordance with the surgeon's judgement, RT SIL-OL 5000 (5000 cSt; Carl Zeiss Meditec AG, Germany) SO was used for retinal detachment. If a patient was required to undergo multiple vitreoretinal surgeries, all surgeries were performed by the same surgeon. The primary endpoint of measurement was the macular condition at 3 months after the last SO removal. The presence of a proliferative membrane in front of the retina was determined during the oil extraction surgery. In eyes with SO tamponade, the following factors were evaluated: SOTT, preoperative CD, presence of VH before and after the first surgery, photocoagulation energy during surgery, number of photocoagulation points, whether electrocoagulation was performed, and whether cryotherapy was performed. Photocoagulation energy was divided into four groups according to the energy used during the first operation: first level, 120-165 mW; second level, 166-210 mW; third level, 211-255 mW; and fourth level, 256-300 mW. The condition of the retina after SO tamponade was evaluated by a trained professional. A proliferative membrane found in the macular area, the fovea, the peripheral retina, or any other area was defined as ERM. Before and after surgery, macular optical coherence tomography was used to observe the morphology of the macular area.
A panoramic 200 scanning laser ophthalmoscope (Opel) was used to observe the state of the retina, and a B-scan ultrasound was used to confirm the retinal morphology and eyeball state after surgery. All patients underwent medical optometry and intraocular pressure examinations before and after surgery. Statistical analysis was performed using SPSS Statistics V.24.0. The Kolmogorov-Smirnov test was used to determine whether continuous numerical variables exhibited normal distributions. Univariate analysis of categorical variables was performed by the χ2 test or Fisher's exact test. Univariate analysis of continuous variables was performed using the Wilcoxon rank-sum test; Student's t-test was used to compare the mean values of normally distributed variables. Logistic regression analysis was used to determine the risk factors for ERM formation. Stepwise regression analysis was used to rule out the effects of collinearity of related factors prior to the final multivariate logistic regression analysis. Statistical significance was determined using a threshold of p<0.05. General results In total, 1446 eyes with SO tamponade in 1446 patients were reviewed. Sixty eyes were excluded because of a history of trauma, 144 eyes were excluded because of diabetic retinopathy and 102 eyes were excluded because of a history of serious eye infections or inflammatory diseases. Thus, 1140 eyes with primary RRD were included in the analysis. The incidence of ERM in all 1140 eyes with SO tamponade was 12.3% (140/1140). The success rate of the first operation was 94.1% (1073/1140), and recurrent retinal detachment was found in 67 eyes (5.9% of 1140) during SO removal surgery. After the recurrent retinal detachment had been repaired, gas (C3F8) tamponade was performed in 60 eyes (90.0% of 67), and all eyes were cured. The remaining seven eyes (10.0% of 67) received SO tamponade; all eyes were cured after SO removal 3 months later. Baseline data analysis results Because some data were missing for 457 eyes, 683 eyes were included in the baseline data analysis (figure 1). The mean follow-up interval for all patients was 12±6 months. Furthermore, 79 eyes with ERM (11.6% of 683) were intraoperatively diagnosed using the operating room microscope, while only 32 eyes with ERM (4.7% of 683) were preoperatively diagnosed using optical coherence tomography and Opel (online supplemental table 1). The incidence of ERM significantly differed between the operating room microscope and the optical coherence tomography/Opel diagnostic methods (χ2 test, p<0.001). Among the 97 eyes with macular ERM, 76 eyes underwent internal limiting membrane (ILM) peeling during the original operation because the ERM involved the macula. Recurrent macular ERM was found in five of these eyes (6.6% of 76) during SO removal surgery. Among the other 21 eyes (21.6% of 97) that did not undergo ILM peeling during the original operation, only 1 eye (4.8% of 21) exhibited macular ERM during SO removal surgery. The incidence of recurrent macular ERM was similar between the two groups (Fisher's exact test, p=0.357). SOTT, photocoagulation energy and number of photocoagulation points were all positively associated with the incidence of ERM (p<0.001). There were no statistically significant associations of ERM with postoperative VH, preoperative CD or retinal tear size (table 1). Multivariate logistic regression analysis results To identify risk factors for the formation of ERM, 1140 eyes were included in the logistic regression analysis.
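As an illustration of the modelling step described above (and continued below), the following is a minimal sketch, not the authors' SPSS workflow; the dataset is synthetic, and the column names and effect sizes are illustrative assumptions only.

```python
# Sketch of a multivariate logistic regression for ERM risk factors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1140
df = pd.DataFrame({
    "pvr_grade": rng.integers(0, 4, n),          # preoperative PVR grade
    "type2_diabetes": rng.integers(0, 2, n),
    "photocoag_level": rng.integers(1, 5, n),    # energy level 1-4
    "sott": rng.uniform(3, 12, n),               # SO tamponade time, months
})
logit = (-5 + 1.5 * (df["pvr_grade"] > 0) + 1.4 * df["type2_diabetes"]
         + 0.6 * df["photocoag_level"] + 0.4 * df["sott"])
df["erm"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["pvr_grade", "type2_diabetes", "photocoag_level", "sott"]])
fit = sm.Logit(df["erm"], X).fit(disp=False)

# Odds ratios (OR = Exp(B)) with 95% CIs, analogous to the study's Table 2.
ors = np.exp(fit.params).rename("OR")
ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([ors, ci], axis=1))
```

Exactly as in the study, collinear predictors must be screened before interpreting such a fit, which is discussed next.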
Collinearity was suspected among diabetes, preoperative VH, preoperative PVR and other factors; therefore, we used stepwise regression analysis to rule out the effects of collinearity among potentially related factors prior to the final multivariate logistic regression analysis. Finally, preoperative VH was excluded because it demonstrated collinearity with diabetes. The results showed that ERM in eyes with SO tamponade was associated with preoperative PVR (p<0.001), type 2 diabetes (p<0.001), photocoagulation energy (p<0.001) and SOTT (p<0.001). There were no statistically significant associations of ERM with other risk factors (table 2). The risk of ERM was positively associated with preoperative PVR grade, type 2 diabetes, photocoagulation energy and SOTT. Preoperative PVR showed the strongest association with risk of ERM. Eyes with preoperative PVR had a substantially increased risk of ERM (B=1.467, Exp(B)=4.336) (figure 2). DISCUSSION Complications after retinal detachment surgery and their relationship with SO have been extensively investigated, and the incidence of SO-related visual loss is reportedly 30%. 16 SO tamponade may cause ERM formation, leading to recurrent retinal detachment or macular occlusion, which can progress to vision loss. 4 6 8 Xiao et al reported that 9.1% of the general population had some form of ERM. 15 However, the present study showed that this proportion increased to 12.3% in eyes with SO tamponade. Although we cannot yet explain the mechanism underlying ERM formation in SO-filled eyes, our findings indicated that diabetes, preoperative PVR, SOTT, and photocoagulation energy were significant risk factors for ERM. Previous studies generally focused on the progression of diabetes toward diabetic retinopathy and fibroproliferative membrane formation, but did not address the relationship between diabetes as a systemic disease and the formation of ERM. [17][18][19] Patients with diabetic retinopathy were excluded from this study, and the results showed that type 2 diabetes was a significant risk factor for ERM in eyes with SO tamponade. The pathogenesis of ERM may be related to fibrocyte infiltration into vitreous fluid; fibrocytes and tenascin-C reportedly participate in ERM formation in patients with diabetes. 17 18 Hyperglycaemia causes a chain of events that leads to retinal vascular endothelial dysfunction, thus increasing the risk of ERM. 20 Stabilisation of glycaemia with medication, combined with dietary and lifestyle modifications, may reduce this risk. 21 For patients with preoperative PVR before SO tamponade, the reported incidences of postoperative ERM and recurrent retinal detachment are significantly increased. 6 22 In this study, the incidence of ERM in eyes with SO tamponade was strongly positively associated with preoperative PVR grade. ERM formation may be a continuation of previous PVR disease. 22 Extravascular leakage of various growth factors might also contribute to ERM recurrence. 23 Moreover, CD, pigment release during endodrainage, inflammation and other factors are reportedly associated with the incidence of ERM. [22][23][24] Most of these factors are clearly associated with inflammation. Thus, anti-inflammatory strategies (eg, steroid use) may be effective in the prevention of ERM. 25 26 Previous studies reported that ILM peeling is associated with a reduction in the recurrence rate of ERM.
[27][28][29] However, ILM peeling may damage the Müller cells, which are connected to the ILM's basal lamina. [30][31][32][33] Ultrastructural damage to the inner retina caused by ILM peeling may be responsible for increased macular thickness and reduced foveal light sensitivity. 34 In this study, the ILM was removed only when the ERM involved the macula, to prevent recurrence. However, the recurrence rate of macular ERM in eyes that underwent ILM peeling was similar to that in eyes without ILM peeling (6.6% vs 4.8%). Therefore, routine ILM peeling is not recommended in cases with preoperative PVR, as its risks may outweigh its benefits. However, this conclusion needs to be further verified by more rigorously designed controlled studies. SO tamponade facilitates the gradual formation of firm retinal adhesions around tears and prevents fluid from flowing into the breaks. 35 Some researchers presume that SO can temporarily resist retinal contact-induced proliferation and may slow ERM recurrence by limiting the dissemination and circulation of related cells and factors. 22 36 However, other studies have suggested that SO stimulates the release of various mitotic factors. 37 Furthermore, SO bubbles occupy most of the vitreous cavity and may increase proliferation by concentrating active factors near the retina. 22 36 The results of our study showed that longer SOTT was associated with a greater incidence of ERM. Prolonged tamponade causes SO to move into the retina and other ocular tissues, leading to intraocular inflammation and increased intraocular pressure. 24 38 Furthermore, prolonged SOTT leads to a greater abundance of retinoblasts in the RPE, thus increasing the likelihood of ERM formation. We suspect that SO removal at an appropriate time (eg, ≤3 months after the initial surgery) may reduce the incidence of ERM. Retinal laser photocoagulation has been widely used for several decades because it is minimally invasive and can rapidly enhance retinal-choroidal adhesion. 39 The laser can effectively stabilise the retina and allow gradual SO removal. 38 A previous study indicated that broad application of photocoagulation can enhance intraocular inflammation and stimulate intravitreal proliferation, thus aggravating PVR. 4 ERM formation with ILM wrinkling may occur as a late complication of laser photocoagulation 40 ; however, the contributing roles of photocoagulation energy and the number of photocoagulation points remain controversial. Our findings indicate that ERM formation is positively associated with photocoagulation energy, rather than the number of photocoagulation points. We hypothesise that, during retinal self-repair, the accompanying mitosis and energy-induced damage will cause more extensive cell repair, leading to a macrophage-mediated inflammatory response, retinal pigment epithelium proliferation and a substantial Müller cell response; accordingly, proliferative lesions form at photocoagulation sites. 4 41 Therefore, we recommend the avoidance of intraoperative high-energy photocoagulation in eyes with RD. Other risk factors for PVR (eg, cryotherapy, retinal tear size and CD) have been reported, 42-44 but they were not associated with ERM in this study. These discrepancies are presumably because ERM in this study occurred in eyes with SO tamponade, and the inclusion criteria and intraocular environment differed from those of previous studies.
Gupta et al 45 demonstrated that a complete set of preoperative eye examinations is often insufficient to make an accurate diagnosis; this influences the choice of surgical method. In our study, the accuracy of preoperative ERM diagnosis was only 40.5% (32/79). This low accuracy might be attributed to preoperative refractive media opacity in some eyes with SO tamponade, which affects fundus observations. We recommend that surgeons carefully examine the entire retina after SO removal (during the operation) to avoid missing instances of ERM. The main advantages of this study were its large sample size and the comprehensive analysis of multiple factors. The findings provide insights for the diagnosis and treatment of ERM in eyes with SO tamponade. The major limitation of this study was its retrospective design. Further prospective clinical studies are needed to determine when ERM occurs and elucidate its underlying pathogenesis. Additionally, the mean follow-up interval in this study was short (12±6 months). In some patients, retinal detachment may recur several years after the initial surgery, because ERM can occur several years after SO extraction. 46 Beyond this, due to the lack of glycosylated haemoglobin (HbA1c) data in nondiabetic patients, the effect of hyperglycaemia on ERM formation could not be further analysed based on HbA1c level. In conclusion, the prevalence of ERM was 12.3% in eyes with primary RRD that received SO tamponade, and the accuracy of preoperative ERM diagnosis was only 40.5%. The main risk factors for ERM in eyes with SO tamponade were preoperative PVR, type 2 diabetes, photocoagulation energy, and SOTT.
2022-02-21T16:10:04.863Z
2022-02-19T00:00:00.000
{ "year": 2022, "sha1": "9d36361eebe1e571451fc5003730ed47fdfc5b71", "oa_license": "CCBYNC", "oa_url": "https://bjo.bmj.com/content/bjophthalmol/early/2022/02/18/bjophthalmol-2021-320121.full.pdf", "oa_status": "HYBRID", "pdf_src": "BMJ", "pdf_hash": "e3e401b2577a1fe31896e193f8add5250fb41a70", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55676208
pes2o/s2orc
v3-fos-license
A polarisation modulation scheme for measuring vacuum magnetic birefringence with static fields A novel polarisation modulation scheme for polarimeters based on Fabry-Perot cavities is presented. The application to the proposed HERA-X experiment, aiming to measure the magnetic birefringence of vacuum with the HERA superconducting magnets, is discussed. The ellipticity ψ induced on a linearly polarised beam of light with wavelength λ passing through a medium with birefringence Δn and length L, whose axes are defined by the external magnetic field, is ψ = (π L / λ) Δn sin 2φ (1), where φ is the angle between the magnetic field and the polarisation direction. The birefringence predicted by L_EHW is [7][8][9][10][11] Δn = 3 A_e B^2 ≈ 4 × 10^-24 B^2 (2), with B in tesla. Several experiments are underway, of which the most sensitive at present are based on polarimeters with very high finesse Fabry-Perot cavities and variable magnetic fields [12][13][14]. The Fabry-Perot cavity is necessary to increase the optical path L within the magnetic field region, whereas the variable magnetic field is necessary to induce a time dependent effect. Both of these aspects significantly increase the sensitivity of the polarimeters. Ideas to use high field superconducting magnets, such as those used in the LHC and HERA accelerators, have also been suggested, but their use is limited by the difficulty in modulating, in one way or another, their magnetic fields. To work around this problem, proposals of rotating the polarisation have been considered [15], but the presence of the Fabry-Perot cavity, whose mirrors always present an intrinsic birefringence whose induced ellipticity is orders of magnitude larger than the ellipticity due to vacuum magnetic birefringence, has made this idea unfeasible. In this note, a novel modulation scheme is presented that might profitably be employed with large superconducting magnets. Preliminary considerations In a recent workshop in Hamburg [16], a new scheme, presented in this paper, has been suggested to measure the magnetic birefringence of vacuum predicted on the basis of the 1936 effective Lagrangian L_EHW. One of the presentations [17] proposed the idea (called HERA-X: Heisenberg-Euler-biRefringent-ALPS-eXperiment) of making use of the powerful infrastructure of the ALPSIIc set-up [18]: about 5000 T^2 m, which could go up to about 7700 T^2 m if the peak field of 6.6 T is employed. In this configuration the magnetic birefringence would be Δn^(HERA-X) ≈ 10^-22 for the 5.3 T magnetic field. With this birefringence, the maximum ellipticity is ψ^(HERA-X) = 5 × 10^-14 for λ = 1064 nm and L = 177 m. In the usual setups, the magnetic field is modulated to gain sensitivity. In the particular case of the HERA superconducting magnets, the electric current can be modulated at about a millihertz frequency [19]. Let us analyse the measurement scheme of Fig. 1, featuring two crossed polarisers, a variable magnetic field (fixed direction) at a frequency ν_B, and an ellipticity modulator at a frequency ν_m. The purpose of the ellipticity modulator is twofold: it allows heterodyne detection, improving sensitivity by linearising the signal and shifting it to a high frequency, and it permits the distinction between an ellipticity signal and a rotation signal [12].
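Before proceeding, the quoted figures can be checked with a few lines of arithmetic; this is an illustrative sketch (not from the paper), with A_e taken at its standard value and sin 2φ = 1 assumed.

```python
# Quick numerical check of the quoted HERA-X birefringence and ellipticity.
import math

A_e = 1.32e-24   # T^-2, Euler-Heisenberg coefficient (standard value)
B = 5.3          # T, HERA dipole field
L = 177.0        # m, magnetic field length
lam = 1064e-9    # m, laser wavelength

dn = 3 * A_e * B**2           # ~1.1e-22, consistent with Delta_n ~ 1e-22
psi = math.pi * L * dn / lam  # ~6e-14, consistent with psi ~ 5e-14
print(f"Delta_n = {dn:.2e}, psi = {psi:.2e}")
```

With these numbers fixed, the expected signal in the scheme of Fig. 1 can be analysed.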
In this scheme the intensity collected at the photodiode PDE is, at the lowest useful order, I_PDE(t) ≈ I_0 [η^2(t) + 2 η(t) ψ(t)]. The interesting signal is found, in a Fourier transform of the signal from the photodiode, at the two sidebands ±ν_B from the carrier frequency ν_m of the ellipticity modulator. The resulting peak shot-noise sensitivity in such a scheme is S_shot = √(2e / (q I_0)), where e is the electron charge, I_0 is the intensity reaching the analyser, and q is the quantum efficiency of the photodiode. With I_0 = 100 mW and q = 0.7 A/W, the shot-noise peak sensitivity is S_shot ≈ 2 × 10^-9 1/√Hz. (Fig. 1: A simple heterodyne ellipsometer. PDE: extinction photodiode; PDT: transmission photodiode.) Despite the exceptional parameters of the magnetic field of HERA-X, the integration time T to achieve a unitary signal-to-noise ratio remains too long, even supposing shot-noise-limited operation: T ∼ (S_shot / ψ^(HERA-X))^2 ∼ 10^9 s. As mentioned above, further amplification is required. This can be achieved with a Fabry-Perot cavity, which can be thought of as a lengthening of the optical path by a factor N = 2F/π, where F is the finesse of the cavity. The proposed finesse for HERA-X is F = 60,000. With such a finesse, the ellipticity ψ increases by a factor N = 38,000 and the integration time therefore diminishes by a factor N^2. Assuming shot-noise sensitivity, on paper, this device should easily allow the measurement. A problem remains, however, regarding the actual sensitivity that one may reasonably expect to achieve at low frequencies with such a long cavity. Let us consider the experiments on this subject realised so far with a scheme similar to the one proposed for HERA-X [12,[20][21][22][23]. In Fig. 2 we show the noise densities in birefringence, S_Δn = λ S_ψ / (2 F d), measured in these apparatuses as a function of the frequency of the effect. In this formula S_ψ is the ellipticity sensitivity of each experiment, λ is the wavelength, F is the finesse and d the cavity length. Note that the cavity length d has been used instead of the length L of the magnetic region; what is plotted is therefore the best sensitivity in birefringence that could be obtained by the experiments. In the figure we did not report a much worse sensitivity value of the Q&A experiment [13]. The data are fitted with a power function. The message put forward by Fig. 2 is that increasing the effective length (finesse and magnetic field length) does not guarantee the shortening of the necessary integration time to reach a unitary signal-to-noise ratio; seeking the highest finesse possible is not necessarily the optimal choice. Increasing the birefringence modulation frequency seems to be more effective. Furthermore, with lower finesses, the cavity will have a shorter decay time and therefore a higher cutoff frequency, allowing higher modulation frequencies. Figure 2 suggests, therefore, that the finesse of the cavity should be the highest for which the polarimeter is still limited by intrinsic noises (shot-noise, Johnson-noise, etc.). The figure also suggests that it is unlikely that, at 1 mHz, a sensitivity better than the value extrapolated from the power-law fit could be reached. Method In this note, we present a novel modulation scheme that would bring several advantages. This idea has never been tested in a laboratory, but is likely to be more effective than the one described above. In this way one can work at higher frequencies, where the sensitivity is best. In this scheme the magnetic field does not need to be modulated.
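The shot-noise estimates above follow from simple arithmetic; the sketch below (illustrative, not from the paper) reproduces the quoted orders of magnitude.

```python
# Back-of-the-envelope check of the shot-noise sensitivity and integration time.
import math

e = 1.602e-19   # C, electron charge
I0 = 0.1        # W, light power reaching the analyser
q = 0.7         # A/W, photodiode quantum efficiency
psi = 5e-14     # HERA-X single-pass ellipticity

S_shot = math.sqrt(2 * e / (q * I0))   # ~2e-9 1/sqrt(Hz)
T_single = (S_shot / psi) ** 2         # ~1.6e9 s without a cavity

F = 60_000
N = 2 * F / math.pi                    # ~3.8e4 optical-path gain
T_cavity = T_single / N**2             # naive shot-noise-limited estimate
print(f"S_shot = {S_shot:.1e} 1/sqrt(Hz), T = {T_single:.1e} s -> {T_cavity:.1e} s")
```

These orders of magnitude motivate the modulation scheme introduced above, whose optical layout we now describe.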
The scheme consists in introducing a pair of co-rotating half-wave plates L_1 and L_2 inside the Fabry-Perot cavity, as schematically shown in Fig. 3 (proposed modulation scheme; L_{1,2}: rotating half-wave plates; PDE: extinction photodiode; PDT: transmission photodiode). The polarisation within the magnetic field would rotate at twice the frequency of the wave plates and should allow a substantial increase of the modulation frequency of the effect. An important feature of this scheme is that the polarisation direction of the light on the Fabry-Perot mirrors would remain fixed, thereby eliminating the contribution of the ellipticity due to the intrinsic birefringence of the mirrors. Furthermore, the polarisation direction on each mirror could be chosen: the input polariser defines the polarisation direction on the first mirror, whereas on the second mirror the polarisation direction is defined by the relative angle between the axes of the two wave plates. Let us indicate with ν_L the rotation frequency of the wave plates, which we suppose rotate synchronously but are not necessarily aligned to one another. The Jones representation of the electric field at the exit of the cavity is built from the following elements: δ, the round-trip phase acquired by the light between the two cavity mirrors; R and T, the reflectivity and transmissivity of the mirrors; the identity matrix I; the Jones matrix of the magnetic birefringence of vacuum, generating an ellipticity ψ in the polarisation of the light; and the rotating wave plates L_j = R(φ + φ_j) W(α_j) R(-(φ + φ_j)), j = 1, 2. Here W(α) = diag(1, -e^(iα)) represents an imperfect half-wave plate with retardation π + α, R(φ) = ((cos φ, -sin φ), (sin φ, cos φ)) is the rotation matrix, and φ is the variable azimuthal angle of the wave plates: φ(t) = 2πν_L t. The angle φ_2 - φ_1 is the constant relative phase between the slow axes of the two rotating wave plates, and α_{1,2} allow for small deviations from π of the retardation of the two imperfect wave plates. The electric field after the analyser is then obtained by applying H = ((1, iη), (iη, 1)) and A = ((0, 0), (0, 1)), the ellipticity modulator, placed at 45° with respect to the output polarisation, and the analyser set to maximum extinction, respectively. In the expression for H, η(t) = η_0 cos 2πν_m t. The rotation matrix between the cavity and the ellipticity modulator ensures that the modulator and the analyser are correctly oriented. To first order in α_1, α_2, and ψ, the intensity detected by the photodiode PDE has the form I_PDE(t) ≈ I_0 [η^2(t) + 2 η(t) ψ sin(4φ(t) + 4φ_1) + η(t) × (terms linear in α_{1,2})]. The interesting result from this formula is that the signal of the magnetic birefringence of vacuum is found at the frequencies ν_m ± 4ν_L, deriving from the product η(t) ψ sin(4φ(t) + 4φ_1), while the signals due to a small retardance difference from λ/2 of the wave plates appear at ν_m ± 2ν_L. Higher order imperfections in the phase delay of the rotating wave plates may be present and may be described by the expansion α_{1,2}(φ) = Σ_k α^(k)_{1,2} cos(kφ + δ^(k)_{1,2}). The various orders of imperfection, including a typical value for α^(0)_{1,2}, can be estimated from the specifications of the producers. The main contribution to α^(1)_{1,2} may come from the parallelism of the surfaces of the wave plate (wedge) coupled to the distance of the beam from the center of rotation. The typical parallelism of a wave plate is 2 × 10^-6 rad. With an off-center rotation of ≈1 mm this gives an estimated value of α^(1)_{1,2} ≈ 2 × 10^-9. The effect of such a defect, though, would generate signals at ν_m ± 3ν_L and ν_m ± ν_L and not at ν_m ± 4ν_L. To generate a spurious signal at ν_m ± 4ν_L the term α^(2)_{1,2} is necessary.
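To make the sideband structure concrete, the following Jones-matrix simulation (an illustrative sketch, not from the paper) propagates the field through polariser, rotating plates, a fixed-axis birefringence, modulator and analyser for a single pass; the cavity build-up and mirror matrices are omitted, and ψ and η_0 are exaggerated for numerical visibility.

```python
# Single-pass Jones simulation: ellipticity signal at nu_m +/- 4 nu_L.
import numpy as np

def rot(phi):
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

def half_wave(alpha):
    # imperfect half-wave plate, retardation pi + alpha
    return np.array([[1.0, 0.0], [0.0, -np.exp(1j * alpha)]])

psi, eta0 = 1e-6, 1e-3            # ellipticity and modulator amplitude
nu_L, nu_m = 25.0, 400.0          # plate rotation and modulator frequencies
fs, Tobs = 4096.0, 4.0
t = np.arange(0, Tobs, 1 / fs)
phi = 2 * np.pi * nu_L * t
eta = eta0 * np.cos(2 * np.pi * nu_m * t)

E_in = np.array([1.0, 0.0])                          # input polariser along x
A = np.array([[0.0, 0.0], [0.0, 1.0]])               # crossed analyser
B = np.array([[1.0, 0.0], [0.0, np.exp(2j * psi)]])  # fixed-axis birefringence

I = np.empty_like(t)
for k in range(t.size):
    L = rot(phi[k]) @ half_wave(0.0) @ rot(-phi[k])  # aligned plates, phi2 = phi1
    H = np.array([[1.0, 1j * eta[k]], [1j * eta[k], 1.0]])  # modulator at 45 deg
    E = A @ H @ L @ B @ L @ E_in
    I[k] = (np.abs(E) ** 2).sum()

f = np.fft.rfftfreq(t.size, 1 / fs)
spec = np.abs(np.fft.rfft(I * np.hanning(t.size)))
for target in (nu_m - 4 * nu_L, nu_m, nu_m + 4 * nu_L):
    i = int(np.argmin(np.abs(f - target)))
    print(f"{f[i]:6.1f} Hz : {spec[i]:.3e}")  # sidebands at nu_m +/- 4 nu_L
```

The same machinery can be used to gauge the spurious terms produced by imperfect plates (non-zero α in half_wave), whose expected size we now estimate.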
Assuming an ellipticity value to be measured due to vacuum magnetic birefringence of ψ = 2.5 × 10^-11 [formula (2)], it is not unreasonable to imagine that α^(2)_{1,2} ≲ ψ ≈ α^(1)_{1,2}/100. In all cases, though, there will not be a contribution from these terms if the rotation axis of each wave plate coincides with the beam position, a condition which can be obtained with a careful alignment of the optics. The reduction of these systematic effects is to be performed with the magnets turned off. In the above formulas we have not considered the intrinsic birefringence of the mirrors [24]. In this scheme, by appropriately choosing φ_1 and φ_2 it should be possible to minimise the effect of this birefringence by independently aligning, on each mirror, the polarisation of the light to the birefringence axes of the mirrors [12]. Clearly the presence of the two half-wave plates inside the cavity introduces some losses. Therefore there is an upper limit to the finesse one can obtain due to the absorption of the wave plates. With a correct antireflective coating, wave plates can be obtained with a total absorption/reflection of 0.1% each. Unwanted effects due to the light reflected from the wave plates inside the cavity can be eliminated by misaligning the wave plates very slightly, so as to send the reflected light against the baffles which would be present inside the vacuum tube. Considering that the finesse is F ≈ π/(T + P), where R + T + P = 1, and assuming that the transmission of the mirrors is such that T ≪ P = 4 × 10^-3 (four passages through the wave plates), the absorption of the wave plates limits the finesse to F ≈ π/P ≈ 800. In this case the predicted QED ellipticity signal would be ψ^(WavePlates) = (2F/π)(π L Δn / λ) ≈ 2.5 × 10^-11. Assuming for HERA-X the best birefringence sensitivity as shown in Fig. 2, which is independent of the finesse, this would mean a value of S_Δn^(100 Hz) ≈ 2.5 × 10^-20 1/√Hz @ 100 Hz (with ν_L = 25 Hz). Using F = 800, the sensitivity in ellipticity is S_ψ = (2 F d / λ) S_Δn^(100 Hz) ≈ 7 × 10^-9 1/√Hz (taking d ≈ L). The corresponding integration time to reach S/N = 1 would therefore be T = (S_ψ / ψ^(WavePlates))^2 ≈ 10^5 s. Such a sensitivity remains to be demonstrated in the exceptional conditions of the proposed HERA-X experiment, but with such a low finesse, near shot-noise ellipticity sensitivities have been demonstrated. The minimum output power from the cavity to avoid being limited by shot-noise is ≈10 mW. With a finesse of F = 800 the circulating power inside the cavity is about 5 W, distributed over a surface of about 0.1 cm^2 determined by the beam radius. This is well below the damage threshold of the wave plates of 1 kW/cm^2, which therefore guarantees a correct and stable operation of the wave plates. Furthermore, very long Fabry-Perot cavities have been shown to be stable at frequencies of a few tens of hertz by LIGO and VIRGO, reaching shot-noise performances [25]. The numbers seem to be within reach and we believe that this scheme could be a viable solution when using high field static magnetic fields generated by superconducting magnets.
Furthermore, the polarisation direction on the two mirrors can be controlled independently. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Funded by SCOAP3.
2016-01-15T16:26:30.000Z
2016-01-15T00:00:00.000
{ "year": 2016, "sha1": "20d4a9b0c799effeaeff1003e80c33d6c04c4baf", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-016-4139-0.pdf", "oa_status": "GOLD", "pdf_src": "Unpaywall", "pdf_hash": "05f02418fe97da919a45deccb4b89747b2f9fafe", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
3622419
pes2o/s2orc
v3-fos-license
A New Nano-Platform for Drug Release via Nanotubular Aluminum Oxide Nanotubular materials have many favorable properties for drug delivery. We present here a pioneering study of controlled release of a model drug, amoxicillin, from the internal nanopore structure of self-ordered, periodically spaced-apart aluminum oxide with an innovative, nanotubular geometry. This aluminum oxide nanotube geometry has not yet been explored for biological applications; we have therefore selected this oxide nanotube structure and demonstrated its ability as a drug carrier. Controlled, sustained release was achieved for over 5 weeks. The release kinetics from the nanotube layer was thoroughly characterized, and it was determined that the amount of drug released was proportional to the square root of time. This type of controlled release and longevity from the nanotube layer has potential for therapeutic surface coatings on medical implants. Furthermore, this type of geometry has many features that are advantageous and biologically relevant for enhancing tissue biointegration. Introduction Recently, anodic aluminum oxide (AAO) has become one of the most popular self-ordered, periodic, porous templates. In general, the highly developed, superior ordering of nanopores in AAO templates is obtained by using a two-step anodization process [1], a rather simple processing method. The AAO porous structure can be uniquely altered based on processing parameters, and both porous and tubular shapes can be achieved and tailored with pore diameters between 5 nm and 10 µm and film thicknesses (vertical height) reaching over 100 µm [2]. AAO has been of great interest due to its outstanding material properties, including electrical insulation, optical transparency and chemical stability, and most recently because of its biological inertness and biocompatibility [2]. In terms of biological applications, the characteristic periodic porous films of AAO have been used for encapsulating enzymes [3], implant surface coatings on Ti alloys for bone ingrowth [4][5], membranes for hemodialysis [6], cardiovascular stent applications [7,8], biofiltration [9], and drug delivery [10,11]. Owing to porous AAO's ability to mimic the dimensions and nanoporous structural components of natural bone, and the prospect of housing genes or drugs for therapeutic treatments within the pores [11], AAO films can be seen as promising coatings for medical, particularly orthopedic, implants. Research on biomedical applications of both porous alumina and, along the same lines, nanotubular titania (TiO2) [12][13][14][15][16][17][18] has increased tremendously in the past few years. It is well known that titania nanotube surfaces elicit favorable bone cell growth and mineralization in vitro [16,18], promote osseointegration (bone/implant interface bonding) in vivo [19], enhance differentiation of mesenchymal stem cells toward osteogenic maturation [17], and elicit long-term drug release [15]. Titania nanotubes have been recently recognized for their impact on the future of nanomedicine [20]. We focused on hard-anodized AAO, which is currently of great interest because of its advantageous characteristics: 1) the AAO film grows rapidly, 2) it is well ordered, and 3) there is a wide range of novel AAO nanostructures obtainable in the hard-anodization regime that are yet to be discovered [21][22][23][24]. For example, Zhao et al.
reported a special AAO structure grown under constant-current anodization, which is characterized by a six-membered ring symmetry around the center pores with a regular hexagonal pore arrangement. While the nanopore configuration has been studied extensively for potential biomedical and drug delivery applications, the nanotube configuration has a unique feature in that the 10-20 nm spacing between neighboring nanotubes allows a continuous supply of body fluids, nutrients and proteins underneath adhered and growing cells, which could potentially aid in tissue development and biointegration. In contrast, a simple nanopore configuration gets covered by adhered cells, thus blocking the supply of body fluid. This has been described for the case of TiO2 nanotubes with significantly enhanced osteoblast (bone cell) and bone tissue growth [16], and applies to other cells adhered on the nanotubes. To imitate the TiO2 nanotube geometry, we have microstructurally altered traditional porous AAO to give rise to a unique nanotube morphology and to determine its use in biological applications. It is projected that we can utilize the nanotubes as "nanodepots" for advanced drug delivery therapeutics. AAO nanotubes are an example of a multidisciplinary approach combining nanotechnology, biomedical engineering, and controlled drug delivery for applications where antibiotics, growth factors, etc. are needed and proper biointegration (osseointegration, for example) is desired [11]. The nanotube geometry, as compared to the nanopore geometry with comparable pore diameter, provides ~2X more surface area, which could be utilized to provide more active spots and functional conjugation locations for increased adsorption of biomolecules, proteins, enzymes, and drug molecules. The use of AAO nanotubes is a new clinical approach not only for orthopedics, but also for several other drug-eluting implants, which ideally would release for longer periods of time, on the order of days, weeks, even months. The aim of this study is to elucidate the complex phenomenon of drug release from AAO nanotube drug carriers. Detailed, characterized release curves, accumulation plots, and kinetics are presented for the antibiotic drug amoxicillin. This type of geometrical structure of AAO is novel in itself, and the time scale of prolonged release of amoxicillin from the AAO nanostructure is remarkably long, on the order of weeks, with the possibility of extending the drug release to much longer periods. Aluminum Pretreatment 0.5 mm thick Al foil purchased from Alfa Aesar (99.99%) was used as a starting material. The Al foil was successively degreased with isopropyl alcohol and acetone for 5 minutes with ultrasonication, followed by a D.I. water rinse and a nitrogen gas blow. Then, the organic-free Al foil was slightly etched in 1 M NaOH aqueous solution to remove any surface contamination before the electropolishing process. Conducting Cu tape was used as the electrode pathway to the Al foil, and selected portions of the Al foil, such as the edges or backsides, were protected by a lacquer (Miccroshield) in order to make them resistant to the electrolyte used. A mirror-shiny Al foil was obtained after electropolishing in a mixed solution of HClO4 and ethanol (1:4 (v/v)) at 5°C under 20 V for ten minutes with a Pt counter electrode. For the control sample, electropolished Al was used in the experiments. For the experimental nanotube samples, anodization was conducted.
Anodization

Prior to hard anodization, a mild-anodized porous AAO layer (~800 nm thick) was formed under 25 V in 0.3 M sulfuric acid, and then the electrolyte concentration was changed from 0.3 M to 0.06 M. Next, the anodization voltage was increased (at a rate of ~0.5 V/s) from 25 V to 35 V in order to inhibit local AAO film thickening due to localized high current concentration, which would otherwise lead to inhomogeneous oxide film growth or even dielectric breakdown during hard anodization. A power supply (Agilent E3612A) was connected to a digital multimeter (Keithley 2100) to monitor the voltage-current evolution during anodization.

AAO Post-Treatment

After the anodization process, the top side of the AAO layer was attached to a Si substrate with adhesive glue for handling purposes. Then, the Al substrate was selectively removed with a mixed HCl and CuCl2 solution over ten minutes, until the reaction ended. Any residual Cu debris adhered to the bottom of the AAO barrier layer was removed by placing the sample in nitric acid for a few seconds, followed immediately by washing in D.I. water. The AAO barrier layer was then removed in 5 wt% phosphoric acid for ten minutes to two hours, depending on the barrier layer thickness of the as-grown AAO, and observed under a scanning electron microscope (SEM; Philips XL30 ESEM). All samples were cut into identically sized pieces (1 × 1 cm²) and autoclaved before use as drug carriers.

Porosity and Surface Area Calculation

In order to characterize the AAO nanotube film structure, porosity and surface area calculations were carried out. A theoretical porosity can be computed geometrically based on structural parameters such as pore size and interpore distance, assuming an ideal hexagonal arrangement. The following equation, Equation (1), was used to approximate the porosity, P, of the samples:

P = (2π/√3) (r/D_int)²,  (1)

where r is the inner pore radius and D_int is the interpore distance. The overall surface is assumed to be filled with Al2O3 except for the empty pore space, which is somewhat complex in the case of a tubular structure. The apparent porosity can be calculated by dividing the pore area of the unit cell by the unit cell area.

To determine the surface area of the samples with an area of 1 cm², first the nanotube density was established from SEM images by counting the number of nanotubes per field for a given area. The nanotube density, N, was ~2.19 × 10^10 nanotubes/cm². The following equation, Equation (2), was used to approximate the surface area, SA, of the samples:

SA = N · 2πh(R + r),  (2)

where R is the outer tube radius, r is the inner pore radius, and h is the height or length of the film. This equation is based on the surface area of a tube multiplied by the number of tubes in the sample area. This is a theoretical estimation. It may be that the voids located around the center pore do not extend the full length of the tube; nonetheless, the SA is dramatically increased by the introduction of tubes on the surface. For a 1 cm² sample, the porosity and surface area were calculated to be 25% and 1830 cm², respectively.
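To make the geometric estimates concrete, the short sketch below evaluates Equations (1) and (2) numerically. The inner pore radius, film height, and nanotube density follow the values reported here; the outer tube radius and interpore distance are our assumptions, chosen to be consistent with the reported ~1830 cm² surface area, not measured parameters.

```python
import math

# A minimal sketch (not the authors' code) evaluating Equations (1) and (2).
# r, h and N follow the reported values; R and D_int are assumed.
r = 10e-7      # inner pore radius [cm] (20 nm pore diameter)
R = 25e-7      # outer tube radius [cm] (assumed)
D_int = 72e-7  # interpore distance [cm] (assumed)
h = 38e-4      # nanotube height [cm] (~38 um film)
N = 2.19e10    # nanotube density [tubes/cm^2], counted from SEM images

# Equation (1): porosity of an ideal hexagonal arrangement of center pores.
# This counts only the center pores; the ~25% apparent porosity quoted in
# the text additionally includes the voids between neighboring tubes.
P = (2 * math.pi / math.sqrt(3)) * (r / D_int) ** 2

# Equation (2): inner plus outer wall area of one tube, 2*pi*h*(R + r),
# multiplied by the number of tubes per 1 cm^2 of sample.
SA = N * 2 * math.pi * h * (R + r)

print(f"center-pore porosity P = {P:.2%}")
print(f"surface area per 1 cm^2 sample = {SA:.0f} cm^2")  # ~1830 cm^2
```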
Antibiotic Loading, Release, and Collection

Insertion of liquid into AAO nanotubes is not always easy, as the surface tension of the liquid has to be overcome. At room temperature (25°C), the nanotube samples were placed in a vacuum (~10^-4 torr) for approximately 5-10 minutes to rid the nanopores of any trapped air. Approximately 1 ml of 1% amoxicillin (Sigma) in phosphate buffered saline (PBS), pH 7.4 (Invitrogen), was loaded onto each sample placed individually in separate wells of a 12-well plate (Nunc). To ensure dissolution of the amoxicillin prior to loading, a few microliters of 1 N HCl were added until the solution became clear. The samples were incubated overnight to allow sufficient time for the drug to fully penetrate into the nanotube structure. The drug-loaded nanotube samples were washed 3X with ice-cold PBS (to restrict diffusion from the reservoir). Next, the samples were individually placed in new wells (Nunc, 12-well) and incubated in a humidified 95% air/5% v/v CO2 incubator at 37°C in 1 ml of fresh simulated body fluid (PBS was used in this study). The solution was collected at hourly time points initially (hours 1-6) and at daily time points thereafter (up to day 35, or 5 weeks), and 1 ml of fresh PBS was added after each collection. Drug concentration was determined by measuring the absorbance of the fluid using a UV-VIS spectrophotometer at λ = 230 nm (Biomate 3, Thermo Electron, Madison, WI). The assay was calibrated using PBS blanks, and a standard curve was determined up to 2 mg/ml amoxicillin. Three replicates per experimental sample were measured for each time point, and the average values ± standard error (SE) were graphed to obtain release profiles, release rates, accumulation, and release kinetics.

Results and Discussion

The vertically aligned, periodic AAO nanotube structure used as a drug carrier is illustrated in the scanning electron microscopy (SEM) images in Figure 1 (top row). In contrast to conventional AAO nanopore structures, the AAO nanotube unit cells are separated from, while remaining loosely connected to, one another, which is an interesting feature. In our nanotube samples, the nanotube center pore size (~20 nm) is practically the same as the size of the voids (the spacing between adjacent nanotubes) surrounding the center pore. The resulting larger surface area makes our AAO nanotube structure favorable for loading drugs or catalyst chemicals into the AAO nanotubes. The equal size of the center pore and voids was achieved by the relatively low anodizing voltage (35 V), which ensures both a relatively small center pore size and interpore distance compared to the higher-voltage evolution under the constant-current anodization technique used by Zhao et al. [24]. The anodic current evolution during our nanotube fabrication is given in Figure 2.
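The conversion from absorbance to released drug mass described above is a standard linear calibration. The sketch below illustrates the procedure; the standard-curve points and sample readings are hypothetical placeholders, not the paper's calibration data.

```python
import numpy as np

# Illustrative sketch: converting UV-VIS absorbance at 230 nm to amoxicillin
# concentration via a linear standard curve, then summing the sampled wells
# into a cumulative-release series. All numbers below are assumed.

std_conc = np.array([0.0, 0.25, 0.5, 1.0, 2.0])      # mg/ml standards
std_abs = np.array([0.00, 0.14, 0.28, 0.55, 1.10])   # A230 readings (assumed)

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # Beer-Lambert linearity

def absorbance_to_conc(a230):
    """Invert the standard curve: concentration in mg/ml."""
    return (a230 - intercept) / slope

# Each collection replaces the full 1 ml of PBS, so the mass released in an
# interval is concentration x 1 ml, and accumulation is a running sum.
sample_abs = [0.0095, 0.004, 0.003]        # hypothetical hourly readings
released_mg = [absorbance_to_conc(a) * 1.0 for a in sample_abs]
cumulative_mg = np.cumsum(released_mg)
print(cumulative_mg)
```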
To our knowledge, this is the first study to utilize the AAO nanotube structure for applications in drug elution. The nanotube film shows a highly ordered and uniform nanotube morphology and long-range order, with nanotube heights reaching ~38 µm, the tallest used thus far in nanopore/nanotube ceramic alumina and titania drug elution studies. Physical details of the AAO nanotubes are given in the chart shown in Figure 1 (bottom row). One of the advantages of our nanotube structure is the increased porosity (~25%) and the high surface-to-volume ratio. Here we show that the surface area is increased by three orders of magnitude by introducing the nanostructures on the surface, where a 1 × 1 cm² sample has effectively 1830 cm² of surface area. The interstitial space between the tube walls and the inner pore walls contributes to this increase and is an advantage over a traditional pore structure. Another benefit of our AAO drug release system is that the nanotubes can be designed to match a desired pore size (20-100 nm), structural shape (pores vs. tubes), available porosity, and surface area, and can be tailored, for instance by chemical wet etching or pore-widening techniques, to specific implant needs and controlled-release requirements. Factors such as adsorption properties (interactions between drug and matrix), pore size, pore connectivity, and pore geometry are just a few of the aspects to take into account when designing a controlled drug delivery system. It has been suggested that during AAO fabrication, stress cracking and other residual defects due to the oxidation volume expansion (Al becoming Al2O3) may be present, and these imperfections can leave charges on the surface, such as Al3+ and O2- [25]. For the purposes of drug loading, this may aid in electrostatic adsorption of the drug molecules and help concentrate the drug within the nanotube "depots", so to speak.

For this study, amoxicillin (AMX), a common pharmaceutical antibiotic, is used as a model drug in the AAO drug elution studies. The size of the AMX molecule is ~0.8 nm [26], a reasonable size to enter and fill the 20 nm diameter pores and the interstitial spaces in between the nanotubes. Preoperative oral administration of AMX has been proven to reduce the risk of implant failure [27,28], and local delivery of AMX during orthopedic surgery reduced the infections associated with open fractures [29], compound limb fractures [30], and osteoinductive and osteoconductive bone-graft substitutes [29]. As well, local delivery of antibiotics was effective in reducing vascular infections from staphylococcal strains [31]. It can therefore be hypothesized that localized AMX elution from both orthopedic and vascular implants would be highly advantageous. A more advanced, controlled-release drug delivery system, such as AAO nanotubes on implants, would help improve delivery efficiency and localization, which may also provide a solution for reducing dosages and help minimize toxic side effects and drug waste. The nanotube shape also has advantages over a porous structure because it provides an optimal surface shape for cellular adhesion sites, allowing better cell attachment and proliferation [32] and aiding in surface integration and cellular locking [33].
In terms of controlled release, it was found that the AAO nanotubes were capable of carrying cargo molecules (AMX, a small drug molecule) and releasing them in a physiological environment of the simulated body fluid, phosphate buffered saline (PBS). There are several types of controlled release devices, and the AAO nanotube system presented here can be considered a diffusion-controlled release device, in which the entrapped drug diffuses out of a matrix at a defined rate [34]. An antibiotic release profile from the AAO nanostructures filled with AMX was obtained for over 5 weeks (35 days), as illustrated in Figure 3. In a control experiment, electropolished aluminum without a nanostructured surface showed almost zero antibiotic release, as expected (data not shown). This indicated that it was the nanotube design on the surface, which created a reservoir for the AMX, that was responsible for the drug release. Figure 3 shows the total amount of amoxicillin released as a function of time. A near-steady release profile is achieved after the first week of release (after day 7). The ideal release profile for most drugs would follow this type of steady release rate, so that drug levels in the body remain constant while the drug is being administered [35]. The drug elution from the AAO nanotubes accomplishes the primary objective of a controlled release device, which is to provide sustained release for long periods of time, on the order of days, weeks, even months.

In the inset graph of Figure 3, which shows the initial release of drug from the nanotubes in the first 6 hours, the highest "burst effect" is in the first hour, with ~13 µg of drug released. Whether the "burst effect" is due to near-surface entrapped drug or surface-adsorbed drug remains controversial [36]. The initial burst and release of drug from the nanotubes may be related to several factors, including 1) the high relative top surface area, 2) increased drug diffusivity through the tube walls/channels, and 3) the high porosity. In addition, the specifics of the pore dimensions and their uniformity, as well as subtle differences in the physical form of the nanotubes, may play a role during the initial short-term release (first 7 days) before the steady elution (beyond 7 days in Figure 3). At this stage of release, the drugs are being released from the top portion of the film, where the so-called "matrix surface" becomes a factor. After seven days, however, it is suggested that the drugs are traveling from farther down in the matrix and are less likely to be affected by the very top surface. We have also studied drug release from AAO ~20 nm and ~40 nm pore (not tube) structures with the same film thickness or height as the nanotubes studied in this report (data not shown); however, it has been suggested that it is the height of the pore, not the pore size, that changes the diffusion characteristics [15]. Varying film thickness will form part of our future studies, but this report focuses on the unique geometry and beneficial properties of the AAO nanotube structure.
To further characterize the AMX release from the AAO nanotubes, Figures 4 and 5 illustrate the accumulative release (showing daily and weekly accumulation) and the release rate per day over the 5-week elution study, respectively. A near-steady release rate occurred over the course of the 5 weeks. This type of release would help maintain the drug level in a therapeutic window, avoiding the extremes of systemic drug over-dosage or under-dosage and eliminating the risks of adverse effects, drug waste, or sub-therapeutic dosing. When studying the release of drug molecules whose size is on the same scale as the matrix features, the basic principle of diffusion as a mixing process with solutes free to undergo Brownian motion in three dimensions may not necessarily apply, in at least one dimension, for the AAO nanotubes, because solute movement is physically constrained by the nanotube walls [37]. The AAO nanotube geometry may impose a rate-limiting condition due to the length of the nanotube walls, because the length dictates how far the solute molecules have to travel to be released from the reservoir. While it is not possible to draw significant conclusions without varying the wall height of the AAO nanotubes, this in part will form some of our future work.

Many studies have observed that the release rate of a drug dispersed in a solid matrix (with no erosion of the matrix occurring) is proportional to the square root of time, as predicted by the Higuchi model [38-40]. It was determined that this is because the release rate is inversely proportional to the distance the drug must travel within the matrix to the matrix surface; since the diffusion distance increases with time, the release rate decreases with time [38]. The Higuchi equation is

Q_t/Q_∞ = k·√t,  (3)

where Q_t/Q_∞ is the cumulative fractional release at time t and k is the release constant.

To identify the release mechanism and model the drug transport in the AAO nanotube system, the hypothesis was made that the release data obtained could be fitted using Equation (3); the results are given in Figure 6, where the cumulative fractional AMX release, Q_t/Q_∞, was plotted versus the square root of time. A near-perfect linear fit was observed, demonstrating that the drug kinetics approximately follow the square-root-of-time relationship. The chart in Figure 6 describes the linear fit. The mechanism of release is most likely attributable to a novel constrained diffusion mechanism provided by the AAO nanotube walls.
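Testing data against the Higuchi model amounts to a linear regression of fractional release on the square root of time. The sketch below shows the procedure on hypothetical data points shaped like a square-root-of-time profile; the actual fit parameters are those reported in Figure 6.

```python
import numpy as np

# Illustrative sketch: testing release data against the Higuchi model,
# Q_t/Q_inf = k * sqrt(t), by linear regression on sqrt(time).
# The data points below are hypothetical placeholders, not measured data.

t_days = np.array([1, 2, 4, 7, 14, 21, 28, 35], dtype=float)
frac_release = np.array([0.17, 0.24, 0.34, 0.45, 0.63, 0.77, 0.89, 0.99])

sqrt_t = np.sqrt(t_days)
k, intercept = np.polyfit(sqrt_t, frac_release, 1)  # slope = Higuchi constant

# Coefficient of determination for the linear fit
pred = k * sqrt_t + intercept
ss_res = np.sum((frac_release - pred) ** 2)
ss_tot = np.sum((frac_release - frac_release.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"Higuchi release constant k = {k:.3f} per sqrt(day), R^2 = {r2:.4f}")
```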
AAO films are simple to prepare and can be easily modified and structurally tailored. They are also resistant to most physiological and chemical reactions (bioinert), mechanically strong, and considered biocompatible in vitro and in vivo. By utilizing AAO nanotubes as drug carriers, a variety of drugs can be loaded into the device reservoir in a range of physical states, including solutions and crystalline or micronized suspensions [37]. This flexibility with respect to encapsulated drugs provides options to substantially increase the loaded dose and the duration of therapy, as well as the stability of drugs that are unstable in certain biological fluids or in different biochemical/acidic/alkaline environments. AAO films are structurally robust and will not swell or change their porosity under different pHs or temperatures [41]. Thus, AAO nanotube drug carriers can be used to address the problems associated with conventional drug therapies, such as limited drug solubility, poor biodistribution, lack of selectivity, and unfavorable pharmacokinetics [11]. Lastly, AAO nanotube arrays on implant surfaces have the potential to mimic the complex geometries of natural tissue and provide a porous template for the growth and maintenance of healthy cells and tissue [42], aiding implant design as well as local delivery of therapeutics.

Conclusions

The controlled release of amoxicillin from anodic aluminum oxide (AAO) nanotubes was investigated. The unique AAO nanotube morphology was fabricated using a simple two-step anodization process that resulted in highly uniform, structurally robust nanotubes. This is the first study utilizing the AAO nanotube geometry as a drug carrier, and the diffusion characteristics, including a drug release profile, drug accumulation plot, and release rate, were acquired. The AAO nanotube carriers demonstrated controlled, sustained release of the common antibiotic amoxicillin for approximately 5 weeks. This study illustrates the potential advantages of using AAO nanotubes as a unique alternative for therapeutic implant surface coatings.

Figure 1. Upper left: Top-view SEM image of vertically aligned periodic AAO nanotubes. Upper right: Oblique view (inset = higher magnification image). The table describes the physical dimensions of the nanotubes.

Figure 2. Representative anodic current evolution for AAO nanotube fabrication. The applied voltage was slowly increased from 25 V to 35 V and remained at 35 V. Anodization was conducted for 30 minutes overall.

Figure 3. Absolute release rates of amoxicillin as a function of sampling time for the AAO nanotube drug carriers. The initial burst of drug from the surface is shown in the inset. The graphs show the mean ± SE (n = 3).

Figure 4. Accumulative amoxicillin released as a function of time. The graph shows the mean ± SE (n = 3). The dotted line shows the daily accumulation over time, and the bars represent the average accumulation per week.

Figure 5. Daily release rate over time (normalized per weekly time point). The graph shows the mean ± SE (n = 3).

Figure 6. To assess the mechanism of drug release, a plot of fractional release (Q_t/Q_∞) vs. the square root of time was completed. A near-perfect linear fit was observed; details are shown in the accompanying table.
Tackling Obstacles for Gene Therapy Targeting Neurons: Disrupting Perineural Nets with Hyaluronidase Improves Transduction

Gene therapy has been proposed for many diseases of the nervous system. In most cases, for successful treatment, therapeutic vectors must be able to transduce mature neurons. However, both in vivo and in vitro, where preliminary characterisation of viral particles takes place, transduction of neurons is typically inefficient. One possible explanation is that the extracellular matrix (ECM), forming dense perineural nets (PNNs) around neurons, physically blocks access to the cell surface. We asked whether co-administration of lentiviral vectors with an enzyme that disrupts the ECM could improve transduction efficiency. Using hyaluronidase, an enzyme which degrades hyaluronic acid, a high-molecular-weight molecule of the ECM with a mainly scaffolding function, we show that in vitro in mixed primary cortical cultures, and also in vivo in rat cortex, hyaluronidase co-administration increased the percentage of transduced mature, NeuN-positive neurons. Moreover, hyaluronidase was effective at doses that showed no toxicity in vitro, based on propidium iodide staining of treated cultures. Our data suggest that the limited efficiency of neuronal transduction is partly due to the PNNs surrounding neurons, and further that co-applying hyaluronidase may benefit applications where efficient transduction of neurons in vitro or in vivo is required.

Introduction

In order to understand how different genes can modify specific neuronal functions, it is necessary to manipulate gene expression in neurons. However, reliably introducing genetic material into neurons has been problematic (for review see [1]). Recently, lentiviral vectors have emerged as a powerful tool capable of delivering DNA to neurons in vitro and in vivo. While transduction efficiency with viral vectors in general is largely dependent on titre, further limitations on the ability to transduce cells are imposed, in vivo mainly through the restricted diffusion of lentivector particles around the injection site within the extracellular matrix of the brain [2], and in vitro, especially at lower multiplicities of infection (MOIs), through an as yet poorly understood restriction. In many studies this is circumvented by transducing at early time points or simultaneously with seeding cells [3-10]. In general, transduction efficiency is thought to decrease with increasing age of the neuronal culture (days in vitro; DIV), but few groups have systematically investigated this. In cases where virally transduced genes are intended to take effect in mature neurons only, or where transduced genes may disrupt the development and differentiation of plated neurons, the requirement to transduce neurons in vitro at early stages in order to achieve high transduction efficiency presents a major drawback. One possibility is that the age-related decrease in transduction efficiency is linked to the maturation of neuronal cells, with changes of the outer cellular surface during this phase restricting viral entry into cells. All types of cells are surrounded by extracellular matrix (ECM), and one of the main constituents of the ECM is hyaluronic acid (HA), a long polysaccharide molecule composed of N-acetyl-glucosamine and D-glucuronic acid [11,12].
HA is anchored to the extracellular receptors CD44 and CD168 [13] and serves as a scaffold that keeps proteins and molecules supporting cellular viability in close proximity to the cell surface (for review see [14]). The importance of HA in the brain has been recognised since the 1970s [15,16]. It entirely covers neurons, including cell bodies, dendrites and axons [17], and in conjunction with other molecules, such as chondroitin sulphate proteoglycan and various proteins like tenascins, reelin and laminin, HA is central to building up a net-like structure surrounding neurons, known as perineural nets (PNNs). In the brain, HA is thought to maintain the physicochemical properties of the ECM [18,19], but there is increasing evidence that HA also alters the functional properties of neurons [20,21,22]: the characteristic distribution of HA and its changes during cerebral development are indicative of functional properties (n.b. there is no HA in the adult cerebellum); neurite growth is altered by HA, and neurites tend to avoid HA-containing collagen substrates. HA can also have an impact on membrane potential, as indicated by the depolarization observed when HA was added to cultured neurons [23]. The mechanisms of these interactions are still a matter of speculation, but could be related to an altered distribution of extracellular ions or to signalling via CD44 receptors. Recently, HA has been shown to influence neurotransmission and signalling, and also to contribute to synaptic plasticity by regulating use-dependent Ca2+ currents via Cav1.2 channels [24]; thus, manipulation of HA might have consequences for neuronal viability. Hyaluronidase is an enzyme which cleaves HA, and could be used to degrade the PNNs and increase access to the surface of neurons. We report that treating cells with hyaluronidase improves transduction efficiency with lentiviral vectors in vitro and in vivo. This was done after evaluating potential toxic effects on neuronal survival in vitro and in vivo.

Materials and Methods

All experimental procedures of this study involving animals were carried out in accordance with the UK Animals (Scientific Procedures) Act 1986 and following ethical approval from the UCL Institute of Neurology. Chemicals, if not specified otherwise, were from Sigma (St. Louis, Missouri, USA).

Production of Lentiviral Vectors and Titration

Second- and third-generation lentiviral vectors were produced as described previously [25,26]. Both express different GFP variants driven by different promoters, with pGIPZ (second generation; Open Biosystems, ThermoScientific, Waltham, Massachusetts, USA) expressing turboGFP (tGFP) under the control of the CMV promoter, and pCDH1-MCS1-EF1-copGFP (third generation; System Biosciences, Mountain View, California, USA) expressing copGFP under the EF1a promoter. Packaging was done with plasmid pCMVΔR8.91 for the second-generation vector [27], or pMDLg/pRRE and pRSV-Rev [26] for third-generation vectors, together with the envelope plasmid pMD2.G expressing the vesicular stomatitis virus protein G (VSV-G) surface protein for both (packaging and envelope plasmids were kindly provided by D. Trono, Geneva, Switzerland).
To make vectors, the corresponding plasmids were co-transfected by the calcium phosphate method into HEK293T cells (originally from ATCC/LGC Standards, Teddington, UK), which were cultured in DMEM (PAA, Pasching, Austria) supplemented with 10% FBS (Gibco-Invitrogen, Carlsbad, California, USA) and penicillin/streptomycin (PAA) in a 5%-CO2 incubator (Binder, Tuttlingen, Germany) at 37°C in a humidified atmosphere. Supernatant was collected 48 and 72 hours after transfection, concentrated by ultracentrifugation (Kontron Instruments, Zuerich, Switzerland) in a SW28.1 rotor (Beckman, Brea, California, USA) and resuspended in 50 µl DMEM medium without any additives. Titration was carried out separately for each vector in HEK293T cells by transducing them with serial dilutions of concentrated vector using polybrene (8 µg/ml; Sigma). The percentage of green cells was analysed with a FACS Calibur flow cytometer (Becton Dickinson, Franklin Lakes, New Jersey, USA) 72 h after transduction. The viral dilutions which gave 1-10% green cells were chosen to calculate titres, which were in the range between 1 × 10^8 and 2 × 10^9 TU/ml. As the different vectors relied upon distinct promoters, the calculated titres may partly reflect different reporter expression levels in HEK cells. For this reason, all comparisons of viral efficacy were done using a single virus; however, to show that the influence of the ECM on transduction was not restricted to a single promoter, we carried out similar tests on more than one vector.

Rat Primary Mixed Cortical Cultures

Sprague Dawley rat pups (P0-P1; UCL breeding colony, UCL, London, UK) were used for primary mixed cultures. The protocol was a slightly modified version of [28]. The cortex of both hemispheres was removed, purified and minced in HBSS. Trypsin (PAA) was added to a final concentration of 0.25%. After incubation at 37°C for 15 min, cortical pieces were triturated with fire-polished Pasteur pipettes. 10^5 cells were seeded on 13 mm coverslips (Menzel glass, Braunschweig, Germany), which were pre-treated with poly-D-lysine/laminin (Sigma). They were cultivated in 12-well plates in a total volume of 1.5 ml complete Neurobasal-A medium (Neurobasal-A supplemented with B-27® serum-free supplement and Glutamax; all Invitrogen), and kept at 37°C in a humidified CO2 incubator (5% CO2, Binder). A partial medium change (one third of the total volume) to maintain the cultures was scheduled twice per week.

Hyaluronidase Treatment and Lentiviral Transduction of Rat Primary Neuronal Cultures

Hyaluronidase from bovine testes, Type I-S (Sigma), was freshly diluted in complete Neurobasal-A medium on the day of the experiment as a 3-fold concentrated stock solution. After adding hyaluronidase to the medium, it was left on the cells without a subsequent medium change (partial medium changes were scheduled according to regular maintenance; there were at least two days between experimental treatment and medium change). For transductions, different vector batches were used after correction for differences in their titres. In order to achieve a multiplicity of infection of MOI = 1 in a well with 10^5 cells plated, 10^5 TU per well were added. Scaling up to higher MOIs was accomplished accordingly.

Transfection of Neuronal Cultures

For transfection of neuronal cultures in the 12-well format (10^5 cells per well), Lipofectamine2000™ (Invitrogen) was used according to the manufacturer's instructions.
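The titration arithmetic described above assumes that, in the 1-10% linear range, each green cell corresponds to one transducing unit (TU). The sketch below illustrates the calculation with assumed plate numbers; it is our own illustration, not the authors' script.

```python
# Illustrative sketch of lentiviral titre calculation from flow cytometry.
# All numbers are assumptions for illustration, not the study's raw data.

cells_at_transduction = 1e5   # HEK293T cells per well when vector was added
frac_gfp_positive = 0.05      # 5% green cells, within the 1-10% linear range
dilution_factor = 1e4         # serial dilution of the concentrated stock
volume_ml = 0.5               # volume of diluted vector added per well

# Each green cell is assumed to arise from a single transducing unit,
# which only holds while the GFP-positive fraction is low.
tu_in_well = cells_at_transduction * frac_gfp_positive
titre_tu_per_ml = tu_in_well * dilution_factor / volume_ml
print(f"titre ~ {titre_tu_per_ml:.2e} TU/ml")   # ~1e8 TU/ml here
```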
Two different amounts of DNA were used, the higher being 1.6 µg of pCDH1-MCS1-EF1-copGFP DNA with 4 µl Lipofectamine2000™ reagent in 100 µl OptiMEM® (Invitrogen), and the lower being 0.8 µg of pCDH1-MCS1-EF1-copGFP DNA with 2 µl Lipofectamine2000™ reagent in 50 µl OptiMEM®.

Intracerebral Injections of Lentiviral Vectors and/or Hyaluronidase

Eight male Sprague-Dawley rats (11 weeks; Harlan, Shardlow, UK) were used for these experiments. They were kept in the local animal facilities at the Institute of Neurology, with a 12-h light/dark cycle and ad libitum access to food and water. For stereotactic surgery they were deeply anaesthetised with Vetflurane (Virbac, Suffolk, UK) and fixed to a stereotactic frame (Kopf Instruments, Tujunga, California, USA). Two holes were drilled into the skull over the targeted injection area in the motor cortex. The coordinates were 1.0 mm anterior to Bregma, 2.4 mm lateral from the midline, and 2.0 mm beneath the surface of the skull. Injections were done with a 33G injection cannula on a 5 µl Hamilton syringe, attached to an automated injection device (WPI, Sarasota, Florida, USA) delivering the injection at a rate of 200 nl/min over a period of 10 min per site. The total injection volume was 2 µl per site. The cannula was left in place for 5 min after each injection. After removal, the skin was sutured and disinfected with Braunoderm (Braun-Melsungen, Melsungen, Germany). Postoperative care included an analgesic injection of buprenorphine (Buprenex®, Reckitt Benckiser, Slough, UK). The animals were maintained for 12 days to allow the vector genes to express prior to brain removal. The injection solution was a mixture of 1 µl of the lentiviral vector (pCDH1-MCS1-EF1-copGFP, titre 2 × 10^9 TU/ml) and either 1 µl PBS or 1 µl hyaluronidase in PBS (4 or 20 U/µl). This resulted in an injection of 2 × 10^6 TU of lentiviral vector in 2 µl, with or without 4 or 20 U hyaluronidase. Each animal received both treatments, with PBS or hyaluronidase in either of the two hemispheres. Injections of hyaluronidase only contained the final amount of enzyme (4 or 20 U) in a 2 µl volume of PBS.

Immunohistochemistry

Animals were transcardially perfused with PBS and 4% PFA in PBS. Brains were removed and fixed overnight in 4% PFA. 50 µm free-floating coronal sections were cut on a vibratome (VT1000S, Leica, Wetzlar, Germany) and kept at 4°C in PBS until further use. Brain slices were permeabilized with 0.3% Triton/PBS and subsequently blocked in 1% BSA/0.3% Triton/PBS. The same primary (for NeuN and tGFP) and corresponding secondary antibodies were used, at the same dilutions, as described for immunocytochemistry. They were diluted in 1% BSA/0.3% Triton/PBS. Stained brain slices were mounted onto microscope slides and sealed with PVA mounting medium (Sigma). For staining of the extracellular matrix (see [29]), the slices were incubated in biotinylated Wisteria floribunda agglutinin (20 µg/ml; Sigma) in PBS overnight at 4°C. After washing (3 × 10 min in PBS), slices were incubated in Streptavidin-Texas Red (5 µg/ml; Invitrogen) for 2 h and washed 3 × 10 min in PBS afterwards. Nuclear staining was done with 5 µM Hoechst 33342 (Molecular Probes, Eugene, Oregon, USA); slices were mounted onto microscope slides and sealed with PVA mounting medium (Sigma). Fluoro-Jade C staining was done as previously described [30]. Briefly, sections were mounted onto microscope slides and incubated in 0.06% KMnO4 for 10 min.
After washing (2 × 2 min in H2O), they were incubated in 0.0001% Fluoro-Jade C (in 0.1% acetic acid) for 25 min at RT, washed 3 × 2 min in H2O with the last washing step containing 5 µM Hoechst 33342 (Molecular Probes) for nuclear staining, and finally sealed with DPX mounting medium (Sigma).

Microscopy and Image Analysis

For the Hoechst/propidium iodide (PI) staining, coverslips with primary cells from rat mixed cortical cultures were incubated in 5 µM PI (Sigma) and 5 µM Hoechst 33342 (Molecular Probes) for 30 min at room temperature. Images were obtained on an epifluorescence inverted microscope equipped with a 20x fluorite objective (Olympus, Tokyo, Japan) using excitation light provided by a Xenon arc lamp. Emitted fluorescence was reflected through a 380/10 nm filter (for Hoechst) or a 530 nm LP filter (for PI) to a CCD camera (Retiga, QImaging, Canada). Each group was done on 3 coverslips. Each coverslip was analysed on 4 pictures taken in randomly chosen areas using ImageJ software (NIH, Bethesda, Maryland, USA). Images of lentivirally transduced neuronal cultures were acquired with a standard tissue-culture epifluorescence microscope with a GFP filter set and 10x or 20x ADL objectives (Nikon), using excitation light provided by an LED light source. GFP-positive cells were counted manually over the entire 13-mm coverslip (for a total number of green cells <500), or from the average of 7-10 randomly taken images of the 13-mm coverslip with the counted number scaled to the total coverslip size using the microscope's unique field number (for a total number of green cells >500). Images of double-stained neuronal cultures were obtained on an Axiovert AX10 microscope with FITC and Rhodamine filter sets (Zeiss, Oberkochen, Germany), using a 10x Plan-Neofluar objective and excitation provided by an HBO100 mercury light source (Leistungselektronik, Jena, Germany). Overlay analysis was done using the Zeiss Axiovision software package. Images of GFP-only positive brain sections were taken on an Axiovert AX10 microscope with a FITC filter set, using a 2.5x Plan-Neofluar objective. Measurements of the size of the injection area were made using ImageJ software. Sections spanning the entire rostrocaudal length of the injection area were analysed for their two-dimensional spread. The third dimension for the volumetric analysis was added by applying the slice thickness (50 µm) as the distance between sections and calculating the volume accordingly. Images of double-labelled brain slices, slices with staining of the extracellular matrix, and slices stained with Fluoro-Jade C were acquired using a Zeiss 710 LSM confocal laser scanning microscope with a META detection system (Zeiss). A 20x objective was used. Hoechst® dye fluorescence was excited with the 405 nm laser, GFP fluorescence with the 488 nm laser, and red fluorescence with the 561 nm laser. For the analysis of NeuN/GFP double-positive cells, overlay images of two slices in close proximity to the centre of the injection were analysed per animal using Volocity Demo software (Perkin Elmer, Waltham, Massachusetts, USA).

Statistical Analysis

Statistical analysis was done using Origin 8.5 software (Microcal Software Inc., Northampton, Massachusetts, USA). Data are presented as mean ± SEM. Data were compared by Student's t-test (two-tailed) or analysis of variance (ANOVA), followed by Tukey post-hoc tests where appropriate. Statistical significance was accepted at P < 0.05.
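The volumetric analysis described above amounts to a Cavalieri-style sum of per-section areas multiplied by the inter-section distance. The sketch below illustrates this; the section areas are invented placeholders standing in for the ImageJ measurements.

```python
# Illustrative sketch of the volumetric analysis: summing the measured
# two-dimensional spread of each serial section and multiplying by the
# slice thickness (50 um between sections).

slice_thickness_mm = 0.050                      # 50 um between sections
section_areas_mm2 = [0.10, 0.32, 0.55, 0.61,    # 2D spread per section, mm^2
                     0.48, 0.27, 0.09]          # (hypothetical values)

volume_mm3 = sum(area * slice_thickness_mm for area in section_areas_mm2)
print(f"estimated transduced volume ~ {volume_mm3:.3f} mm^3")
```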
Transduction Efficiency Decreases with the Age of the Culture

To systematically investigate the relationship between DIV and transduction efficiency, neuronal cultures (10^5 cells per well; N = 3 per group) were transduced at different time points after plating (DIV 0, immediately after plating; DIV 2 and DIV 4, day 2 and day 4 after plating, respectively) with pGIPZ lentiviral vectors at MOI = 3. The total number of GFP-positive cells was assessed under the microscope seven days after transduction (Fig. 1). The transduction efficiency, as observed from the number of green cells, decreased gradually but significantly from DIV 0 (7872 ± 490) to DIV 4 (1179 ± 109; one-way ANOVA, F(2,6) = 92.3, P = 0.00003).

Hyaluronidase has a Dose-Dependent Toxicity on Primary Neurons which is Age Dependent

There is evidence that HA promotes neuronal survival, which may be reduced by hyaluronidase treatment. To evaluate the toxicity of hyaluronidase treatment, neuronal cell cultures of different ages (DIV 5, 8, 12) were incubated with a range of concentrations of hyaluronidase (0 U/ml to 300 U/ml) in the growth medium (complete Neurobasal-A). After 3 or 7 days, the viability of cells was determined with PI/DAPI staining (N = 3 per group; Fig. 2A). Based on these findings, in the following experiments the two lower concentrations, 10 U/ml and 30 U/ml, which did not produce significant toxic effects, were chosen to investigate effects on transduction.

Hyaluronidase Improves Transfection with Lipofectamine2000™

Hyaluronidase may improve transduction efficiency by facilitating access to the outer cellular surface for the DNA carrier particles. Alternatively, it may have a function which is restricted to lentivirus, for example by increasing the affinity of unique surface properties specific to viral vectors. To rule out a virus-specific mechanism, a standard Lipofectamine2000™ transfection of neuronal cultures with pCDH1-MCS1-EF1-copGFP was analysed. The average size of Lipofectamine2000™ particles carrying the DNA is between 160-410 nm diameter [31], and thus slightly larger than the average size of lentiviral vectors (75-100 nm) [32]. However, if hyaluronidase improves access to the cell membrane by increasing the permeability of the ECM, it may be expected to have an effect on other particles even if they are slightly larger. Cells were seeded at a density of 10^5 cells per well, and Lipofectamine2000™ transfection was carried out on DIV 8 (N = 4 per group). The number of green cells was counted 10 days after transfection (Fig. 4). At both the low (0.8 µg) and high (1.6 µg) DNA amounts, treatment with 10 U/ml hyaluronidase significantly increased the efficiency of transfection (1188 ± 45 versus 113 ± 25 green cells per well for 0.8 µg DNA (P < 10^-8) and 1106 ± 54 versus 128 ± 12 green cells per well for 1.6 µg DNA (P < 10^-8)). One-way ANOVAs (treatment) confirmed significant differences between treatments for the low DNA amount (F(2,11) = 285.3, P < 10^-8) and the high DNA amount (F(2,11) = 212.8, P = 2.6 × 10^-8). Treatment with the enzyme for 1 hour was more effective than 24-hour treatment; however, the 24-hour treatment still gave significantly more green cells than control (451 ± 23 green cells per well for 0.8 µg DNA, P = 1.5 × 10^-5, and 442 ± 21 green cells per well for 1.6 µg DNA, P = 4.0 × 10^-5; Tukey post-hoc tests).
Hyaluronidase Increases the Percentage of Transduced NeuN-positive Neurons in vitro

In order to verify that the improvement in transduction efficiency was not restricted to an increase in the transduction of non-neuronal cells in our mixed cultures, we also assessed the proportion of transduced cells positive for NeuN, a neuronal marker. Primary neuronal cultures were transduced on DIV 12 with the third-generation pCDH1-MCS1-EF1-copGFP lentiviral vector at MOI = 30, applying 0, 10 or 30 U/ml hyaluronidase (N = 3 per group), and the number of green cells per well was assessed seven days after transduction (Fig. 5B). This relatively high MOI was chosen in order to produce conditions comparable to the situation in vivo, where locally high amounts of virus accumulate due to restricted diffusion within the target tissue from the injection needle. For the estimation of an approximate MOI, we considered the cerebral cortex to contain around 0.5-1 × 10^5 neurons per mm³ [33], and an injection of 2 × 10^6 TU of lentiviral vectors (e.g. 2 µl of vector with a titre of 1 × 10^9 TU/ml), which is expected to spread approximately 1 mm³ around the injection site. Altogether, this would result in an MOI of approximately 20-40 for neurons at the injection site. Under these conditions, the transduction efficiency of NeuN-positive cells increased from 63.4 ± 5.3% (control) to 72.5 ± 5.3% (10 U/ml) and 84.5 ± 5.3% (30 U/ml), as confirmed by one-way ANOVA (F(2,8) = 11.8, P = 0.008; Fig. 5A).

Toxic Effects Occur in vivo after Treatment with High Concentrations of Hyaluronidase

In order to visualise potentially toxic effects of intracerebral hyaluronidase injections, brains which received PBS or 4, 20, or 40 U hyaluronidase (or 4 µg kainic acid as a positive control) were perfusion-fixed 24 h after injection, and pictures of the Fluoro-Jade C-stained sections were taken. These images included the injection canal, the area where the hyaluronidase is expected to be at its highest concentration (Fig. 6). In all conditions a small number of positive cells were visible, including after PBS injection and injection of 4 U hyaluronidase. These positive (green) cell bodies were distributed within the area near the canal, but were also apparent at some distance. In contrast, 20 U produced slightly more positive cells, while 40 U resulted in a large number of green cells focused on the area immediately surrounding the injection site. Staining from injections containing 20 U hyaluronidase was similar to the effect of an injection of kainic acid, which is neurotoxic and served as a positive control.

Figure 2. (A) For each row, treatment refers to the point at which hyaluronidase was added, and analysis indicates when cells were assessed for toxicity. DIV 5-late cells were treated with hyaluronidase at the same time as DIV 5 cells but were allowed to recover for longer. (B) Three days after treatment with hyaluronidase, PI/Hoechst staining of neuronal cultures revealed increased cell death at 100 U/ml and 300 U/ml for DIV 5 and DIV 8 cultures, while 100 U/ml did not produce any difference from control levels for DIV 12 cells. Allowing cells treated on DIV 5 to recover for 7 days after treatment (DIV 5-late) showed toxic effects only at 300 U/ml, revealing that the potential damage at 100 U/ml is transient. *** p < 0.001, ** p < 0.01, n.s. not significant, compared to 0 U/ml respectively (ANOVA). (C) Representative images of hyaluronidase (10 U/ml) treated cells with PI and Hoechst staining (N = 3 per group). Scale bar = 10 µm. doi:10.1371/journal.pone.0053269.g002
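The in vivo MOI estimate quoted above is simple arithmetic; the sketch below reproduces it under the stated assumptions (a ~1 mm³ spread volume and the cited cortical neuron densities).

```python
# Illustrative sketch of the in vivo MOI estimate. The spread volume and
# neuron densities are the assumptions stated in the text, not measurements.

titre_tu_per_ml = 1e9          # final titre after 1:1 dilution [TU/ml]
injected_volume_ml = 2e-3      # 2 ul
tu_injected = titre_tu_per_ml * injected_volume_ml   # = 2e6 TU

spread_volume_mm3 = 1.0
for neurons_per_mm3 in (0.5e5, 1e5):                 # cortical density range
    neurons_reached = neurons_per_mm3 * spread_volume_mm3
    print(f"MOI ~ {tu_injected / neurons_reached:.0f}")
# prints MOI ~ 40 and MOI ~ 20, matching the quoted 20-40 estimate
```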
Local Injection of Hyaluronidase Produces Local Degradation of the Extracellular Matrix

For the investigation of the size of the area in which the ECM is degraded, the lower dose of 4 U was chosen, because it appeared less toxic than 20 U by Fluoro-Jade C staining. Brains were analysed 24 h after injection using biotinylated Wisteria floribunda agglutinin (WFA)/Streptavidin-Texas Red, which binds to components of the ECM and was used to reveal the degradation of the ECM, because direct staining of digested hyaluronan in adult brain is not possible [34]. As expected, staining with WFA revealed a robust ECM signal across the whole brain section, indicated by red staining between blue (Hoechst® dye) nuclei, with occasional very high density signals around a few selected cells. Injection of 4 U of hyaluronidase produced a clearly visible area lacking ECM signal as a result of the hyaluronidase activity. This area was located slightly below the needle track, consistent with a restricted area of hyaluronidase activity. The size of the area affected by hyaluronidase was variable, and the edges were too diffuse to allow exact quantitative analysis; however, the largest extent of the diffusion area was in the range of 500-1000 µm in diameter, which corresponds approximately to a spherical volume of 0.1-0.5 mm³ (Fig. 7).

Intracerebral Injections of Lentiviral Vectors to Assess Effects of Hyaluronidase in vivo

A concentrated lentiviral vector stock solution of pCDH1-MCS1-EF1-copGFP (2 × 10^9 TU/ml) was mixed 1:1 with PBS or hyaluronidase/PBS (4 or 20 U/µl), and 2 µl of this mixture was injected into the motor cortex of 11-week-old rats, resulting in an injection of 2 × 10^6 TU with or without 4 or 20 U hyaluronidase per site.

Discussion

To obtain optimal gene therapy with viral vectors, it is important to achieve the best possible transduction efficiency and to be able to modify the desired subset of cells. In some tissues this presents more of a challenge than in others, and in many circumstances mature neurons are difficult to transduce. Even before viral gene therapy emerged as a means for delivering genetic material to cells, it was known that chemical agents, in particular polycations like polybrene, DEAE-dextran or poly-L-ornithine, could enhance viral infectivity [35,36]. However, the practical use of these agents is limited due to their toxicity. We show that hyaluronidase treatment also improves the efficiency of viral transduction of neurons with lentiviral vectors, both in vitro and in vivo, and can be effective at low concentrations which are not toxic. Importantly, hyaluronidase treatment increases the percentage of transduced cells positive for NeuN, which is a marker for mature post-mitotic neurons [37]. These cells are often the primary target in potential applications of gene therapy in the neurosciences and are expected to be a major target of gene therapy in clinical neurology. Hyaluronidase has already been shown to be effective in improving transduction in other tissues, for example in studies using anti-tumour retroviruses in cellular pleura models [38], in improving the activity of oncolytic adenoviruses in a mouse tumour model [39], and in increasing the administration of AAV into skeletal muscle [40]. Previous studies of hyaluronidase in the CNS have shown it can increase the distribution of nanoparticles (54 nm diameter) when the particles are injected into rat brain after hyaluronidase treatment [41]. In that study, the size of the target area was increased by about 56%.
In contrast, we did not see an increased distribution of the viral particles, which could reflect the smaller size of the nanoparticles compared to lentiviral particles, the different brain region targeted (striatum, as opposed to motor cortical area), the more homogeneous distribution of nanoparticles due to their smooth surface (as opposed to that of lentiviral vector particles, which carry surface proteins), or the higher volume and amount of enzyme used in the nanoparticle study (5 µl of 20000 U/ml, resulting in a total of 100 U injected, as opposed to the 4 U in 2 µl per site used in our study). However, our in vivo toxicity results indicate that 40 U produces significant toxicity, which might confound the goal of gene therapy. It is important to note that in vivo the distribution volume and local concentration of hyaluronidase are not easy to quantify, and there may be a substantial concentration gradient of the injected enzyme between the centre of the injection and the diffusion edges. Abordo-Adesida et al. demonstrated that in vivo there is a saturation effect for transduction, with transduction efficiency unchanged as the number of particles injected increased from 10^6 to 10^7 particles per site [42]. One possibility is that disrupting the extracellular matrix with hyaluronidase would overcome this barrier, allowing surplus particles to reach more distant target neurons. However, we did not see an increase in the size of the target area in the presence of hyaluronidase. Instead, our data indicated that the enzyme increased the proportion of neurons within the targeted area that were positive for GFP, suggesting the enzyme enhanced access to these cells. If the extracellular matrix plays an important role in restricting viral tropism, then mature neurons, with a rather rigid and less permeable extracellular matrix, would be predicted to be more difficult to transduce than cells with a less dense ECM. This is consistent with what we saw in vitro, where the beneficial effect of hyaluronidase treatment gradually declined from DIV 5 to DIV 12, possibly reflecting the meshwork growing around the neurons and finally making up the PNNs together with other components that are not subject to cleavage by hyaluronidase. Glial cells and neurons have both been implicated in building up ECM material and PNNs [43,44]. It has been shown that between weeks 2 and 3 in vitro, cultures start to build up an ECM which is similar to that of the adult brain and mainly based on HA and chondroitin sulphate proteoglycans [45]. This time window also coincides with our observation that hyaluronidase treatment is effective at improving transduction efficiency during the second week in vitro. Hyaluronidase also improved the transfection efficiency of neurons using a standard Lipofectamine2000™ transfection protocol. This suggests that the improvement in efficiency is not specific to lentiviral particles, but may be a direct result of making the cell surface more accessible to particles larger than the pores in the intact ECM. In untreated cells, the ECM is described as creating a network with pores as small as 56 nm [46]. These pores would effectively exclude lentiviral particles (70-100 nm diameter [32]) and Lipofectamine2000™ particles (160-410 nm [31]) from reaching the surface of cells. Increased accessibility achieved by degrading the ECM network may be sufficient to allow larger particles to reach and interact with the cell membrane. Hyaluronidase could potentially have off-target effects that would limit its use.
The hyaluronidase used in this study was isolated from bovine testes and cleaves the 1-4 linkage between N-acetylglucosamine and D-glucuronic acid at random positions within the HA macromolecule. It also cleaves analogous molecular bonds within the macromolecules chondroitin, chondroitin-4- and -6-sulfates, and dermatan. However, these molecules form part of the extracellular matrix with a primarily scaffolding function and, thus, their cleavage is more likely a synergistic by-phenomenon than a drawback for our application. In gene therapy trials, it will be important to have a delivery mechanism which does not cause toxicity to the target cells. Our data indicate that hyaluronidase at the doses we used may have no or limited side effects (as seen in vitro from the PI/Hoechst staining for doses of 10 and 30 U/ml, Fig. 2, and in vivo from the Fluoro-Jade C staining for doses of 4 and 20 U, Fig. 6), and does not have long-lasting effects (given the reversibility of the facilitating effect 24 h after hyaluronidase treatment, Fig. 4). These data suggest that at the concentrations we used, hyaluronidase is not toxic to neurons. In addition, even the toxicity measurement of DIV 5-treated cells at the later time point (day 7 after treatment; Fig. 2) is reassuring that no long-lasting consequences arise from the one-off treatment. The unique properties of lentiviral vectors, in particular their large gene packaging capacity and their ability to transduce post-mitotic cells like neurons, make them ideal tools for gene therapy in the neurosciences, even more so in conjunction with the advent of non-integrating lentiviral vectors [47,48], which have a highly improved safety profile in terms of insertional mutagenesis of the host cell genome. Improving their efficiency for transducing neurons is a major goal, and our data suggest that co-administration of lentiviral vectors with hyaluronidase may be one step towards making gene therapy more effective in the central nervous system.

Figure 8. In vivo stereotactic injection into rat cortex of the pCDH1-MCS1-EF1-copGFP lentiviral vector does not result in wider spreading, but in an increased percentage of transduced NeuN-positive cells. (A) Volumetric analysis of the injection site revealed no difference between PBS and 4 or 20 U hyaluronidase (HYA). Inserts with rat brain images were taken from [49]. Scale bar = 400 µm. (B) The percentage of GFP/NeuN-positive cells is increased after co-injection of viral vector and hyaluronidase. *p < 0.05; Scale bar = 100 µm. doi:10.1371/journal.pone.0053269.g008
Cardiovascular disease guideline adherence and self-reported statin use in longstanding type 1 diabetes: results from the Canadian Study of Longevity in Diabetes cohort

Background
Older patients with longstanding type 1 diabetes have a high cardiovascular disease (CVD) risk, such that statin therapy is recommended independent of prior CVD events. We aimed to determine self-reported CVD prevention guideline adherence in patients with longstanding diabetes.

Research design and methods
309 Canadians with over 50 years of type 1 diabetes completed a medical questionnaire on the presence of lifestyle and pharmacological interventions, stratified into primary or secondary CVD prevention subgroups based on the absence or presence of self-reported CVD events, respectively. Associations with statin use were analyzed using multivariable logistic regression.

Results
The 309 participants had mean ± SD age 65.7 ± 8.5 years, median diabetes duration 54.0 [IQR 51.0, 59.0] years, and HbA1c of 7.5 ± 1.1 % (58 mmol/mol). 159 (52.7 %) participants reported diet adherence, 296 (95.8 %) smoking avoidance, 217 (70.5 %) physical activity, 218 (71.5 %) renin-angiotensin system inhibitor use, and 220 (72.1 %) statin use. Physical activity was reported as less common in the secondary prevention subgroup, and current statin use was significantly lower in the primary prevention subgroup (65.5 % vs. 84.8 %, p = 0.0004). In multivariable logistic regression, the odds of statin use were 0.38 [95 % CI 0.15–0.95] for members of the primary compared to the secondary prevention subgroup, adjusting for age, sex, hypertension history, body mass, HbA1c, cholesterol, microvascular complications, acetylsalicylic acid use, and renin-angiotensin system inhibitor use.

Conclusion
Despite good self-reported adherence to general CVD prevention guidelines, and against the principles of these guidelines, we found that statin use was substantially lower in those without a CVD history. Interventions are needed to improve statin use in older type 1 diabetes patients without a history of CVD.

Background
Cardiovascular disease (CVD), which includes myocardial infarction, coronary artery disease (CAD), stroke, and peripheral vascular disease, is often cited as the primary cause of mortality in type 1 diabetes mellitus [1-4]. Though it has been suggested that people with diabetes have a two- to fourfold excess risk of developing CVD, in the context of type 1 diabetes the magnitude of this risk approaches tenfold [5-7]. The amplification of lifetime CVD risk in type 1 diabetes may relate to a longer duration of exposure to hyperglycemia [7,8], in part owing to the relatively younger age at diagnosis. Thus, older patients with a long duration of type 1 diabetes are a unique group with an extremely high lifetime risk of CVD. More intensive cardiovascular protection measures are recommended for older patients with longstanding diabetes regardless of their CVD history, particularly with regard to pharmacotherapy for lipid control [9,10]. Clinical guidelines on vascular protection in diabetes recommend the use of 3-hydroxy-3-methylglutaryl-coenzyme A reductase inhibitors (commonly referred to as "statins") in patients who are over age 40, have a long duration of diabetes, or are younger but have existing microvascular complications or additional risk factors [10-12].
The benefits of statin use and of lowering LDL cholesterol (LDL-C) in diabetes are strongly supported by studies which demonstrate the effectiveness of statins in reducing the risk of vascular events and mortality regardless of prior CVD history [13,14]. Other general strategies for vascular protection include smoking cessation, dietary modification, regular physical exercise, maintenance of optimal glycemic control, blood pressure, and weight, and the use of renin-angiotensin system (RAS) inhibitors such as angiotensin-converting enzyme inhibitors (ACEi) or angiotensin receptor blockers (ARBs). The evidence justifying acetylsalicylic acid use is focused mainly on secondary rather than primary CVD prevention [11,12]. Ultimately, in the context of long diabetes duration and older age, a key emphasis of guidelines is the use of pharmacotherapy, in particular statin therapy, independent of CVD history.

Despite strong evidence for CVD primary prevention strategies, studies in general practice settings have noted low adherence to these guidelines, possibly due to the preventative rather than therapeutic nature of the interventions, expense, concerns about efficacy and side-effects, and limitations in physician-patient relationships [15-18]. Suboptimal statin adherence has been shown in CVD primary prevention (those without a history of CVD), with long duration of therapy, and in elderly patients. Undertreatment substantially increases the risk of adverse cardiovascular outcomes and mortality [18-23]. To our knowledge, there are no studies which examine attention to CVD prevention guidelines, and specifically statin use, in patients with longstanding type 1 diabetes. We aimed to determine whether guideline adherence and self-reported statin use differed between those with and without a history of CVD in the baseline phase of the Canadian Study of Longevity in Type 1 Diabetes cohort, consisting of patients with 50 years or more of type 1 diabetes at uniformly high risk of CVD. Disparity in this comparison may indicate suboptimal implementation of current clinical practice guidelines and a disregard for longstanding diabetes as a fundamental CVD risk factor.

Study overview
This study was conducted as a secondary analysis of the baseline data from the Canadian Study of Longevity in Type 1 Diabetes cohort (JDRF operating grant 17-2013-312). The goal of this analysis was to describe adherence to CVD prevention guidelines, with an emphasis on self-reported statin use, in Canadians living with type 1 diabetes for 50 years or more.

Participant recruitment
Between April 2013 and December 2014, patients were contacted across Canada through public advertisements, social media, and mailings to health care professionals including primary care physicians, endocrinologists, and pharmacists. Akin to other cohorts, our study included patients with a history of at least 50 years of insulin dependence, documented through medical records or corroborated by a family member [24,25]. For the Canadian Study of Longevity in Type 1 Diabetes, we anticipated a total cohort sample size of approximately 300 participants based on Canadian 1962 census data and contemporaneous age-specific incidence rates of type 1 diabetes and survival curves [26,27]. A total of 427 people initially contacted us by toll-free number, mail, or e-mail, and 386 eligible participants agreed to participate.
By the time of analysis, 309 questionnaires had been returned, and these participants were included in the analysis. Participant flow is summarized in Fig. 1. Participant recruitment and data entry remains ongoing. Participants provided written informed consent, and the study protocol was approved by the ethics committee of the Mount Sinai Hospital (Toronto, ON, Canada).

Data collection
Data were collected through a 35-page questionnaire in which participants were asked about their diabetes management, family history of CVD, lifestyle and smoking habits, medication use, and history of cardiovascular disease (angina and heart attack), related surgeries (cardiac/leg bypass and angioplasty), hypertension, and other medical history including cerebrovascular disease. Furthermore, we obtained from participants' healthcare providers recent clinical, physical, and laboratory measurements including blood pressure, lipid profile, HbA1c, estimated glomerular filtration rate (eGFR), and fundoscopy examination results. Lifestyle variables included questions related to diet, smoking, and physical activity: Diet was assessed using seven questions concerning nutrient and caloric intake, meal patterns, and meal content. Dietary adherence was defined by presence of (1) self-reported consumption of fruits and vegetables and (2) self-reported effort to moderate consumption of dietary carbohydrates and fats. Participants reported presence or absence of current smoking and physical activity, and provided body weight and height from which BMI was calculated. To define pharmacotherapy use, participants were asked to list all current medications, allowing determination of acetylsalicylic acid, statin, ezetimibe, fibrate, and RAS inhibitor use; history of side-effects, drug intolerance, and duration of therapy were not ascertained. As participants were over the age of 40, according to guidelines all participants were eligible for statin use [10][11][12]. Also according to guidelines, RAS inhibitor use is indicated for participants older than 55 years, with microvascular complications, or with prior CVD history. Subjects meeting these criteria and reporting use of a RAS inhibitor were considered adherent [11]. Nephropathy was defined by the presence on laboratory tests of albumin to creatinine ratio (ACR) ≥2 mg/mmol or an age-adjusted glomerular filtration rate (GFR) <60 ml/min [28]. Presence of retinopathy, and its classification as proliferative or non-proliferative, was obtained from the most recent eye specialist examination. Presence of symptomatic diabetic neuropathy was determined through the use of the 15-item, self-administered Michigan Neuropathy Screening Instrument (MNSI) questionnaire. Neuropathy was defined by a score ≥3 [29].

Primary and secondary prevention subgroups
Participants were stratified into primary and secondary prevention subgroups for comparison. The secondary prevention subgroup consisted of participants who reported any previous diagnosis of coronary artery disease, heart attack or angina, history of cardiac or leg angioplasty, bypass graft surgery, or cerebrovascular disease including stroke. Participants without any of these factors were considered to be in the primary prevention subgroup. As this study was a secondary analysis of cohort data, there were no specific questions about cerebrovascular incidents; history of such events was determined from an open-ended question for participants to report all known medical conditions and history.
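As a compact illustration of the complication definitions above, the minimal Python sketch below encodes the stated thresholds. The Participant record and its field names are hypothetical; only the cut-offs (ACR ≥2 mg/mmol or GFR <60 ml/min for nephropathy, MNSI score ≥3 for neuropathy) come from the text.

```python
# Hypothetical per-participant record; field names are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Participant:
    acr_mg_mmol: Optional[float]   # albumin-to-creatinine ratio (mg/mmol)
    gfr_ml_min: Optional[float]    # age-adjusted glomerular filtration rate (ml/min)
    mnsi_score: Optional[int]      # Michigan Neuropathy Screening Instrument score

def has_nephropathy(p: Participant) -> Optional[bool]:
    """ACR >= 2 mg/mmol or GFR < 60 ml/min, per the definition above."""
    if p.acr_mg_mmol is None and p.gfr_ml_min is None:
        return None  # missing data: leave undetermined rather than impute
    return ((p.acr_mg_mmol is not None and p.acr_mg_mmol >= 2.0)
            or (p.gfr_ml_min is not None and p.gfr_ml_min < 60.0))

def has_neuropathy(p: Participant) -> Optional[bool]:
    """MNSI questionnaire score >= 3, per the definition above."""
    return None if p.mnsi_score is None else p.mnsi_score >= 3
```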
Statistical analysis
SAS version 9.2 (SAS Institute, Cary, NC, USA) was used to perform statistical analysis. Descriptive characteristics were reported as mean ± standard deviation (SD), median and interquartile range (IQR), or as frequency and percent. Statistical comparisons between primary and secondary prevention subgroups were made using the Student's t test, the Mann-Whitney U test, or the χ²-test, depending on the distribution of the variable. For the adherence index, the χ²-test was used to compare adherence rates between primary and secondary prevention subgroups; Cohen's kappa coefficient was used to assess agreement among each index. Logistic regression was performed to assess the association of CVD history with self-reported statin use: first, univariable models were used to identify other participant characteristics that were significantly associated with statin use. In order to adjust for these potential confounders, these characteristics were then included as independent variables along with CVD history in a final multivariable model, with statin use as the dependent variable. Age, sex, and HbA1c were included a priori, as well as all significant predictors (p < 0.05) in univariable analyses. Multicollinearity among the independent predictor variables was assessed. Odds ratios (OR) are reported along with their 95 % confidence intervals. As a sensitivity analysis, the multivariable model was also run using a stepwise variable selection model. As a second sensitivity analysis, the logistic regression was restricted to the primary prevention subgroup. P-values < 0.05 were considered statistically significant. The sample size was estimated to have a power of 0.81 to detect at least a 15 % difference in proportion of statin use between primary and secondary prevention subgroups, based on the assumptions of approximately 50 % statin use [22] and 50 % prevalence of CVD in longstanding type 1 diabetes [24]. Missing data were assumed to be missing at random. Systolic blood pressure (SBP) and diastolic blood pressure (DBP) data were incomplete in that 150 (49 %) of values were unreported in the physical exam reports from health care providers. For this reason, self-reported history of hypertension was instead used in the multivariable model, but we performed a sensitivity analysis using SBP. Available-case analysis was used to report patient characteristics, guideline adherence values, and univariable screening, whereas complete-case analysis was used for multivariable regression. To honour variations in threshold values reported by different international organizations that provide clinical practice guidelines, a sensitivity analysis was performed using higher HbA1c targets of 7.5 % (58 mmol/mol), 8.0 % (64 mmol/mol), and 8.5 % (69 mmol/mol), blood pressure <140/90 mmHg, and BMI <30 kg/m² as cut-offs [12,30,31]. Furthermore, as some guidelines recommend special considerations for statin use for patients of extreme age, we performed a sensitivity analysis by comparing statin use only among participants who were 75 years or younger [12].

The distribution of cardiovascular conditions is presented in Fig. 2. Of the entire cohort, 105 (34 %) had cardiovascular conditions (and were included in the secondary prevention subgroup).
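For readers who want to reproduce the flavor of the calculations described in the statistical analysis above, the Python sketch below shows the stated power assumptions and the shape of the final multivariable model using statsmodels. The subgroup sizes and column names are assumptions for illustration; the study itself used SAS 9.2, and this normal-approximation power calculation need not match the reported 0.81 exactly, since the original method is not specified.

```python
# Two-proportion power check under the stated assumptions:
# ~50% statin use, a 15% difference to detect, and ~50% CVD
# prevalence in a cohort of 309 (so roughly 155 per subgroup).
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.65, 0.50)   # Cohen's h for 65% vs 50%
power = NormalIndPower().power(effect_size=effect, nobs1=155,
                               alpha=0.05, ratio=1.0)
print(f"approximate power: {power:.2f}")

# Shape of the final multivariable model (column names hypothetical);
# `df` would hold one row per participant with complete cases.
# import numpy as np
# import statsmodels.formula.api as smf
# model = smf.logit(
#     "statin_use ~ primary_prevention + age + sex + hba1c + hypertension"
#     " + bmi + total_cholesterol + microvascular + asa_use + ras_use",
#     data=df).fit()
# odds_ratios = np.exp(model.params)   # adjusted ORs per covariate
```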
Within this subgroup, 78 (75.0 %) reported history of heart attack or angina, 52 (50.0 %) had cardiac bypass surgery, 40 (41.2 %) had cardiac angioplasty, 16 (16.5 %) had leg bypass surgery, 21 (21.4 %) had leg artery angioplasty, and 2 (1.9 %) had cerebrovascular disease. These two individuals also reported history of heart attacks, and one of them had a cardiac bypass surgery.

Guideline adherence
Adherence to guideline recommendations is summarized in Table 2. The following results are reported for the cohort as a whole: Under the domain of lifestyle recommendations, 52.7 % of participants reported following a recommended diet, 96.8 % did not currently smoke, and 70.5 % reported current physical activity. Based on clinical exam and laboratory reports, 35.0 % had optimal HbA1c ≤7 % (53 mmol/mol), 47.8 % had blood pressure ≤130/80 mmHg, 57.7 % had LDL-C ≤2.0 mmol/L, and 50 % had optimal BMI <25 kg/m². In terms of pharmacotherapy, 98.1 % of participants were eligible for RAS inhibitors and 72.5 % of these participants were using an ACEi or ARB. Finally, 72.1 % of participants reported statin use. Among all participants, 62.5 % of guideline recommendations were met, which was the same between primary and secondary prevention subgroups. Agreement among each component of the adherence index was low (κ < 0.20), except for between statin use and LDL-C (κ = 0.29). The primary and secondary prevention subgroups had similar self-reported adherence to all the above recommendations except for physical activity and statin use. Specifically, compared to the secondary prevention subgroup, the primary prevention subgroup had a significantly higher proportion of participants who were physically active and a lower prevalence of statin use. In sensitivity analysis, results did not differ when alternate target thresholds were used for HbA1c, blood pressure, and BMI. Furthermore, when statin use was compared only in participants 75 years or younger, the primary prevention subgroup still had significantly lower statin prevalence than the secondary prevention subgroup (117 (65 %) vs 73 (86.9 %), p < 0.001).

Factors associated with statin use
Univariable analyses demonstrated that female sex, hypertension history, higher BMI, lower LDL-C, lower HDL-C, lower total cholesterol, presence of at least one microvascular complication, nephropathy, retinopathy, absence of CVD history, acetylsalicylic acid use, and RAS inhibitor use were associated with the presence of statin use (Table 3). These variables, in addition to age and HbA1c, were used to create a multivariable model as shown in Table 4, which showed that only a higher cholesterol level (adjusted OR = 0.39 [0.25, 0.59] per unit increase in mmol/L, p < 0.001) and absence of CVD history (adjusted OR = 0.38 [0.15, 0.95] for membership in the primary prevention subgroup, p = 0.04) were independently associated with lower statin use. Both factors remained significantly associated with statin use after a stepwise selection process. When the logistic regression was restricted to participants in the primary prevention subgroup, sex, age, total cholesterol, BMI, RAS inhibitor use, and acetylsalicylic acid use were associated with statin use in univariable analysis. When these were included in the multivariable model, only lower total cholesterol was significantly associated with statin use.
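As a worked check, the unadjusted odds ratio implied by the overall statin-use proportions reported above (65.5 % in the primary vs. 84.8 % in the secondary prevention subgroup) is close to the covariate-adjusted estimate of 0.38:

```latex
\[
\mathrm{OR}_{\text{unadjusted}}
  = \frac{0.655 / (1 - 0.655)}{0.848 / (1 - 0.848)}
  = \frac{1.90}{5.58}
  \approx 0.34
\]
```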
Discussion
In the cross-sectional analysis of 309 Canadians with longstanding type 1 diabetes uniformly considered to be at high CVD risk, we observed similar adherence to most general guideline recommendations between the primary and secondary prevention subgroups. However, against prevailing recommendations, the primary prevention subgroup had markedly insufficient statin use, with approximately one-third the odds relative to the secondary prevention subgroup. Such odds persisted in adjusted analysis to account for potential confounding variables: age, sex, hypertension history, greater BMI, higher HbA1c and total cholesterol, presence of microvascular complications, and acetylsalicylic acid and RAS inhibitor use.

Suboptimal self-reported statin use in the context of the literature
Suboptimal statin use has been commonly reported in a variety of study populations, and is associated with financial, drug-related, health system-related, condition-related, and patient and physician-related factors [17].

[Fig. 2: Prevalence of cardiovascular disease (CVD) conditions among the 105 participants with CVD]

Statin continuation rates in large clinical trials typically range from 70 to 90 % of trial participants [32][33][34]. Cross-sectional studies in patients with diabetes in real-world clinical settings have reported much lower rates, generally approximating 50 % prevalence of statin use [35][36][37]. Most of these studies have been conducted in the context of type 2 diabetes, though some limited data suggest even lower statin use in type 1 diabetes, with estimated prevalence below 50 % [38]. Our study addresses this paucity of evidence on statin use in type 1 diabetes by studying individuals with longstanding diabetes who are at high CVD risk and uniformly require statin use for CVD prevention. It is encouraging to note that statin prevalence in our participants (72.1 %) exceeds that in most observational studies, and even approaches the high rates observed in statin clinical trials. Nonetheless, the concern remains that there was a key disparity between primary and secondary prevention subgroups, with about 20 % lower statin use in those without a history of CVD. This implies a significant care gap in the primary prevention of CVD in patients with type 1 diabetes, which puts these patients at high risk of a first CVD event [19,20]. As an estimate of the potential clinical implication of this care gap, a simulation study demonstrated that a 25 % increase in statin prevalence in a primary prevention cohort is predicted to avert up to 53 % more CVD-related deaths over 10 years [39]. We therefore hypothesize that increasing statin use among type 1 diabetes patients without a history of CVD to approximate that observed in patients with CVD could represent a substantial strategy to reduce CVD mortality. From a public health perspective, our results suggest that targeting improved statin use in longstanding type 1 diabetes presents an opportunity to decrease CVD incidence and mortality.

Comparison to literature for other general guideline recommendations
Regarding other CVD prevention recommendations, adherence in our participants was similar to that in other cross-sectional studies in outpatient diabetes settings [35][36][37]. In fact, it is reassuring that as a whole, our participants had HbA1c, BMI, blood pressure, and lipid measures which were close to or better than guideline recommendations.
Remarkably, our cohort had a RAS inhibitor prevalence of 72.5 % amongst eligible participants, similar between primary and secondary prevention subgroups, which approximates that previously found in type 1 diabetes populations [40]. Interestingly, high ACEi and ARB use was uniform between the two subgroups despite greater prevalence of hypertension and nephropathy in the secondary prevention subgroup. This is in contrast to lower statin use in the primary compared to secondary prevention subgroup, even though the two subgroups had similar levels of LDL-C. Perhaps this phenomenon suggests a strong recognition by clinicians and patients of the protective benefits of RAS inhibition, and in contrast, an incorrect but prevailing clinical view that statin use should be limited to those with CVD or dyslipidemia. These data strongly support the notion that clinicians and patients may not appreciate long diabetes duration as a significant CVD risk factor and are thus reluctant to use statins for primary prevention, a view which has been disproven by the results of many large studies [7][8][9].

Study limitations
While this study is the first to investigate attention to clinical practice guidelines and self-reported statin use in the context of longstanding type 1 diabetes, and it used mixed methods of data acquisition including self-report, validated questionnaires, and laboratory measures, we recognize some limitations and sources of potential bias. First, our investigation of extreme diabetes duration carries a risk of selection bias (specifically, incidence-prevalence or survival bias), whereby participants may have had better life-long management of CVD risk compared to those who did not survive to 50 years of diabetes duration. However, though the magnitude of adherence and CVD prevalence may be affected by such incidence-prevalence bias, it is unlikely that it would affect the observed association of statin use with the primary and secondary prevention subgroups. Secondly, we acknowledge the risk of recall bias and consequent misclassification error, though we expect these to be small in magnitude given the discernible nature of CVD and current medication use, and we emphasize that such recall bias is non-differential between our analytical subgroups and that the odds ratios presented are unlikely to be influenced by this bias. Third, ascertainment of cerebrovascular disease events may have been incomplete, but the low prevalence of these events is in keeping with known rates from epidemiological study of type 1 diabetes [41]. Fourth, while we believe that our analysis has considered the most fundamental confounders through adjustment, there remains the possibility of unmeasured and residual confounding. For instance, other studies have found retirement and female sex to be associated with lower statin adherence and prescription, respectively [42,43]; these variables may require further study in our cohort. Fifth, as this study was a secondary analysis, we only determined prevalence of statin use rather than a direct measure of medication adherence, such as proportion of days covered by statin prescriptions, or reasons for statin non-use, such as medication side-effects. Finally, our results are specific to those with longstanding type 1 diabetes and may not extend to T2DM or older adult populations without diabetes.
Conclusions
Adherence to cardiovascular protection guidelines, especially statin use, is of fundamental clinical importance because under-treatment in high-risk individuals can worsen cardiovascular disease risk and increase the burden on the healthcare system [17,23]. This study indicates that Canadians with longstanding type 1 diabetes have relatively high self-reported adherence to guidelines and statin prevalence as a whole, but there is inappropriately lower statin use for CVD primary prevention than secondary prevention. In view of this apparent clinical disregard for longstanding diabetes as a fundamental CVD risk factor, our results may serve to encourage improved adherence to evidence-based recommendations for primary prevention of CVD with statins in this population. Future research with this unique cohort should focus on elucidating the causes of suboptimal statin use, and interventions should address the statin disparity between primary and secondary CVD prevention in patients with longstanding type 1 diabetes.
Structural Characterization of Outer Membrane Components of the Type IV Pili System in Pathogenic Neisseria

Structures of the type IV pili secretin complexes from Neisseria gonorrhoeae and Neisseria meningitidis, embedded in outer membranes, were investigated by transmission electron microscopy. Single particle averaging revealed additional domains not observed previously. Secretin complexes of N. gonorrhoeae showed a double ring structure with a 14–15-fold symmetry in the central ring, and a 14-fold symmetry of the peripheral ring with 7 spikes protruding. In secretin complexes of N. meningitidis, the spikes were absent and the peripheral ring was partly or completely lacking. When present, it had a 19-fold symmetry. The structures of the complexes in several pil mutants were determined. Structures obtained from the pilC1/C2 adhesin and the pilW minor pilin deletion strains were similar to wild-type, whereas deletion of the homologue of N. meningitidis PilW resulted in the absence of secretin structures. Remarkably, the pilE pilin subunit and pilP lipoprotein deletion mutants showed a change in the symmetry of the peripheral ring from 14 to 19 and loss of spikes. The pilF ATPase mutant also lost the spikes, but maintained 14-fold symmetry. These results show that secretin complexes contain previously unidentified large and flexible extra domains with a probable role in stabilization or assembly of type IV pili.

Introduction
Neisseria species are Gram-negative β-proteobacteria, whose pathogenic members Neisseria meningitidis, which normally inhabits the human nasopharynx, and Neisseria gonorrhoeae, which normally colonizes urogenital mucosal surfaces, are responsible for bacterial meningitis and septicemia, and the sexually transmitted disease gonorrhea, respectively. During the infection process, several factors contribute to the interaction with the host cells [1]. Among these factors are type IV pili, which mediate binding of the bacteria to the host cells. Type IV pili are long fibrous structures extending from the bacterial surface which can be extended and retracted [2,3]. They are involved in a variety of processes; not only do they mediate cellular attachment to tissue receptors [1,4], but they are also involved in several other processes, including bacterial autoagglutination [5,6], twitching motility [7], biofilm formation [3,8,9,10], and natural competence for DNA uptake [11,12,13]. Type IV pili are dynamic structures which consist of approximately 500-2000 subunits of the major pilin protein, PilE [14], and which are assembled and disassembled by a complex machinery of approximately 10 conserved core proteins and several additional proteins [2,3,15]. This machinery shows similarity to the complexes involved in secretion of proteins via the type II secretion pathway [16,17]. The nomenclature of components of type IV pili systems often differs between organisms. In this manuscript we will refer to the N. gonorrhoeae genes and proteins if not indicated otherwise. The first step of pilus assembly is the insertion of the pilin into the cytoplasmic membrane. After membrane insertion, the leader peptide is cleaved at the cytosolic side of the membrane and the new N-terminal amino acid is methylated by the pre-pilin peptidase PilD [18,19]. The PilE subunits are assembled and extruded from the inner membrane by the PilF hexameric ATPase (a homologue of GspE, and a member of the AAA chaperone/mechanico-enzyme family) with the aid of a polytopic inner membrane protein, PilG [20].
Remarkably, PilT, an ATPase similar to PilF, is involved in the disassembly of the PilE subunits at the cytoplasmic membrane. Disassembly takes place at a rate of approximately 700 pilin subunits/s, resulting in retraction of the pilus with a force of over 100 pN [21,22]. Several other proteins, called pseudo-pilins or minor pilins, are similarly processed by PilD and can also be integrated into the growing pilus, and were proposed to affect pilus dynamics by influencing the membrane-localization and/or polymerization state [23]. The pilus passes the outer membrane through PilQ [24,25]. PilQ is one of the most abundant Neisserial outer membrane proteins, and it has previously been estimated that PilQ comprises 10-13% of the total outer membrane proteins [26]. PilQ is a member of the GspD secretin superfamily of integral outer membrane proteins involved in type IV pili and in type II and type III secretion systems [27]. Transmission electron microscopy (TEM) of purified members of the secretin superfamily, such as XcpQ and PilQ from Pseudomonas aeruginosa [28], PulD from Klebsiella oxytoca [29,30,31], the pIV filamentous phage protein [32], GspD from Vibrio cholerae [33] and PilQ of N. meningitidis [24,25,34,35], indicated that these proteins form a multimeric ring-like structure. A 3D structure of the N. meningitidis PilQ (Nme PilQ) was determined by using single particle averaging methods applied to transmission electron microscopy (EM) images of the purified multimer visualized by cryo-negative EM staining. This structure showed 4-fold rotational symmetry (and 12-fold quasi-symmetry) with four 'arm'-like structures extending from the structure and a large central cavity which was closed on both sides [24,25,34,35]. The observed structure was flexible, and showed conformational changes upon interaction with isolated pili [25] and purified Nme PilP [36]. A higher resolution 3D structure of a secretin was obtained for K. oxytoca PulD [29,30,31]. This complex consists of a dodecameric structure composed of a closed disc with ring-like structures on both sides. The two rings form chambers on either side of a central plug that is part of the middle disc. A recently published dodecameric cryo-EM structure of the purified GspD secretin of Vibrio cholerae shows a 200 Å long complex with a periplasmic domain, an outer membrane domain and a unique extracellular cap. The structure was obtained in its "closed" state and has an outer diameter of 155 Å. It has a prominent periplasmic gate and a conserved constricted region. It was proposed that this region interacts with the substrate and undergoes conformational changes during toxin secretion [33]. Members of the secretin superfamily often interact with small lipoproteins, also known as pilotins, or pilot proteins. These lipoproteins are involved in oligomerization, stabilization, and/or outer membrane localization of the secretin. For example, the K. oxytoca PulD requires the PulS pilotin for proper outer membrane association; in its absence, PulD remains associated with the inner membrane [37]. Similarly, the Shigella flexneri MxiM and Yersinia enterocolitica YscW pilotins are required for outer membrane localization of the Type III secretion secretins MxiD and YscC, respectively [38,39]. The interaction between MxiM and MxiD has been studied using NMR spectroscopy [40]. It has been demonstrated that Nme PilP and Nme PilW interact with Nme PilQ.
Purified Nme PilP was shown to interact with Nme PilQ and was proposed to localize to the cap region of the Nme PilQ structure [36]. Although Nme PilQ does not need Nme PilP for its stabilization or membrane localization, Nme pilP mutants showed a loss of piliation and of natural competence [36]. In a Nme pilW deletion mutant, the total amount of outer membrane localization of Nme PilQ monomers was not changed, but the stability of the Nme PilQ multimer was strongly affected [41]. Another protein that has been proposed to interact with PilQ is PilC. Two copies of pilC (pilC1 and pilC2) are found in pathogenic Neisseria species. In N. gonorrhoeae, each copy can function as a pilus tip adhesin, while in N. meningitidis, only PilC1 promotes adhesion [42]. The Nme PilC proteins are associated with the outer membrane but can also be recovered from purified pili, where they seem to be located at the top of the pilus [43]. Although the observations made by different authors have been useful in establishing that secretins adopt ring-like structures with 12–14-fold symmetry, there are still many remaining questions. All structural information about the secretins obtained to date has been obtained from purified proteins, and structural information about the interaction of the secretins with other components is lacking. Multi-subunit membrane complexes can lose subunits during the purification procedure [44]. Therefore we set out to study the structure of the PilQ secretin within the membrane using transmission electron microscopy and single particle averaging. To obtain further information on the PilQ complex, we studied the structure of the complex in membranes derived from different pil deletion mutants. Implications for the assembly and structure of the observed PilQ mega-complex are discussed.

Transmission electron microscopy on isolated membranes and whole cells of Neisseria gonorrhoeae
To study the structure of the PilQ secretin of N. gonorrhoeae in its native environment, total membranes were isolated and separated on a sucrose gradient. Fractions containing the highest amount of PilQ (from 45 to 51% (w/v) sucrose) were collected and concentrated. This fraction contained both inner and outer membranes, as determined by antibodies against SecY and DsbA (inner membrane markers) and Omp85 and Imp (outer membrane markers). Although several methods, including different disruption methods in combination with a large variety of density gradients, were tested to separate inner and outer membranes, no complete separation was obtained. PilQ-containing fractions were analyzed with transmission electron microscopy. Both inner membranes, which appear as vesicles, and outer membranes, which appear as flattened sheets, could be identified [45]. Roughly 25% of the vesicles seem to be derived from inner membranes. The membranes form intact closed vesicles, as upon air-dried negative staining the membranes collapse and become superimposed. This can be seen at the edges, where a white rim marks the curvature (Figure 1A). The outer membranes contained prominent stain-filled indentations (Figure 1A) which were absent in the inner membranes, with an average density of 350 indentations per µm². Since these stain-filled indentations were most likely formed by the PilQ secretin, a pilQ deletion strain was constructed.
Comparison of the outer membrane enriched samples of MS11 and the pilQ deletion strain confirmed both the abundance of PilQ in the outer membrane samples, and the absence of PilQ in the deletion strain (see Figure S1A). Isolated membranes of the pilQ deletion strain were analyzed by electron microscopy (Figure 1B). The stain-filled indentations were absent in the membranes of the pilQ deletion strain, demonstrating that they are indeed related to the presence of PilQ. Interestingly, the stain-filled indentations were also evident on whole cells of Neisseria gonorrhoeae when observed using electron microscopy (Figure 2). When comparing piliated and non-piliated cells, some of the thin, long type IV pili structures on the piliated cells seemed to extend from the stain-filled indentations. These results further demonstrated the outer membrane localization of the stain-filled indentations (Figure 2A and 2B). To further confirm that these indentations contain PilQ, nanogold labeling was performed on the outer membrane enriched fractions using an N. meningitidis PilQ monoclonal antibody [46]. This monoclonal antibody specifically cross-reacts with N. gonorrhoeae PilQ (see Figure S1B). Here we observed labeling of the indentations in PilQ-containing membranes (see Figure 1C). When similar experiments were performed in the absence of the PilQ antibody, only very low levels of nano-gold labeling were observed (see Figure S3A). Similar very low levels of nano-gold labeling were observed for inner membranes and for membranes derived from the pilQ deletion strain (Figure S3B). Together, this showed that labeling is specific for the presence of PilQ in the indentations. Since the gold labeling influences the alignment procedure, single particle averaging was not performed on gold-labeled particles; instead, about a hundred images were visually analyzed (Figure 1C, see also Figure S2). Labeling was strongly reduced for the observed inner membranes, or when control experiments were performed in the absence of the PilQ antibody (Figure 1C).

Projection structure of the N. gonorrhoeae PilQ complex
To further analyze this PilQ-containing structure, a large data set of about 20,000 single projections of the stain-filled indentations was obtained from EM images and analyzed by single particle analysis. After several cycles of multi-reference alignment, multivariate statistical analysis and classification, final class sums from all analyzed particles were obtained (Figure 3A). The 2D map shows a circular particle composed of a double ring with extending spike-like densities. The central ring has a diameter of 150 Å and has a large central cavity, whereas the peripheral ring has a diameter of 210 Å. The spikes further extend the diameter to 310 Å. The second ring has a 14-fold symmetry, while the spike-like densities show a 7-fold symmetry. After applying 7-fold symmetry, the features of both the second ring and the spikes improve (Figure 3B). When this figure was used as a reference for further improvement in a subsequent alignment procedure, it appeared that the spike features became stronger, but at the cost of the resolution of the peripheral ring, which became less well defined (Figure 3C). This suggests that the structure has some flexibility between the second ring and the spikes. The symmetry of the central ring could not be resolved from either the class averages without symmetry applied or from the class averages with 7-fold symmetry applied.
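As an illustration of what "imposing n-fold symmetry" means in this context, the minimal Python sketch below averages a 2D class average with copies of itself rotated by multiples of 360/n degrees. This is a generic rotational-averaging illustration, not the GRIP implementation used in the study.

```python
import numpy as np
from scipy.ndimage import rotate

def impose_symmetry(image: np.ndarray, n: int) -> np.ndarray:
    """Average `image` over in-plane rotations of 360/n degrees.
    If the particle truly has n-fold symmetry, its features reinforce;
    otherwise they blur out, which is the basis of the symmetry tests."""
    acc = np.zeros(image.shape, dtype=float)
    for k in range(n):
        # reshape=False keeps the frame fixed so rotated copies superimpose
        acc += rotate(image, angle=360.0 * k / n, reshape=False, order=1)
    return acc / n

# e.g. sym7 = impose_symmetry(class_average, 7)   # cf. Figure 3B
# Comparing impose_symmetry(ring, m) for several candidate m indicates
# which symmetry best reinforces the ring densities (cf. Figure 4F-J).
```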
In an attempt to determine the symmetry of the central ring, the second ring was masked out during analysis. After repeated alignment and classification, the final projection map showed two striking features. First, densities in the central ring come into focus (Figure 3D). At least 11 densities are well separated, with an average center-to-center distance of about 25 Å (red bars, Figure 3E). However, in two areas the features are not well resolved (blurred red bars), despite the fact that we increased the number of analyzed projections from 20,000 to 36,000. This indicates that at the current resolution of this map, which is in the range of the 25 Å center-to-center distance of the central ring densities, we cannot prove the symmetry. However, it appears most likely that the symmetry is 14 or 15. By imposing 14-fold symmetry, as performed in Figure 3F, the features become stronger as compared with any other imposed symmetry.

Transmission electron microscopy on isolated membranes of N. meningitidis
To enable us to compare the previously published structure of the purified Nme PilQ complex with the structure observed in the membrane, membranes of N. meningitidis were isolated and analyzed by transmission electron microscopy. The membranes of N. meningitidis also showed the presence of indentations, but in much smaller numbers compared to N. gonorrhoeae. Moreover, the pores were less homogeneously distributed in the membrane, and some of them were found in small clusters (Figure 4A). Single particle analysis showed a structure composed of only one ring (Figure 4B) or with an additional second ring in about 20% of the data set (Figure 4C). Some particles (10%) showed an incomplete second ring (Figure 4D). The central and peripheral rings have the same diameters as observed in the N. gonorrhoeae structure (Figure 4E). Remarkably, there was no indication of spikes attached to the second ring. Comparison with the previously published purified Nme PilQ structure, which has a diameter of approximately 15.5 to 16.5 nm and a 6.0- to 6.5-nm-diameter cavity, showed that PilQ forms the central ring of both the N. meningitidis and N. gonorrhoeae structures, whereas the peripheral ring and the spikes are formed by additional proteins. Remarkably, the symmetry number of the peripheral ring observed for N. meningitidis was substantially larger than that observed for N. gonorrhoeae, indicating that this ring is possibly composed of a larger number of copies of the same or a smaller protein. Symmetry analysis performed to evaluate the copy number shows that imposing 2-, 3-, or 7-fold symmetry does not enhance this feature. Since the motif is smaller than in N. gonorrhoeae, symmetries above 14 were evaluated (Figure 4F-J). This approach strongly pointed to a 19-fold symmetry in the second ring. Based on the high similarities of the PilQ proteins of N. meningitidis and N. gonorrhoeae (89% identity and 91% similarity), the observed differences were unexpected. To ensure that the observed differences in the PilQ complexes in the membranes of the N. gonorrhoeae MS11 and the N. meningitidis HB-1 strains were species-specific, membranes of different strains were isolated. Two additional N. meningitidis strains, strain H44/76, the wild-type parent of HB-1 (to ensure that the absence of the capsule locus did not affect PilQ appearance), and wild-type strain M986, belonging to a different clonal lineage, were tested. Furthermore, N. gonorrhoeae strain FA1090 was examined.
Both N. meningitidis strains gave results very similar to those of strain HB-1. Similarly, FA1090 gave results identical to those obtained for MS11. To further exclude that the strains used in our study contained any mutations in pilP or pilQ, the entire pilP/Q operon and flanking regions were sequenced, but no differences from the published sequences were observed [47]. One of the differences between the N. meningitidis and N. gonorrhoeae PilQ is the presence of a small octapeptide (PAKQQAAA) basic repeat, of which four to seven copies are present in N. meningitidis PilQ, whereas N. gonorrhoeae PilQ contains either two or three copies [48]. However, the 14-fold symmetry of the peripheral ring and the spikes were still observed in the N. gonorrhoeae strain in which the 2 repeats of MS11 were replaced with the 6 repeats of N. meningitidis strain HB1 (data not shown).

Structure and assembly of the PilQ complex of N. gonorrhoeae
Our analysis of the structure of the PilQ complex in isolated outer membranes unequivocally shows extra domains, i.e. a peripheral ring with associated spikes, not observed in purified Nme PilQ complexes. To attempt to identify these novel features, we set out to generate deletion mutants for genes of possible candidates for the extra observed densities. PilC is a protein normally present in two copies, is involved in adhesion to epithelial cells, and is located in the outer membrane and at the tip of the pilus. Since the N. gonorrhoeae MS11 strain used for our study contains a non-functional copy of pilC1 that is not expressed due to a frame-shift mutation [49], we generated a pilC2 deletion mutant in MS11. The pilC1/C2 mutation resulted in non-piliated cells, as seen in previous studies. The pilC1 gene was sequenced in the pilC2 deletion mutant, and this re-confirmed the presence of the frameshift mutation. Hence, we conclude that a true pilC1/C2 mutant was generated. Single particle analysis was performed and 6,000 projections were analyzed. The pilC1/C2 mutant yielded projection maps showing a structure similar to that observed for wild-type, with the presence of central and peripheral rings and 7 extending spikes (Figure 5C). Again, these features became more visible after imposing symmetry (Figure 5D). This demonstrated that PilC is not a subunit of the observed PilQ complex. In a next step, we generated a deletion mutant of N. gonorrhoeae NgonM_03101, a small (28 kDa) lipoprotein containing six tetratricopeptide repeat (TPR) motifs. N. gonorrhoeae NgonM_03101 is a homologue of Nme PilW and Pseudomonas aeruginosa PilF, which have been shown to be involved in stabilization of the PilQ oligomer [41,50,51]. In membranes isolated from the NgonM_03101 deletion mutant, no stain-filled indentations were observed, demonstrating that NgonM_03101 is involved either in assembly or stabilization of the PilQ oligomer. Since no structures were observed, we cannot discriminate between the possibilities that NgonM_03101 functions as a chaperone for oligomerization of the secretin, or that it is part of the larger PilQ complex. PilP is an 18 kDa lipoprotein shown to interact with PilQ. In both N. meningitidis and N. gonorrhoeae, pilP and pilQ are co-transcribed [36]. pilP mutants show a loss of piliation and natural competence [52]. In a previous study, additional densities were observed when purified Nme PilQ was incubated with recombinant Nme PilP [36]. To study the localization of PilP in PilQ-containing complexes within the membrane, a pilP deletion mutant was created.
Western blotting confirmed the expression of PilQ in the pilP deletion mutant strain. Increased degradation of full-length PilQ was, however, observed (Figure S4), and the observed density of indentations in the membranes derived from the pilP deletion mutant was reduced to approximately 30% compared to the wild-type membranes, suggesting that PilP influences the stability of PilQ multimers. Differences were observed when the class averages of 6,000 particles of the pilP mutant without (Figure 5E) and with applied symmetry (Figure 5F) were compared to the class averages obtained from wild-type membranes (Figure 5A and B). The pilP deletion mutant not only lost the extending spike-like densities, but remarkably the symmetry of the peripheral ring changed from 14 to 19. Even more surprisingly, the structure of the PilQ-containing complex in membranes derived from the N. gonorrhoeae pilP deletion mutant showed a notable similarity to the structure of the PilQ complex in N. meningitidis membranes. Based on a possible localization of PilP between the central and peripheral rings of the PilQ complex, it could be proposed that absence of PilP affects the interface between the central and peripheral ring, resulting in a reassembly of the second ring. The reassembled second ring does not appear to be able to bind the spike-like extensions. It has been previously demonstrated that incubation of purified Nme PilQ complexes with isolated pili can induce structural changes in Nme PilQ [25]. To compare the structures of PilQ-containing complexes that have interacted with the pilus and those that have not, membranes derived from pilF and pilE deletion mutants were studied. PilF is an ATPase, localized in the inner membrane and essential for the assembly and extrusion of pilin subunits. PilE is the pilus subunit, which forms thin pilin filaments of approximately 60-80 Å [53]. When the class averages of 6,000 particles obtained from membranes of the pilF and pilE deletion mutants without (Figure 5G and 5I) and with applied symmetry (Figure 5H and 5J) were compared to the class averages obtained from wild-type membranes, it appeared that the secretin structures of both mutants lost the extending spike-like densities. This suggests that the secretin complex changes its conformation upon interaction with the pilus, resulting in assembly of the extending spike-like structures, or that the spike-like structures are formed by a protein transported across the outer membrane along with the extension of the pilus. Interestingly, in addition to the disappearance of the spike-like densities, the pilE mutant also showed a 19-fold symmetry similar to the pilP deletion mutant and the secretin of wild-type N. meningitidis. Phase variation could have changed the expression levels of PilE in the pilP mutant and in the N. meningitidis strains, and lowered levels of PilE in the pilE and pilP mutants of N. gonorrhoeae and in the N. meningitidis strains might explain the change in symmetry. To test this, the expression levels of PilE in N. gonorrhoeae strain MS11 and the pilQ, pilP and pilE deletion mutants, and in the N. meningitidis strains HB1, H44/76 and M986, were determined by Western blotting using a PilE-SM1 monoclonal antibody [54] (Figure S4). This demonstrated that PilE expression could be detected in all strains except the N. gonorrhoeae pilE deletion mutant and the N. meningitidis strain M986.
The N. meningitidis strain M986 expresses a class II pilin that cannot be detected with the class I PilE-SM1 antibody [55], but most likely also expresses PilE. Since the pilP mutant, which has a 19-fold symmetry of the outer ring, still expresses PilE, phase variation of PilE expression could be excluded as a possible reason for the structural change of the outer ring of the secretin complex. Why the absence of PilP/PilE in the structure of the N. gonorrhoeae PilQ complex results in a structural change to a complex resembling the PilQ complex in N. meningitidis remains an open question. To examine the possible effect of the deletion of a minor pilin, a deletion mutant of pilW, a minor pilin located in an operon with pilV and pilX, was created. Single particle analysis was performed and yielded projection maps showing a structure similar to that observed for wild-type (Figure 5K and 5L), demonstrating that at least deletion of the pilW minor pilin has no effect on the domain structure of the secretin complex.

Discussion
In this study we analyzed the structure of the PilQ secretin within isolated outer membranes using transmission electron microscopy and single particle averaging. Several lines of evidence demonstrate that the observed stain-filled indentations are formed by PilQ:
I) The structures are not observed in the pilQ deletion mutant (Figure 1A and 1B).
II) The structures are also not observed in the ngonM_03101 mutant. NgonM_03101 is a homologue of N. meningitidis PilW. Deletion of the N. meningitidis pilW gene was shown to abolish the formation of the PilQ oligomer [56].
III) The symmetry of the outer ring of the structure is affected in deletion mutants of several other pil genes, e.g. pilE, pilP, and pilF. It is unlikely that these mutants would affect the structures of another membrane complex than PilQ.
IV) The structures are labeled using immuno-gold labeling with an antibody specific for N. gonorrhoeae PilQ (Figure 1C).
V) The inner ring of the stain-filled indentations has the same diameter as the purified PilQ complex of N. meningitidis.
VI) The observed structure is present only in outer membrane sheets. Inner and outer membranes can be easily distinguished in electron microscopy, and the structures are also seen on electron microscopy images of whole cells.
VII) The abundance of the structure correlates with the abundance of PilQ in the outer membrane [26].
Our approach revealed features of a large structure not seen previously. Compared to the published structures derived from purified PilQ of N. meningitidis, the complexes observed in N. meningitidis membranes contained an extra peripheral proteinaceous ring with a 19-fold symmetry. In our analysis, we also observed structures lacking or having a partial peripheral ring, indicating that the extra domains are not tightly attached and may be dissociated during the membrane isolation procedure. Apparently, this extra ring structure is also lost during the previously described purification of the PilQ complex [36]. Also for other purified secretins, no additional ring structures were observed; only after purification of the PulD-PulS complex from K. oxytoca were radial spokes, most likely formed by PulS, observed [31]. These spokes, however, seem to be of a much smaller mass than the extra ring structure observed for the PilQ complex.
Remarkably, the secretin complexes observed in membranes isolated from N. gonorrhoeae appear much more stable, and showed a double ring structure with a 14-fold symmetry of the peripheral ring, from which seven external spikes protrude. These data demonstrate that the study of these multi-component membrane-inserted complexes within their native lipid environment by electron microscopy can identify extra components and/or structures which are lost during purification. Based on comparison with the previously published structures derived from purified PilQ complexes of N. meningitidis, the central ring in our structures consists of PilQ. The symmetry of the central ring of N. meningitidis has previously been determined to be 12, while the symmetries of the K. oxytoca PulD and the pIV protein of filamentous phages were 12 and 14, respectively. Unfortunately, we were unable to conclusively determine the symmetry of the central ring of N. gonorrhoeae, but our analysis indicates that it is most likely 14, and thus could differ from the central ring of N. meningitidis. Another interesting feature of the secretin complexes investigated is the high flexibility between the different rings and the spikes. In particular, the observation that the number of protein copies in the second ring changes from 14 to 19 in the pilP and pilE mutants is intriguing. A comparison of the pilP and pilE mutants with a 19-fold symmetry to those of wild-type and the pilC1/C2 mutant with a 14-fold symmetry shows that the overall diameter of the peripheral ring is smaller in the pilP and pilE deletion mutants, whereas the size of the central ring is equal for all complexes. This indicates that it is unlikely that there is a higher copy number of the same protein in the structure with the 19-fold symmetry (which would increase the size of the peripheral ring), but instead suggests that the structure with the 19-fold symmetry either arises from processing of the peripheral ring protein(s), or that the peripheral ring protein(s) are replaced by other protein(s). It appears that the spikes can only attach to the structure with the 14-fold symmetry. Since structures with a 14-fold symmetry but without spikes are also observed, it is unlikely that the presence of spikes forces the ring into the 14-fold symmetry. A comparable change in symmetry between rings has been observed for photosystem I (PSI) of Synechocystis PCC. Monomeric photosystem I (PSI) is a membrane protein complex of 330 kDa, which is mainly present as trimers in cyanobacteria. Under stress conditions, it forms supercomplexes with IsiA, a 37 kDa integral membrane protein. These complexes have been extensively studied by electron microscopy [57,58], and it was shown that IsiA can form complete and incomplete single and double rings around monomeric or trimeric PSI. The number of IsiA copies was variable; in the case of monomers, the inner IsiA ring was composed of 12, 13 or 14 copies, and these numbers corresponded to 19, 20 or 21 copies in the peripheral ring, respectively. On trimers with two IsiA rings, the inner ring is composed of 18 copies and the peripheral ring is formed by 25. However, the positions of IsiA in incomplete second rings with 12-19 copies were slightly different. If extrapolated to the complete rings, they appeared to consist of only 24 copies. These data illustrate how IsiA is flexibly attached to PSI ([57] and unpublished data). Similarly, it is possible that the protein(s) making the second ring around the secretin of the type IV pilus are flexible in their self-association.
Within this study we also attempted to identify the proteins located within the second ring and in the spike-like extensions. Initially, we expected that the peripheral rings and/or spikes were formed by PilC, since PilC is a large (110 kDa) protein which was shown to be located in the outer membrane and at the tip of the pilus [43,49], and Nme pilQ mutants were shown to shed Nme PilC into the medium [52]. However, a mutant of pilC1/C2 showed complexes similar to those observed in wild-type membranes, demonstrating that PilC is not a component of the peripheral ring or the spike-like extensions. Another candidate was the homologue of N. meningitidis PilW (NgonM_03101), a small (28 kDa) putative lipoprotein containing six tetratricopeptide repeat (TPR) motifs, necessary for the stabilization of pilus fibers but not for their assembly or surface localization. Deletion of Nme pilW strongly affected the stability of Nme PilQ multimers [41]. Similar results were obtained for the N. meningitidis PilW homologues of Myxococcus xanthus (Tgl) [59] and of Pseudomonas aeruginosa (PilF) [50]. In membranes of the deletion mutant of the N. gonorrhoeae homologue of N. meningitidis PilW, no secretin complexes are observed, confirming that N. gonorrhoeae NgonM_03101 also affects the stability of PilQ. Therefore we cannot rule out that NgonM_03101 is part of the second ring or the spikes, although the small size of the protein might not account for the densities of the peripheral ring subunits. Interestingly, our study demonstrated that the symmetry of the peripheral ring of the secretin complex in the pilP and pilE deletion mutants changed from 14 to 19, and that the structure lost the extending spike-like densities in the pilP, pilE and pilF deletion mutants. These results demonstrate that both PilE and PilP are important for the assembly of the peripheral ring. PilP is a small protein (21 kDa) previously suggested to be localized in the inner membrane, and to attach to the cap region of the PilQ complex [36]. This would place the PilP protein on the periplasmic interface, possibly between the central and peripheral rings. These data and the small size of PilP make it unlikely that PilP forms either the second ring or the spike-like extensions, but PilP could be involved in aligning the central and peripheral rings. The effects of mutations in the pilin protein PilE and the PilF secretion ATPase, both of which inhibit formation of the pilus structure, demonstrate that pilus formation influences the PilQ complex. The changes observed in the PilQ complexes can be a direct effect of an interaction between the pilus and PilQ, or an indirect effect on the export or assembly of minor pilins or pilus-associated proteins in the absence of a formed pilus. Our data cannot discriminate between these two possibilities. Our approach has revealed that the PilQ secretin complex of the type IV pili of Neisseria gonorrhoeae interacts with other proteins in the peripheral membrane to form a large multi-domain complex. The function of these extra domains is currently unknown, but they may simply be involved in anchoring the secretin stably into the outer membrane during pilus extension and retraction. Alternatively, the extra domains could be involved in attaching proteins to the pilus, modifying the pilus, or playing a specific role in type IV pili-dependent natural transformation.
It will be important to identify the proteins within the extra domains and to determine whether these domains can also be found in Type II secretion systems or in the Type IV pili systems of other organisms.

Strains, plasmids, primers and media
Strains used in this study are described in Table 1. N. gonorrhoeae strains were grown at 37°C in 5% CO2 on GCB (Difco) plates containing Kellogg's supplement [60] or in GCB liquid medium (GCBL) containing 0.042% NaHCO3 and Kellogg's supplements. N. meningitidis was also grown at 37°C in 5% CO2 on GC-agar plates or in tryptic soy broth (TSB). When necessary, erythromycin was used at 5 µg/ml.

Construction of deletion mutant strains
Deletion mutants in pilC and pilF were made using the insertion-duplication mutagenesis method [61]. Using this method, the gene is disrupted and expression of genes downstream of the disrupted gene is driven from the erythromycin promoter. PCR products encoding 541 bp (primers PilC-for and PilC-rev), 524 bp (primers PilF-for and PilF-rev) and 452 bp (primers PilW-for and PilW-rev) fragments of pilC, pilF and pilW were amplified from isolated chromosomal DNA of N. gonorrhoeae strain MS11 (for a list of used primers, see Table 2). The pilC and pilW PCR fragments were digested with BamHI and KpnI and ligated into the BamHI/KpnI sites of pIND3 [62], resulting in plasmids pSJ030 and pSJ032, respectively. The pilF PCR fragment was digested with XhoI and KpnI and ligated into the XhoI/KpnI sites of pIND3, resulting in plasmid pEP057. Plasmids pSJ030, pSJ032 and pEP057 were transformed into MS11 and colonies were selected on GCB plates containing erythromycin. Correct clones were identified by performing a PCR on isolated chromosomal DNA of these colonies, resulting in strains SJ030-MS, SJ032-MS and EP060, respectively (Table 1). To create marker-less non-polar deletion mutants of pilP, NgonM_03101, pilQ and pilE, PCR fragments of the flanking regions of the respective genes were annealed using the splicing by overlap extension PCR (SOE-PCR) method [63]. To create the PCR products for pilP, NgonM_03101, pilQ and pilE, the primer combinations PilP-for1/PilP-rev1 and PilP-for2/PilP-rev2; NgonM_03101-for1/NgonM_03101-rev1 and NgonM_03101-for2/NgonM_03101-rev2; PilQ-for1/PilQ-rev1 and PilQ-for2/PilQ-rev2; and PilE-for1/PilE-rev1 and PilE-for2/PilE-rev2 were used. The obtained PCR products were diluted and amplified with the external primers, which also contained the gonococcal DNA uptake sequence (DUS). The PCR product was transformed into strain MS11 or FA1090, and the mutant colonies were checked using colony PCR. The marker-less insertion of the SBR-containing region of the N. meningitidis HB1 strain was introduced into N. gonorrhoeae MS11 by transformation of a PCR fragment carrying the extra region. The PCR product was obtained by using the SBR-for and SBR-rev primers. Correct clones were identified by performing a PCR on isolated chromosomal DNA of several colonies, resulting in strains SJ031-MS, SJ007-MS, SJ001-MS, SJ006-FA1090 and SJ002-MS (Table 1). To further confirm the correct deletion of the gene, the deletion site and the flanking regions were determined by sequencing.

Membrane Preparation
To isolate membranes of N. gonorrhoeae, the strain was plated on GCB plates with the appropriate antibiotic, and (when possible) piliated cells were scraped from the plate and transferred to 3 ml GCBL medium.
Cells were grown to an OD660 of 0.6 and consecutively diluted into increasing volumes until a final volume of 1 liter with an OD660 of 1.0 was obtained. Cells were centrifuged at 8,000 rpm in a JLA-16.25 rotor and resuspended in 50 mM Tris-HCl pH 7.5. Cells were broken by three passes through a French press at 15 kpsi. Cell debris was removed by centrifugation at 6,000 rpm in an SS34 rotor for 10 min. The membranes were pelleted at 40,000 rpm in a Ti-45 rotor for 1 h, resuspended in 1 ml of 50 mM Tris-HCl pH 7.5 and overlaid on a 4 step (1, 1.

Electron Microscopy and single particle analysis
For image processing, whole membranes from N. gonorrhoeae and N. meningitidis were negatively stained with 2% uranyl acetate on glow-discharged carbon-coated copper grids. Electron microscopy was performed on a Philips CM120 equipped with a LaB6 tip operating at 120 kV. The "GRACE" system for semi-automated specimen selection and data acquisition [64] was used to record 2048×2048 pixel images at 60,000× calibrated magnification with a Gatan 4000 SP 4K slow-scan CCD camera. About 9,000 images were recorded. From the images we selected about 20,000 single particle projections of the PilQ complex from N. gonorrhoeae, 8,000 projections of the PilQ complex from N. meningitidis, 7,000 projections of the pilC deletion mutant, and approximately 5,000 each of the pilE and pilP deletion mutants from N. gonorrhoeae. Single particle analysis was performed using the Groningen Image Processing ("GRIP") software packages (see http://bfcemw0.chem.rug.nl/progs-grip.html for a description) on PC clusters. Single particles of PilQ were repeatedly aligned with multireference and nonreference alignments and treated with multivariate statistical analysis and hierarchical ascendant classification [65]. In the final step, the best 50% of the class-members of the best 50% of the classes were taken for the final sums, with the correlation coefficient from alignments as a quality parameter. Rotational symmetry was analyzed in a similar way, as described previously [66].

Nanogold labeling of isolated membranes with PilQ antibodies
5 µl of the outer membrane enriched fraction of the wild-type N. gonorrhoeae MS11A strain was immobilized on a glow-discharged carbon-coated copper grid. The grid was then incubated with the N. meningitidis PilQ monoclonal antibody [46] diluted 1:1 in wash buffer (20 mM Tris-HCl pH 7.5, 150 mM NaCl) for 1 hr. After 3 washes with wash buffer, the grid was incubated for 1 hr with 1:10 diluted gold-labeled Protein G (Aurion, The Netherlands). After 3 washes with wash buffer, the sample was fixed with 2% glutaraldehyde for 5 minutes before staining with 2% uranyl acetate. Electron microscopy was then performed as described above. To exclude nonspecific labeling, membranes were labeled using a similar protocol as described above, with the difference that the PilQ monoclonal antibody was replaced by buffer. To test whether the labeling was specific for the presence of the indentations, membranes from the outer membrane enriched fractions of the wild-type N. gonorrhoeae MS11A strain and the pilQ deletion mutant strain were mixed and immobilized on a grid. Labeling was then performed as above.

Electron microscopy on whole cells
Piliated and non-piliated colonies of N. gonorrhoeae strain MS11 were selected and transferred to GCB plates. After 18 hrs of growth, the cells were scraped from the surface of the plate and resuspended in 1 ml of GCBL medium.
Nanogold labeling of isolated membranes with PilQ antibodies

5 µl of the outer membrane enriched fraction of the wild type N. gonorrhoeae MS11A strain was immobilized on a glow-discharged carbon-coated copper grid. The grid was then incubated with N. meningitidis PilQ monoclonal antibody [46] diluted 1:1 in wash buffer (20 mM Tris-HCl pH 7.5, 150 mM NaCl) for 1 hr. After 3 washes with wash buffer, the grid was incubated for 1 hr with 1:10 diluted gold-labeled Protein G secondary antibody (Aurion, The Netherlands). After 3 washes with wash buffer, the sample was fixed with 2% glutaraldehyde for 5 minutes before staining with 2% uranyl acetate. Electron microscopy was then performed as described above. To exclude nonspecific labeling, membranes were labeled using a similar protocol as described above, with the difference that the PilQ monoclonal antibody was replaced by buffer. To test whether the labeling was specific for the presence of the indentations, membranes from the outer membrane enriched fractions of the wild type N. gonorrhoeae MS11A strain and the pilQ deletion mutant strain were mixed and immobilized on a grid. Labeling was then performed as above.

Electron microscopy on whole cells

Piliated and non-piliated colonies of N. gonorrhoeae strain MS11 were selected and transferred to GCB plates. After 18 hrs of growth, the cells were scraped from the surface of the plate and resuspended in 1 ml of GCBL medium. 5 µl of this suspension was incubated on a glow-discharged carbon-coated copper grid. Carbon grids were then washed three times with water before staining with uranyl acetate. The grids were analyzed by electron microscopy as described above.

SDS-PAGE and Western blotting

In order to test the cross-reactivity of the PilQ monoclonal antibody [46] directed against N. meningitidis PilQ with N. gonorrhoeae PilQ, isolated outer membrane enriched samples were treated with phenol to generate monomeric PilQ as described previously [67]. Briefly, 200 µl (about 500 µg protein) was mixed with an equal volume of 88% phenol and incubated at 70°C for 10 minutes. The samples were then cooled to 4°C and centrifuged at 5,000 × g for 10 minutes. The upper aqueous phase was discarded, and the intermediate and lower phases were retained and mixed with an equal volume of water. After incubation at 70°C for 10 minutes, samples were centrifuged at 5,000 × g for 10 minutes to remove the aqueous phase once again. The protein was then precipitated with 1 ml ice-cold acetone, and the pellet was resuspended in sample buffer and run on a 10% SDS-PAGE gel either for Coomassie staining or for immunoblotting. Western blotting was performed using PVDF membranes. Blots were developed by incubating with a 1:1000 dilution of the PilQ [46] and the PilE [54] monoclonal antibodies, followed by washes, and incubation with a 1:10,000 dilution of anti-mouse alkaline phosphatase-conjugated secondary antibody (Sigma). The chemiluminescence signal was obtained using the CDP-Star substrate (Roche) on a Roche Lumi-Imager.
fNIRS detects temporal lobe response to affective touch

Touch plays a crucial role in social–emotional development. Slow, gentle touch applied to hairy skin is processed by C-tactile (CT) nerve fibers. Furthermore, social brain regions, such as the posterior superior temporal sulcus (pSTS), have been shown to process CT-targeted touch. Research on the development of these neural mechanisms is scant, yet such knowledge may inform our understanding of the critical role of touch in development and its dysfunction in disorders involving sensory issues, such as autism. The aim of this study was to validate the ability of functional near-infrared spectroscopy (fNIRS), an imaging technique well-suited for use with infants, to measure temporal lobe responses to CT-targeted touch. Healthy adults received brushing to the right forearm (CT) and palm (non-CT) separately, in a block design procedure. We found significant activation in right pSTS and dorsolateral prefrontal cortex to arm > palm touch. In addition, individual differences in autistic traits were related to the magnitude of peak activation within pSTS. These findings demonstrate that fNIRS can detect brain responses to CT-targeted touch and lay the foundation for future work with infant populations that will characterize the development of brain mechanisms for processing CT-targeted touch in typical and atypical populations.

INTRODUCTION

Affective touch, such as that shared between a mother and her infant, plays a critical role in social-emotional development. This caress-like touch, when applied to hairy skin, is processed by a specific type of nerve fiber called C-tactile (CT) afferents (Morrison et al., 2010). Despite an extensive behavioral literature on the importance of touch in early development, researchers have only recently begun to study the neural underpinnings of the CT system in typical adults (e.g., Olausson et al., 2002; Gordon et al., 2013; McGlone et al., 2012), and developmental research on the topic is scant. An understanding of these neural mechanisms and their maturation in typically developing infants and children will improve our understanding of the critical role of touch across the lifespan and may inform study of neurodevelopmental disorders that involve sensory issues, such as autism. Functional near-infrared spectroscopy (fNIRS) is an emerging tool in developmental cognitive neuroscience that can be used with awake and alert infants, toddlers and children (e.g., Taga and Asakawa, 2007; Grossmann and Johnson, 2010; Ichikawa et al., 2010; Lloyd-Fox et al., 2010; Gervain et al., 2011). Although functional magnetic resonance imaging (fMRI) has been used to investigate brain mechanisms that support the processing of CT-targeted affective touch (Olausson et al., 2002; Morrison et al., 2010; Gordon et al., 2013; Voos et al., 2013), no study to date has used fNIRS to examine this system. We sought to establish a paradigm to study the developmental trajectory of brain mechanisms for processing affective touch, targeting the CT system. Based on our fMRI results (Gordon et al., 2013; Voos et al., 2013), we hypothesized posterior temporal lobe involvement in processing CT-targeted touch. Furthermore, we investigated the association between the neural response to such touch and individual differences in autistic traits, based on previous findings of this relationship in healthy adults (Voos et al., 2013).
The skin is the largest and earliest developing sensory system in humans (Montagu, 1971). For example, as young as 8 weeks in utero, a fetus will pull away from an object that touches its face (Hepper, 2002). Touch plays a fundamental role in social development (Atkinson et al., 1982; Maurer and Maurer, 1988), and interpersonal touch is one of the earliest forms of parent-child communication (e.g., Frank, 1957; De Chomaso, 1971; McDaniel and Andersen, 1998). Studies with non-human primates (e.g., Harlow and Zimmermann, 1959) and human infants (e.g., Barnett, 2005) have highlighted the importance of social touch in healthy development and specifically in the development of emotion regulation (Feldman et al., 2003). Mothers exhibit a distinct pattern of slow, affective touching behavior (beginning with fingertips and expanding to full palm) in conjunction with prolonged eye contact when first interacting with their newborns, implicating a specific parental touch response (Rubin, 1963; Klaus et al., 1970). Furthermore, less than 4 days post-partum, mothers (with eyes and noses blindfolded) can recognize their newborn's hand solely based on their sense of touch (Kaitz et al., 1992, 1993). Finally, tactile stimulation is soothing to newborns (Birns et al., 1966; Korner and Thoman, 1972) and improves the health of preterm infants by increasing weight gain and caloric intake (Helders et al., 1989; Scafidi et al., 1990). Taken together, these studies highlight the importance of touch in early social interactions. Although the benefits of affective touch have received extensive empirical attention (Stack, 2001), related neurodevelopmental research is sparse. An understanding of these neural mechanisms and their typical development is necessary for a complete understanding of the biological bases supporting the critical role of touch across the lifespan.

A specific type of nerve fiber, CT afferents, has been implicated in processing slow, gentle touch (Morrison et al., 2010), much like the touch described earlier in parent-infant interactions. CT afferents exist only in the hairy skin of mammals and respond particularly well to slow, caress-like touch ranging from 1 to 10 cm/s (Kumazawa and Perl, 1977; Vallbo et al., 1993; Löken et al., 2009). In their pioneering work, Olausson et al. (2002) identified CT afferents in a neuropathy patient lacking myelinated A-beta nerves, which normally function in discriminative tactile sensation, and showed that stimulation of CT afferents elicited activation in insular cortex. This finding led researchers to propose the 'skin as a social organ' hypothesis (Morrison et al., 2010), positing that the CT system represents an evolutionarily conserved mechanism for processing affective, or 'limbic', touch (Olausson et al., 2002, 2008). Recent fMRI studies from our group (Gordon et al., 2013; Voos et al., 2013) support this hypothesis by demonstrating that CT-targeted touch activates key nodes of the 'social brain' (Brothers, 1990; Frith, 2007; Adolphs, 2009) including the posterior superior temporal sulcus (pSTS), medial prefrontal cortex (mPFC), amygdala and posterior insula. In addition, the pSTS response to CT-targeted touch has been linked to individual differences in autistic traits (Voos et al., 2013). Using fMRI, Voos et al.
(2013) found that neurotypical adults with more autistic traits exhibited a diminished pSTS response to CT-targeted touch. As social deficits are a core feature of autism (APA, 2000), the relationship between brain mechanisms for processing affective touch and autistic traits is of particular interest. Autism is a neurodevelopmental disorder, and therefore disrupted brain mechanisms for processing touch may arise early in life and affect the subsequent development of this system. One investigation of adults with autism demonstrated abnormal brain responses to tactile stimulation, with these adults showing decreased neural response to pleasant/neutral touch but increased neural response to aversive touch compared with healthy controls (Cascio et al., 2012). This supports the notion that individuals with autism experience and process touch differently. Affective touch plays a critical role in social-emotional development, and we propose that pSTS may represent the neural basis of this process. Thus, this study aimed to assess the ability of fNIRS to detect brain responses to CT-targeted touch in pSTS, as this imaging method is particularly suited for developmental studies of infant brain function.

Functional NIRS is well-suited for the study of infants for several reasons. Compared with fMRI, fNIRS imposes fewer safety concerns because there is no magnetic field or noise (therefore, no need for hearing protection). It is also more ecologically valid, as infants can sit up naturally in the lap of a parent (Gervain et al., 2011). Compared with electroencephalography, fNIRS has better spatial localization, is unaffected by eye blinks, and is less sensitive to motion artifacts (Gervain et al., 2011). Given these advantages, and to pursue the use of fNIRS to study the development of brain mechanisms for processing affective touch, we sought to assess whether fNIRS can reliably measure brain responses to CT-targeted touch in an adult population. Although a growing number of fNIRS studies are investigating brain mechanisms for social processing in the auditory (Sakatani et al., 1999; Csibra et al., 2004; Taga and Asakawa, 2007; Bortfeld et al., 2009; Saito, 2009; Grossmann et al., 2010) and visual (Otsuka et al., 2007; Grossmann et al., 2010; Ichikawa et al., 2010; Kojima and Suzuki, 2010; Lloyd-Fox et al., 2010) domains, few studies have explored the tactile domain (but see Haensse et al., 2004; Becerra et al., 2008; Saito, 2009; Shibata et al., 2012).
This fNIRS study aimed to identify a pSTS response to CT-targeted touch in typical adults to establish a method for future research investigating early development of cortical mechanisms for processing affective touch. Although other brain areas, such as the insula (Olausson et al., 2002), are known to be involved in processing CT-targeted touch, these regions were not examined in this study because of cortical depth beyond fNIRS measurement capabilities and the limited coverage of the optode lattice. We hypothesized that fNIRS would be sensitive to cortical brain activity to CT vs non-CT touch in right posterior temporal brain regions found in an fMRI study of CT-targeted touch. We utilized a paradigm identical to our fMRI study (Gordon et al., 2013), which included alternating blocks of gentle touch to the arm (CT) and palm (non-CT). This similarity allows us to compare data collected with fMRI and fNIRS. We also examined individual differences in the response to CT-targeted touch as a function of participants' autistic traits. We hypothesized that individuals with more autistic traits would exhibit a diminished response to this socially relevant touch. Hence, we sought to establish fNIRS as a neuroimaging technique to distinguish between posterior temporal lobe neural responses to CT- and non-CT-targeted touch in healthy adults. A secondary goal was to determine whether fNIRS could capture a relationship between individual differences in autistic traits and the neural response to CT-targeted affective touch. Successfully achieving these goals will allow us to extend this paradigm downward to infants to further understand how the neural systems for processing CT-targeted affective touch develop in typical and atypical populations.

EXPERIMENTAL PROCEDURES

Participants

Participants included 30 healthy, right-handed adults (18 females, 24.2 ± 3.47 years). Four participants were excluded from analyses because two or more of their 52 recording channels did not collect data, which prohibited spatial interpolation of the results. Four additional participants were excluded due to lack of channel position measurements, as this prohibited spatial coregistration of data into standard space. Therefore, 22 participants were included in subsequent analyses (13 female, 24.1 ± 3.80 years). Written informed consent was obtained for each participant according to a protocol approved by the Yale School of Medicine Human Investigations Committee. Participants received $25 for their participation.
Pre-experiment self-report ratings

Before the fNIRS session, an experimenter brushed participants on their right arm and palm. Participants rated the pleasantness of each type of touch on a 1-5 Likert scale (1 = 'not at all', 2 = 'slightly', 3 = 'moderately', 4 = 'very' and 5 = 'extremely'). Each participant completed the Autism-Spectrum Quotient (AQ; Baron-Cohen et al., 2001), Social Responsiveness Scale (SRS; Constantino and Todd, 2003) and Social Touch Questionnaire (STQ; Wilhelm et al., 2001). The AQ is a self-report measure of autistic traits. Scores range from 0 to 50; higher scores indicate more autistic traits. The SRS measures social responsiveness and is completed by a friend or family member of the participant. Scores range from 0 to 195; higher scores indicate less social responsiveness. The STQ is a self-report measure that assesses participants' attitudes toward social touch. Scores range from 0 to 80; higher scores indicate an aversion to giving, receiving and witnessing social touch. After completing these ratings and questionnaires, we measured 8 cm on the right arm (beginning at the wrist) and 4 cm on the right palm (beginning at the base of the hand) of each participant, to demarcate the brushing area.

Experimental design

Participants received continuous brushing back and forth (proximal-distal orientation) to the right forearm and palm separately, in a block design procedure. There were two alternating blocks of each condition. Each block contained eight repetitions of 6 s periods of touch (arm or palm) followed by 12 s of rest (no touch). Between blocks, there were six additional seconds of rest to allow the experimenter to prepare for the next block of touch (Gordon et al., 2013). Tactile stimuli were slow strokes (8 cm/s) performed with a 7-cm wide watercolor brush administered by one of two trained female experimenters. The brushing velocity of 8 cm/s was chosen because this speed is within the optimal range for targeting CT afferents (Löken et al., 2009). Before data acquisition, participants were instructed to close their eyes throughout the procedure and to focus on the touch. The experimenter watched and confirmed that all participants kept their eyes closed throughout the duration of the experiment. In total, the procedure lasted 10.03 min.
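The block design can be reconstructed directly from this description; the sketch below builds the arm and palm stimulus timelines at the reported 10 Hz sampling rate. The block ordering (arm first) is an assumption for illustration, and the computed duration (~9.9 min) falls just short of the reported 10.03 min, presumably because of brief rest before or after the task that is not itemized above.

```python
import numpy as np

# Sketch of the block design: four alternating blocks (two per condition),
# each block = eight repetitions of 6 s touch + 12 s rest, with 6 s of
# preparation rest between blocks. Sampled at 10 Hz.

fs = 10                        # Hz, fNIRS acquisition rate
touch, rest, gap = 6, 12, 6    # seconds
block = ([1] * (touch * fs) + [0] * (rest * fs)) * 8   # one 144-s block

arm, palm = [], []
for i, cond in enumerate(["arm", "palm", "arm", "palm"]):
    target, other = (arm, palm) if cond == "arm" else (palm, arm)
    target += block
    other += [0] * len(block)
    if i < 3:                  # inter-block preparation rest
        arm += [0] * (gap * fs)
        palm += [0] * (gap * fs)

arm, palm = np.array(arm), np.array(palm)
print(len(arm) / fs / 60)      # ~9.9 min, close to the reported 10.03 min
```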
NIRS data acquisition

Preceding data collection, each participant's head dimensions were measured as per the international 10-20 system (Jasper, 1958) to assess variability in participants' head sizes and shapes. The average measured distances from nasion to inion, left ear to right ear and nasion to right ear were 29.82 cm (± 3.31), 35.98 cm (± 1.56) and 16.23 cm (± 1.10), respectively. Importantly, as cap placement was standardized to the right ear in each participant, the relatively low variability in distances from the nasion to the right ear validated the use of this landmark for consistent optode location. Measurements of neural activation were taken using a 52-channel NIRS machine (ETG-4000, Hitachi Medical) with 33 optodes separated by 3 cm configured in a 3 × 11 lattice. This spacing of optodes allowed for measurement of hemoglobin changes at a depth of up to ~2 cm, thus occluding the measurement of deeper brain structures, such as the insula (Hock et al., 1997). Changes in oxygenated (oxy-Hb) and deoxygenated (deoxy-Hb) hemoglobin were measured using two wavelengths of infrared light (695 and 830 nm). Analyses in this study focused predominantly on oxy-Hb, although our region of interest (ROI) analysis of both oxy-Hb and deoxy-Hb showed that the two measures were inversely related, as would be expected (Figure 2). Data were collected at a frequency of 10 Hz. Placement of the optode lattice was standardized across subjects by positioning source probe number 25 directly above each participant's right ear. Following the acquisition of functional data, all optodes were removed from the lattice, and a 3D digitizer system (Polhemus, VT, USA) was used to localize the placement of each optode in relation to reference points on the participant's head (nasion, inion, left and right ears, top and back of the head). The locations of these reference points were used to coregister each recording channel into MNI space on a single-participant level using NIRS-SPM (Jang et al., 2009; Ye et al., 2009; Tak et al., 2010a,b) to perform subsequent group-level general linear model (GLM) analyses and ROI analyses in standard space.
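The relationship between the 33 optodes and the 52 recording channels follows from the lattice geometry: sources and detectors alternate in the 3 × 11 grid, and each adjacent (3 cm) source-detector pair forms one channel. The sketch below only verifies the count; the alternation pattern is an assumption, and the actual Hitachi channel numbering is not reproduced.

```python
# Sketch: count source-detector channels in a 3 x 11 alternating lattice.
# Assumes a checkerboard assignment of sources and detectors, so every
# adjacent pair of optodes mixes one source with one detector.

rows, cols = 3, 11
is_source = lambda r, c: (r + c) % 2 == 0   # assumed alternation pattern

channels = []
for r in range(rows):
    for c in range(cols):
        for dr, dc in ((0, 1), (1, 0)):      # right and down neighbors
            rr, cc = r + dr, c + dc
            if rr < rows and cc < cols and is_source(r, c) != is_source(rr, cc):
                channels.append(((r, c), (rr, cc)))

print(len(channels))  # 52 channels from 33 optodes
```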
Data analysis

GLM-based analyses

Before GLM analyses, data were low-pass filtered at a frequency of 0.2 Hz, and a moving average of 1 s was applied to decrease high-frequency noise. Using NIRS-SPM version 3.2 (Jang et al., 2009; Ye et al., 2009; Tak et al., 2010a,b), global trends were removed from each single participant's measurement data using a wavelet-minimum description length detrending algorithm. For GLM analyses, 6-s blocks of arm and palm touch were modeled separately with boxcar functions. We then performed single-participant GLM analyses by convolving the two task functions (arm touch and palm touch) with a double-gamma hemodynamic response curve to model the hypothesized oxy-Hb response during each experimental condition. On a single-participant level, channel placement was registered to MNI space, and statistical values calculated for each recording channel were interpolated to increase spatial resolution (Ye et al., 2009). Three of the 22 participants were missing data from 1 of their 52 channels. Because data from all 52 channels are needed for spatial interpolation, data points for these channels were estimated by averaging non-processed measurement data from the four adjacent channels for each time point of the experiment. None of these estimated channels were used in the ROI analysis. For the four main contrasts of interest (arm touch > palm touch, palm touch > arm touch, arm touch > baseline and palm touch > baseline), single-participant activation maps were combined in group-level mixed-effect GLM-based analyses, analyzing any pixel in which 15 of the 22 participants had overlapping functional data (see outline on Figure 1). All analyses were assessed at an uncorrected statistical threshold of P < 0.05.
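A minimal sketch of the single-participant GLM step is given below: a boxcar task function is convolved with a double-gamma hemodynamic response function (HRF) and fit to a channel time series by least squares. The gamma parameters are conventional SPM-style defaults rather than values taken from NIRS-SPM, and the data are simulated, so this is a sketch of the modeling idea, not of the toolbox itself.

```python
import numpy as np
from scipy.stats import gamma

# Double-gamma HRF (conventional default shape parameters, assumed here).
fs = 10                                    # Hz, acquisition rate
t = np.arange(0, 30, 1 / fs)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()

def regressor(onsets, dur, n):
    """Boxcar at the given onsets/duration, convolved with the HRF."""
    box = np.zeros(n)
    for o in onsets:
        box[int(o * fs): int((o + dur) * fs)] = 1.0
    return np.convolve(box, hrf)[:n]

n = 6000                                   # ~10 min at 10 Hz
arm_onsets = np.arange(0, 144, 18)         # one arm block, for illustration
X = np.column_stack([regressor(arm_onsets, 6, n), np.ones(n)])

# Simulated oxy-Hb channel with a true arm-touch effect of 0.5.
y = X @ np.array([0.5, 0.0]) + 0.1 * np.random.default_rng(1).standard_normal(n)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)                                # recovered arm-touch effect
```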
ROI analyses

Based on our a priori hypothesis that pSTS is involved in processing CT-targeted touch (Gordon et al., 2013; Voos et al., 2013), we implemented an ROI analysis focusing on this brain area. Our ROI was based on a pSTS region that was significantly active to arm vs palm touch in an fMRI study of identical design to the current fNIRS investigation (Gordon et al., 2013). Using the voxel of peak activation in the pSTS region reported in the aforementioned fMRI study (Talairach coordinates: 57, −55, 13), and converting each participant's NIRS channel location coordinates from MNI to Talairach space, we identified the four fNIRS recording channels whose approximated cortical locations were closest to this peak coordinate for each participant. Maximum distances from the peak voxel coordinate to the farthest channel included in the ROI for each participant ranged from 1.8 to 2.8 cm. ROI analyses were performed by integrating oxy-Hb signals over each 6 s block of arm touch (16 blocks) and palm touch (16 blocks) on a single-participant level in each channel. These integrated signals in the four channels identified as being within each participant's individually defined pSTS ROI were then averaged for each time point of data acquisition in the 6-s stimulus block, as well as for the 2 s pre-stimulus onset and 10 s post-stimulus offset. A moving average of 1 s was also applied to decrease high-frequency noise, and integrated data were baseline corrected using a pre-block period of 2 s and a post-block period of 2 s (beginning after a recovery time of 8 s following block completion). The same was done for deoxy-Hb signals. To take advantage of the within-participant experimental design, we conducted paired-sample t-tests to examine the differential pSTS ROI activation to arm and palm touch ranging from 2 s pre-stimulus onset to 10 s post-stimulus offset for each participant. This difference waveform was then averaged on a group level, providing a visualization of the oxy-Hb and deoxy-Hb responses to arm > palm touch in the pSTS.

Based on the GLM results, we conducted a post hoc ROI analysis of the dlPFC with parameters equivalent to the pSTS analysis described earlier. The dlPFC region was based on that identified as active to CT-targeted touch in Voos et al. (2013) (Talairach coordinates: 39, 44, 13). Maximum distances from the peak voxel coordinate to the farthest channel included in this ROI for each participant ranged from 1.5 to 4.5 cm.

Relationship between autistic traits and brain response to CT-targeted touch

Exploratory analyses examined the relationship between autistic traits and peak neural responses to CT-targeted touch in the pSTS ROI to inform future work on the relation of these brain responses to autistic symptoms. For each participant, the peak magnitude of oxy-Hb during integrated arm (CT-targeted) and palm (non-CT-targeted) trials was identified within the time window of 5-12 s after stimulus onset. This window was chosen to correspond to the previously demonstrated time window of the hemodynamic response to CT touch (Gordon et al., 2013). After removing one participant whose peak oxy-Hb value for arm touch was greater than two standard deviations from the mean, 21 participants remained in the analysis. We performed a correlation analysis between participants' AQ scores and their peak response to CT touch within their four-channel pSTS ROI. Given our limited AQ range, we also split participants into quartiles based on their AQ scores and examined the relationship between peak response to CT touch and AQ scores.
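The core of the individualized ROI construction described in this section is a nearest-channel selection around a published peak coordinate; a minimal sketch follows. The channel coordinates here are random placeholders, whereas in the study they come from each participant's digitized optode positions converted to Talairach space.

```python
import numpy as np

# Sketch of the individualized ROI: take the four recording channels whose
# estimated cortical positions lie closest to the fMRI peak voxel at
# Talairach (57, -55, 13). Coordinates below are illustrative only.

rng = np.random.default_rng(2)
peak = np.array([57.0, -55.0, 13.0])
chan_xyz = peak + 30 * rng.standard_normal((52, 3))  # fake channel positions

d = np.linalg.norm(chan_xyz - peak, axis=1)
roi_channels = np.argsort(d)[:4]                     # four nearest channels
print(roi_channels, d[roi_channels])

# The oxy-Hb responses of these four channels would then be integrated
# over each 6-s touch block, baseline corrected, and averaged per time point.
```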
Self-report measures

Self-report ratings of arm and palm touch collected prior to the fNIRS experiment revealed a mean pleasantness rating of 3.7 (± 0.7) for arm and 3.2 (± 0.9) for palm. Arm touch was rated significantly more pleasant than palm touch [t(22) = 2.5, P = 0.04]. The average scores on the remaining self-report measures were as follows: STQ 27.1 (± 10.31), AQ 13.09 (± 5.79) and SRS 15.32 (± 11.67).

NIRS GLM-based analyses

Group-level analyses were performed using a mixed-effects GLM. Group results were first assessed in the contrast arm touch > palm touch. All activations reported are in the right hemisphere of the brain. Distinct regions of the posterior temporal lobe and the dlPFC showed significant activation to arm touch > palm touch (P < 0.05; Figure 1). Secondary group-level analyses revealed that no regions showed significant activation to palm touch > arm touch, arm touch > baseline or palm touch > baseline at P < 0.05, or at a more lenient threshold of P < 0.1.

ROI analyses

The ROI analysis served to confirm and expand upon the results of our GLM analysis. Activations to arm and palm touch were calculated for each participant based on the average of his or her four recording channels located closest to the peak coordinate of activation in the pSTS region identified by Gordon et al. (2013) as being preferentially active to CT-targeted touch (Figure 2a). Paired-sample t-tests were conducted at each time point, as an ad hoc method of delineating the time window of maximum significance between brain responses to arm and palm touch. This comparison revealed that oxy-Hb activation to arm touch was significantly greater than activation to palm touch from 8.4 to 9.7 s post-stimulus onset (P's < 0.05; Figure 2b). In addition, the subtraction of palm from arm touch at each time point for each participant allowed us to visualize the average time course of the oxy-Hb and deoxy-Hb response to arm > palm touch (Figure 2b).

In addition, activations to arm and palm touch were calculated for each participant based on the average of his or her four recording channels located closest to the peak coordinate of activation in the dlPFC region previously identified (Voos et al., 2013) to process CT-targeted touch. As described in the pSTS ROI analyses, we also performed paired-sample t-tests at each time point to examine the oxy-Hb time course response to arm > palm touch, which revealed that activation to arm touch was significantly greater than activation to palm touch from 10.8 to 11.1 s post-stimulus onset in dlPFC (P's < 0.05).
Relationship between autistic traits and brain response to CT-touch

A correlation analysis of peak pSTS response to arm touch and AQ scores was not significant. Thus, in an attempt to elucidate how these factors might be related, participants were divided into quartiles based on their AQ scores: low AQ (M = 5.60, s.d. = 1.14), low-medium AQ (M = 9.75, s.d. = 1.71), high-medium AQ (M = 14.14, s.d. = 1.10) and high AQ (M = 20.80, s.d. = 2.59), with five, four, seven and five participants in each group, respectively. Next, an independent-sample t-test conducted in the low and high AQ quartile groups revealed a significant difference in peak oxy-Hb concentration [t(8) = 2.69, P = 0.03]: individuals with lower AQ scores (fewer autistic traits) had higher peak pSTS responses to arm touch relative to the individuals with higher AQ scores (more autistic traits) (Figure 2c). There was no difference in pSTS response to arm > palm touch or to palm touch in the low and high AQ groups. It should be noted that Levene's test for equal variance was non-significant; therefore, t-tests for samples with equal variance were used. Similar analyses were conducted by parsing participants into groups based on SRS and STQ scores; however, no significant differences in peak pSTS activation to arm or palm touch based on these scores were detected.

DISCUSSION

The aim of this study was to validate fNIRS as a neuroimaging technique to measure temporal lobe responses to CT-targeted touch in a sample of healthy adults. Our main objectives were to verify the ability of fNIRS to detect the cortical neural responses to gentle touch to the arm and palm by replicating results found using fMRI (Gordon et al., 2013) and to establish an fNIRS paradigm for use in future studies of the development of these neural mechanisms in infants.
This study replicated previous fMRI findings of pSTS involvement in processing CT-targeted touch. We targeted CT-afferent nerves located in hairy skin because they respond particularly well to slow, gentle touch (Olausson et al., 2002), which is reminiscent of that shared in early social interactions, such as a parent's caress of their child. This type of touch is known to acutely impact social-emotional development (Stack, 2001; Barnett, 2005). To study early development of brain mechanisms for processing such affective touch, we must first establish the brain response to affective touch in healthy adults as measured with fNIRS. To this end, we implemented group-level GLM and individualized ROI analyses, two independent strategies, both of which revealed complementary results of pSTS activation to CT-targeted touch. Thus, the GLM and ROI analyses do not represent distinct findings. Instead, the ROI analysis confirms that the activation visualized in the GLM analysis is anatomically comparable with the locus of activation determined in a previous fMRI study of identical experimental procedure (Gordon et al., 2013). In addition, exploratory analyses showed that individual differences in autistic traits were related to peak pSTS activation to CT-targeted touch. Individuals with more autistic traits had lower peak pSTS activation to arm touch than those with fewer autistic traits. This finding is concordant with our recent fMRI result of a negative correlation between pSTS response to CT-targeted touch and autistic traits (Voos et al., 2013). Thus, the current results validate fNIRS as a neuroimaging method for studying the neural mechanisms of CT-targeted affective touch and lay the foundation for studying the development of this system in the first years of life. The implications of these findings are discussed below in the context of social neuroscience, typical development and developmental disorders, such as autism.
To our knowledge, this is the first fNIRS study to examine the brain mechanisms for processing CT-targeted affective touch. Here, we replicate fMRI findings of pSTS response to this touch, expanding upon previous work showing concordance between brain responses measured with fMRI and fNIRS (Steinbrink et al., 2006). Because fNIRS has reduced spatial resolution and a decreased signal-to-noise ratio compared with fMRI, we implemented two methods to account for these limitations. We digitally localized optode placement to normalize the location of recording channels to standard space in each participant. Using this normalized placement, we applied a hemodynamic response function to model the oxy-Hb response to arm and palm touch using a GLM. Together, these analysis strategies enhanced spatial resolution and statistical power in the identification of true hemodynamic responses recorded with fNIRS. In addition, we implemented an ROI analysis that utilized the peak voxel of activation in pSTS to CT-targeted touch identified in an fMRI study of identical design (Gordon et al., 2013). The four recording channels nearest to this peak coordinate for each participant were combined to form ROIs that were individualized based on each participant's optode placement. This novel analysis method allowed us to home in on a specific brain region involved in processing CT-targeted touch. The results from these two independent analyses converge to support the hypothesis that the pSTS is involved in the neural processing of CT-targeted, affective touch. While it is possible that differential stroking distance influenced the results, previous fMRI findings of pSTS activity in response to CT-targeted touch (i.e. from a paradigm in which stroking distance was held constant) (Voos et al., 2013) suggest that this is not the case. Our findings validate the use of fNIRS to measure brain responses to affective touch, establishing the efficacy of our paradigm for future studies of the development of the CT system, an area ripe for future research.

Our finding of pSTS activation to CT-targeted touch is noteworthy given this region's role as a key node of the 'social brain' (Brothers, 1990). The pSTS responds to social stimuli in the visual (Allison et al., 2000; Pelphrey et al., 2005), auditory (Belin et al., 2000; Shultz et al., 2012) and tactile (Gordon et al., 2013; Voos et al., 2013) domains. The diverse nature of these studies emphasizes the 'multimodal' role of this region in social processing (Barraclough et al., 2005; Beauchamp, 2005). Future research on the development of this region's response to social stimuli is especially important in the tactile domain, as this sensory system is the earliest to develop (Montagu, 1971) and plays an important role in social function in both primates (Harlow and Zimmermann, 1959; Bowlby, 1969) and human infants (Stack, 2001; Barnett, 2005).
In addition to pSTS, we found dlPFC sensitivity to CT-targeted touch. Notably, this region likely does not correspond to the mPFC activation previously found to process affective touch (Gordon et al., 2013) because fNIRS only measures a depth of ~2 cm from the scalp (Cui et al., 2011). However, dlPFC activation in this study shows similarity to an area of dlPFC found to activate to CT touch in another fMRI study by our group (Voos et al., 2013). Based on the peak voxel of this activation, we conducted a post hoc ROI analysis to determine whether dlPFC activation was anatomically analogous to that previously reported. Indeed, we found that activation to arm touch in dlPFC measured with fNIRS was significantly greater than palm touch for a brief period (10.8-11.1 s) post-stimulus onset. We hypothesize the role of the dlPFC in processing CT touch to be related to reward, given that in this study and others (Gordon et al., 2013; Morrison et al., 2011; Voos et al., 2013), participants rated CT-targeted touch as more pleasant than non-CT-targeted touch. The dlPFC has been implicated in reward processing in a variety of tasks in both humans and primates (Inoue et al., 1985; Leon and Shadlen, 1999; Hornak et al., 2004). Furthermore, this region responds to pleasant vs unpleasant touch to the leg (Hua et al., 2008), and pleasant vs unpleasant words (Herrington et al., 2005). While this study did not aim to investigate this region's role in processing social touch, our GLM and ROI analyses converge to suggest its importance in processing CT-targeted touch, and future work may clarify this region's functional involvement.

In addition, we found that individual differences in autistic traits were related to the magnitude of peak activation to CT-touch in pSTS. Individuals with more autistic traits, as measured by the AQ, had significantly lower peak activation in pSTS to gentle arm touch relative to individuals with fewer autistic traits. This supports previous findings that individuals with more autistic traits exhibit dampened pSTS responses to CT-targeted touch (Voos et al., 2013). Although these findings are within a healthy sample, it is possible that pSTS activation to CT-targeted affective touch may be disrupted in individuals with autism. Indeed, Cascio et al. (2012) recently demonstrated that adults with autism show decreased neural response to pleasant touch and increased neural response to aversive touch.
While novel, this study had some limitations. An inherent limitation of fNIRS methodology is the difficulty in standardizing cap placement due to variability in head size and shape. We mitigated this problem by implementing digital localization to normalize all subjects' recording channels to standard space. In addition, the depth to which the infrared light can penetrate within the brain limits the spatial resolution of fNIRS (up to ~2 cm from the skull). Thus, we were unable to measure deeper regions involved in CT processing, such as the insula and the orbitofrontal cortex (McGlone et al., 2012). In addition, because our fNIRS device can simultaneously record from a maximum of 33 optodes, we chose to retain spatial specificity by placing these optodes at a traditionally used distance of 3 cm apart from each other in our 3 × 11 lattice. Due to this spacing, we were only able to image one side of the brain. We focused on the right hemisphere because previous studies from our group have shown ipsilateral (right hemisphere) temporal lobe (pSTS) activation to right-lateralized touch in response to CT-targeted touch (Gordon et al., 2013; Voos et al., 2013). Also, due to the criterion of only analyzing pixels where 2/3 or more of the participants contributed data (to avoid reporting spurious results from a small subset of the sample), we were unable to explore the somatosensory cortex (see Figure 1 for extent of analyzed pixels), although it may be of interest (but see Olausson et al., 2002). Finally, because we had a small range of AQ scores, a larger range of AQ scores and a larger sample size might reveal a more robust relationship between autistic traits and brain responses to touch.

This study lays the foundation for future research aimed at exploring the typical and atypical development of neural processing of gentle touch processed by the CT system. We successfully replicated a tactile paradigm using fNIRS, originally implemented in fMRI, and found pSTS and dlPFC activation to CT-targeted, affective touch. Concordant with our previous fMRI study, individuals with more autistic traits showed a diminished pSTS response to CT-targeted touch. Future work will explore cortical brain mechanisms for processing CT-targeted touch in individuals with autism, as well as infants both at high and low risk for developmental disorders such as autism, where hypersensitivity to sensory information, especially touch, is often present (Blakemore et al., 2006). Neuroimaging studies of the development of affective touch processing hold great promise for illuminating the biological mechanisms underlying the robust influence of touch on social-emotional development.

Fig. 1. Activation to arm > palm touch. The lighter region encompasses the pixels analyzed in the group-level GLM analyses. Activations indicate regions with a greater response to arm touch relative to palm touch (P < 0.05). Activation is presented on the right cortical surface, in Montreal Neurological Institute space.
Fig. 2. Activation to arm vs palm touch within the pSTS ROI. (a) Visualization of the peak voxel used for ROI analysis, with the extent of the sphere depicting the average distance from the peak voxel of interest that included the four recording channels used for the individualized ROI analysis (average radius = 2.3 cm). (b) Waveforms for the four-channel pSTS group ROI analysis (arm touch > palm touch), with stimulus from 2 to 8 s. (c) Peak amplitude of pSTS response to arm touch in the high (N = 5) and low (N = 5) AQ groups, restricted within the time window of 5-12 s post-stimulus onset. Ranges of AQ scores for each group are shown in parentheses (low-medium and high-medium groups not shown here).
Influential children in middle childhood peer culture: Effects of temperament and community culture

For children in middle childhood, the social world, particularly the behavior and attitudes of their school peers, has been shown to be an important factor in their educational and mental health outcomes. In the school environment, some children seem to influence the attitudes and behavior of their peers more than others. The behavior patterns of children, as reflected in temperamental traits, have been shown to drive peer perception in important ways and might play a role in identifying the individuals and social processes that operate in peer influence. It seems likely that temperamental traits will have different effects on school peers, dependent on characteristics of the school attended. Fourth and fifth grade children from four rural counties in the southeastern portion of the United States were studied. Temperamental characteristics were assessed based on teacher perception of six characteristics. Peer perceptions of the extent to which each child was perceived to influence others in five areas of school culture (e.g., academics, sports) were measured through a peer nomination procedure. Additional status-related perceptions and behaviors of participating children were also assessed by peer nominations. Teacher ratings of temperamental behaviors were submitted to latent profile analyses, resulting in a seven-cluster model. Results indicated temperamental profiles were significantly and meaningfully associated with peer perceptions of influence as well as social status. Further, demographic differences between two groups of schools were found to moderate the effects that temperament profile had on peer influence.

Introduction

Children in middle childhood are acutely attentive to their social world. They have emerged from living in a world of adult caretakers (e.g., parents and teachers) into the complex social world of peers. A good proportion of the social interaction with peers takes place in schools, where children must learn to adapt to a staggering array of individual differences in behaviors, attitudes, and expectations. How the child copes with this social environment has important consequences; it can determine acceptance into subgroups (e.g., cliques; Gazelle and Ladd, 2003; Rubin et al., 2006), as well as general social status, mental health, and academic achievement (Asendorpf, 1990; Rubin and Coplan, 2010; Masland and Lease, 2016). One important aspect of the social life of children in middle childhood is that peers, through a variety of means, influence the behavior and attitudes of one another. Peer influence during the late elementary school years has been shown to affect aggressive-disruptive behavior (Powers and Bierman, 2013) as well as academic engagement (Gremmen et al., 2018). Peer influence can stem from close friends but also from the broader peer group (Gottman and Mettetal, 1986). Identifying which children are most likely to be influential, and the social circumstances in which children are influential, is an important question and the focus of the current study. The Dominance-Prestige model of social influence has guided our thinking about how temperament might affect the influence one child has on another. This model posits that there are two pathways that can be used to climb the social hierarchy (Strayer and Trudel, 1984; Cheng et al., 2013; Maner, 2017).
The Dominance pathway is established in the context of agonistic exchanges using manipulation and aggressive strategies. It seems likely that children who exhibit higher levels of the temperamental trait labeled irritability or negative emotionality are likely to rely on dominance and antagonistic behaviors to establish influence. The Prestige path, in contrast, is accomplished based on skills, knowledge, and abilities. Those who have higher status based on prestige are perceived as having higher competence and altruistic tendencies. The temperamental characteristic most associated with altruistic behaviors is positive emotionality. Skills and abilities most pertinent in elementary school are academic and athletic abilities. Academic ability is associated with temperamental traits related to self-regulation of attention, which, in turn, is strongly linked to school performance (Martin, 1989; Martin et al., 2020). Social skills are related to the temperamental traits of sociability and inhibition, and having a high level of gross motor vigor (activity level) is logically related to athletic ability. Temperament research has traditionally focused on the measurement (Rothbart et al., 2001; Halverson et al., 2003; Putnam and Rothbart, 2006) and structure of early appearing individual differences of children (Beekman et al., 2015; Martin et al., 2020), the extent to which they are genetically linked (Saudino and Wang, 2012; Tackett et al., 2013; Scott et al., 2016), as well as the extent to which they are associated with a variety of physiological functions (Kagan et al., 1988; Van Ijzendoorn et al., 2012; White et al., 2012; Marsman et al., 2013). In addition, there has been considerable effort to demonstrate that temperament traits are related to a wide range of behaviors in childhood, including diagnosed mental health problems (Thomas and Chess, 1977; Tackett et al., 2013). Among the most provocative research efforts are those that demonstrate the long-range effects of temperament in early childhood on adult attitudes and behaviors, including political orientation (Block and Block, 2006), adult personality (Caspi and Silva, 1995), adult psychiatric disorders, antisocial behavior in adulthood (Henry et al., 1996; Moffitt et al., 2002), and gambling (Slutske et al., 2012). There has been much less attention on temperament effects on schooling. The research that has been published has primarily related temperament to achievement and behavior problems (Martin and Holbrook, 1985; Martin, 1989; Nelson et al., 1999; Guerin et al., 2003) as well as the management of individual differences in the classroom (McClowry, 2014). There has been a notable lack of research on temperament as it affects social relationships in general and peer influence in particular. Given the importance to life-span development of early educational experiences, this is an unfortunate oversight. One recent study by Martin et al. (2020) has addressed the issue of the relationship between peer influence and temperament. This research has shown that the temperamental profiles of children as assessed by parents and teachers are meaningfully related to the influence children have on one another in elementary school. However, this research did not address the issue of the effects of different macro-social environments on this relationship.
The purpose of the current study was to refine aspects of the prior research and to directly address the effect of the broad social environment in which children live on temperament-influence relations. Three questions will be addressed in this paper. First, how do temperament-based profiles based on teacher perception relate to the influence peers have on one another as reported by the peers themselves? While this question was addressed in the prior research, the sample analyzed has been changed. The current sample is composed exclusively of 4th and 5th grade students, while the previous sample included 3rd graders. This sharpens the focus of the research on late elementary school. Second, the profile model in the current analysis focuses exclusively on teacher perceptions of temperament and does not include data from parents, as was the case in prior analyses. Third, profile models used in the prior research included parental and teacher perception of academic ability (intelligence). The current research focuses exclusively on traditional temperament constructs in the development of profile models. In addition, several refinements are made in the current analysis to help control for gender factors in the peer nomination procedures as well as to control for differences among schools in the way that peer nominations were done. When the best fitting temperament profile model has been developed and associations to peer influence determined, the second question to be addressed becomes: what social status and status-related behavioral characteristics are most strongly associated with the temperament profiles of influential children? This analysis is designed to set the stage for future researchers to determine the longitudinal pathways operating from temperamental characteristics and social status characteristics to influence. The characteristics investigated include peer perception of popularity, likability, aggression, a tendency to be sympathetic, to work hard in school, to be perceived as cool, and to be good at sports. The characteristics were selected to represent aspects of the dominance versus prestige approach to status attainment. The third question to be addressed relates to the effect of the broader social-cultural environment of the schools studied in modifying the association between temperament profiles and influence. The specific question that is addressed is: Are the temperamental characteristics of influential children in schools located in counties with higher high school graduation rates in the adult population different from the temperamental characteristics of influential children in schools located in counties with lower high school graduation rates?

Participants

Lease and colleagues (Kwon and Lease, 2014; Lease et al., 2020) initiated two different data collections designed to compare a variety of social and education outcomes from schools in the southeastern portion of the United States. The children studied included those attending schools in rural and semi-rural counties. The data were collected from rural areas to truncate socioeconomic differences within schools, which have been shown to relate to a range of schooling outcomes. From this larger project, the data analyzed for the current study were selected to maximize the similarity in age and gender distribution of the children in two groups of schools.
The groups of schools were differentiated by demographic characteristics, particularly the education level of the population from which the students were drawn. Data in the current analysis were obtained from teachers and students in six schools in three counties (School Group A: 22 teachers and 448 students) and four schools in one county (School Group B: 24 teachers and 349 students). All children were enrolled in the 4th or 5th grades, and all were between 9 and 12 years of age. Table 1 presents the demographic characteristics of the participants in Group A and in Group B schools as well as the total sample. The data in Table 1 indicate that the samples were similar except for the racial/ethnic composition; Group A schools served a more diverse group of students.

Demographic characteristics of counties in which schools were located

To help understand the cultural context in which the students lived, we obtained data at the level of the county in which each school was situated. Data were obtained from the US census for 2010. All four counties were rural, with no cities of population greater than 4,000. Between 2000 and 2010, School Group B was in a county that had significantly gained population, while the three counties in which Group A schools were located had lost population. The public schools in Group A were more racially/ethnically diverse (44.4% minority children) than those in the county containing Group B schools (23.0% minority children). The population in the county served by Group B schools had a higher median family income ($49,700) than the counties served by Group A schools ($38,033), but both were significantly below the United States median family income level in 2010 ($62,664). Both sets of schools served populations with very similar levels of educational attainment. For example, the percentage of the population 25 years or older who did not graduate from high school or obtain a GED was 16.9% (Group A) and 16.8% (Group B); both were above the national average of 11.6%. However, the minority population was significantly less affluent and had lower mean educational attainment than the White population in all counties. Educational attainment differences were particularly pronounced for minority males. In the counties containing the Group A schools, the mean percentage of minority males (25 years and older) who did not graduate from high school or have a GED was 40.9%, while in the county containing the Group B schools, this percentage was 19.8%. In summary, both groups of schools were in rural areas in which the median family income was lower than the national average, as was the educational attainment of the adult population. However, the minority population was far less affluent and less formally educated than the White population of these counties. Group A schools were in counties with a much higher proportion of minority residents than was the case for Group B schools. Thus, children in Group A schools grew up in an environment in which the adults, particularly males, were less educated and had fewer material resources. This was particularly true of the minority children in Group A.

Study procedure

For the original data collection from which the current participants were selected, approval was obtained from the superintendent of the school district. Then, individual school principals and staff were contacted; only one school declined participation.
Active parental consent for student participation was required, and child assent was obtained prior to the administration of questionnaires. The roster of students used for peer nominations included only the names of students who had obtained parental consent to participate. Nonparticipating students were given the option of working quietly at their desks. All procedures were approved by a university Institutional Review Board.

Measurement

Teacher perception of temperament

Teacher perceptions of their students' temperament characteristics were assessed based on a modified version of the Individual Differences of Children and Adolescents questionnaire (ICID; Halverson et al., 2003). The ICID was designed for parents; the revised form for teachers was modified to make it appropriate for classroom teachers. The measure was an abbreviation of the ICID and was very similar in length and item content to a published abbreviated version of the ICID (Deal et al., 2007). Seven scales from the Teacher ICID measure were used in the current study to develop temperament profiles. These scales were designed to measure classic temperamental traits. Inhibition and fearfulness were combined because they were highly correlated (0.80). This resulted in six temperament scales. The internal consistency reliabilities, as indexed by the alpha coefficient, for the 4th and 5th grade children studied in this analysis were as follows: activity level (alpha = 0.80), sociability (alpha = 0.90), positive emotionality (alpha = 0.88), negative emotionality (alpha = 0.93), distractibility (alpha = 0.81), and inhibition (combined inhibition and fearfulness; alpha = 0.80). The concurrent validity of the teacher form of the ICID has been documented through scale and profile similarities to parental ratings on the same instrument, as well as to important outcomes for children in elementary school such as behavior problem ratings and academic achievement (Martin et al., 2020).

Peer perceptions

Peer perception of influence was measured by a self-report measure based on existing scales and/or theoretical formulations by Hawley et al. (2002), Keltner et al. (2001), and Janes and Olson (2000); see these sources for a complete description of these procedures. Influence was assessed in five areas: academics, sports, peer cultural trends (e.g., clothing, music), make-believe games, and inappropriate behavior (e.g., talking back to the teacher; fooling around when the teacher leaves the room). These measures resulted in six indicators of influence for each child, one for each of the five areas of influence and a total influence score created by summing scores across all five areas. An example of the questions used to elicit peer nominations of influence is: "Think of a time when you decided to work really hard on a class project or study hard for a test because other kids were. What kids made you want to study hard, too?" From a listing of consented children provided to each student, they recorded the numbers of the children who fit this description. In some schools, children were asked to nominate peers from their class (homeroom), whereas in other schools, children were nominated from the grade level. The numbers of nominations children received were standardized (M = 0, SD = 1) at the classroom level or at the school level, depending on which procedure was used. Standardization was used to control for the differing number of nominations possible based on the number of participating peers in the classroom or grade level.
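The standardization step described above (and the within-gender refinement described next) amounts to a grouped z-score; a minimal sketch using pandas follows. The column names and the toy data are illustrative, not from the study's data files.

```python
import pandas as pd

# Sketch of the nomination standardization: raw nomination counts are
# z-scored (M = 0, SD = 1) within the nominating unit (classroom or grade
# level, depending on the school's procedure) and separately by gender.

df = pd.DataFrame({
    "unit":   ["class1"] * 4 + ["class2"] * 4,   # classroom or grade level
    "gender": ["F", "F", "M", "M"] * 2,
    "influence_noms": [4, 1, 7, 2, 0, 5, 3, 6],  # raw nomination counts
})

def zscore(s):
    return (s - s.mean()) / s.std(ddof=0)

df["influence_z"] = (
    df.groupby(["unit", "gender"])["influence_noms"].transform(zscore)
)
print(df)
```

A difference score such as social preference would then be computed from two such standardized columns (most-liked z minus least-liked z), as described in the next paragraph.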
In addition, standardization was carried out separately for girls and boys. Gender plays a role in many aspects of peer relationships, particularly in middle childhood. Children interact with same-gender peers more often than opposite-gender peers (Martin et al., 2013). To better understand the characteristics of children who were considered most influential, children were asked to nominate children who fit several behavioral or status characteristics. These nomination procedures were based on similar measurement procedures by Parkhurst and Hopmeyer (1998) and Coie et al. (1982). From these descriptions, seven scores were created, following the same standardization procedures described for influence nominations (above). Children were asked to nominate the peers they would most like to play with and those they would least like to play with. A social preference score was derived by subtracting the standardized least liked score from the standardized most liked score (Coie et al., 1982). A similar process was used to obtain a measure of popularity; that is, least popular scores were subtracted from the most popular scores. Further, nominations were obtained for the children who were perceived as 'cool' and well known in the school. Finally, nominations were obtained in response to the following: This person tries hard to do good schoolwork (tries hard); this person shows sympathy to a peer who is sad, hurt, or upset (shows sympathy); and this person is good at sports (good athlete). A final set of five descriptors indicating the tendency to be aggressive was obtained from peers and aggregated into one score. Examples of these items are: "This person makes mean faces at someone when they are upset with them" and "This person overreacts and is easily pushed to anger." The five-item aggression scale had a coefficient alpha of 0.92.

Statistical procedures

Children were given a score on each of the six temperamental characteristics rated by teachers. These scores were standardized for each teacher/classroom. This procedure helped to control for teacher biases in rating student behavior. These scores were submitted to a latent profile analysis using Mplus (Muthén and Muthén, 1998-2012). This type of analysis assumes that within a large group of children, there are subgroups (clusters) who share common patterns or profiles of characteristics, and that these profiles describe the children more accurately than any of the individual characteristics. These subgroups occur because there are correlations among the behavioral traits used as indicators of the subgroups. A latent profile is a description of a group (cluster) of individuals that share a pattern of behavior. It is latent in the sense that it is not known by the researchers at the time of data collection or analysis. The goal of the analysis is to statistically determine the smallest number of latent clusters that is sufficient to account for the associations observed among the measured variables. The cluster of individuals within a profile is typically identified by their average score on each indicator variable. All the individuals within the group do not have the same score, but the scores of children in the group are more similar to one another than to children in any other group. A central question in latent profile research is how many clusters best fit the data. It is customary to test a wide range of models to find the one that best fits the statistical criteria. Previous research indicates that from 3 to 9 clusters meet these criteria for temperament and related child behavioral measures (Asendorpf and van Aken, 1999; Martin et al., 2020). The criteria that are most often used include a decline in the three information criteria (Akaike, Bayesian, and Bayesian adjusted for sample size) as more clusters are added to the model. Some researchers (Morin and Marsh, 2015) plot these criteria across models and look for an elbow in the declining plot line. One other criterion that was used in the current analysis is the size of the smallest cluster. Since differences between two groups of schools were to be investigated, a minimal cluster size of 30 children was established before the analysis was done (4.0% of the sample). Two simplifying assumptions are made to reduce the number of parameters that are estimated in the model. The first assumption is that the correlation among indicator variables in each profile is zero. This assumption is never exactly met, but in the current analysis, all variables were correlated < 0.30 in each profile. The second assumption is that the standard deviation of each variable is the same for all profiles. Modeling the effects of indicator correlations within profiles and standard deviation differences across profiles would require much larger samples than were available in the current analysis. These assumptions are common practice (Muthén and Muthén, 1998-2012).
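The latent profile analysis with these two assumptions can be sketched with a Gaussian mixture model; the example below uses scikit-learn in place of Mplus. A diagonal covariance encodes the first assumption (zero within-profile correlations); scikit-learn has no tied-diagonal option, so the second assumption (equal variances across profiles) is only approximated here, and the random placeholder data stand in for the six z-scored temperament scales.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch of latent profile analysis via Gaussian mixtures: fit candidate
# 3- to 9-cluster models and compare information criteria (here, BIC),
# looking for the point where the decline flattens out.

rng = np.random.default_rng(3)
X = rng.standard_normal((797, 6))   # placeholder: 797 children x 6 scales

for k in range(3, 10):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         n_init=10, random_state=0).fit(X)
    # In practice one would also check entropy, classification
    # probabilities, and the size of the smallest cluster (>= 30 here).
    smallest = np.bincount(gm.predict(X)).min()
    print(k, round(gm.bic(X), 1), smallest)
```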
Previous research indicates that from 3 to 9 clusters meet these criteria for temperament and related child behavioral measures (Asendorpf and van Aken, 1999; Martin et al., 2020). The criteria that are most often used include a decline in the three information criteria (Akaike, Bayesian, and Bayesian adjusted for sample size) as more clusters are added to the model. Some researchers (Morin and Marsh, 2015) plot these criteria across models and look for an elbow in the declining plot line. One other criterion used in the current analysis is the size of the smallest cluster. Since differences between two groups of schools were to be investigated, a minimal cluster size of 30 children (4.0% of the sample) was established before the analysis was done.

Two simplifying assumptions were made to reduce the number of parameters estimated in the model. The first assumption is that the correlation among indicator variables in each profile is zero. This assumption is never exactly met, but in the current analysis all variables were correlated < 0.30 in each profile. The second assumption is that the standard deviation of each variable is the same for all profiles. Modeling the effects of indicator correlations within profiles and standard deviation differences across profiles would require much larger samples than were available in the current analysis. These assumptions are common practice (Muthen and Muthen, 1998-2012).

Results

Temperament profiles

Table 2 presents the outcomes of the latent class analyses for models containing 3 to 9 clusters. All three information criteria declined as the number of clusters in the model increased. The entropy index in all models was excellent, and all the lowest mean classification probabilities were also excellent. Thus, these statistical indices were not particularly helpful in determining the best model fit. Consistent with suggestions by Maiano et al. (2011) and Morin and Marsh (2015), when other indices do not point to a best-fitting model, the rate of decline in the information criteria should be examined. At some point, as the number of clusters in the model is increased, the rate of decline in the information criteria flattens out. In the current analysis, the rate of decline slowed between the 7- and 8-cluster models, indicating that both models should be examined against other criteria (e.g., whether some cluster is very small; whether one model fits better with temperament research outcomes in the literature than another). After consideration of all criteria, the 7-cluster model was selected.

Table 3 presents the mean temperament score for each profile cluster, the standard deviation of each variable within clusters, and the number of children in each cluster. The clusters in this paper are identified by a number (1-7) and a brief description. The numbering of the clusters is arbitrary; they have been numbered by the number of children presenting each temperament profile, from largest to smallest. Cluster 1 children are labeled 'average' (41.2% of the sample): all their scores are between +0.70 and −0.70 standard deviations (the middle 50% of each scale distribution). Cluster 2 was labeled 'average with low levels of expression of negative emotion' (18.4%); these children are hypothesized to have high levels of self-regulation of negative emotion. Cluster 3 children are labeled 'happy, social, and active, with strong self-regulation of negative emotional expression and attention' (12.4%).
One aspect of their self-regulation of negative emotion is that they are perceived to be uninhibited in new situations and to have fewer fears than their peers. Children in Cluster 4 exhibit a similar profile to those in Cluster 3, but their self-regulation of negative emotion and attention is in the average range (10.0%). Cluster 5 children are labeled 'active, distractible, negative', and their self-regulation of negative emotion and attention is hypothesized to be below average (7.3%). They are marginally more social and uninhibited/fearless than their peers. Cluster 6 and 7 children are perceived by teachers as being far less sociable and more inhibited than their peers. In addition, Cluster 6 children (6.0%) are also far less vigorous and physically active than their peers. Cluster 7 children (4.6%) have similar levels of social withdrawal and fearfulness to children in Cluster 6 but express more negative emotion and are more distractible than their peers.

To determine if demographic characteristics were related to temperamental profiles, chi-square cross-tabulation analyses were calculated for profile by child grade, by gender, and by minority/majority status. No significant effects were found. The standardization procedures used in this research (described above) resulted in means for each school group (A and B) being very near zero, with standard deviations near 1, for all temperament characteristics in both groups. Thus, there was no difference in temperament ratings by teachers in the two school groups. To check whether the percentage of children in each profile was similar across the two groups of schools, a 2 (school groups) by 7 (temperament profiles) cross-tabulation was done; the analyses indicated no significant association of profile proportions with school group.

Temperament and peer influence

To determine the relationship between temperament profiles and influence, the total influence score was entered into a general linear model univariate analysis of variance as the dependent variable, and temperament profile was entered as the independent variable (using SPSS version 28). There was a significant effect for cluster (F = 17.07; df = 6; p < 0.001). The R² of 0.109 indicated that about 11% of the variance in peer-perceived total influence was associated with temperament profiles. Children in Cluster 6 (withdrawn, fearful, and low activity level) had the lowest average influence score, while children in Clusters 1 and 2 (average, and average with low levels of negative emotionality) and 7 (withdrawn, with poor self-regulation of negative emotion and attention) had near-average influence scores. The three most influential groups were children in Cluster 3 (happy, social, active, and with strong self-regulation), Cluster 4 (happy, social, active, and with average self-regulation), and Cluster 5 (active, distractible, and negative); of these three clusters, children in Cluster 5 were perceived to have the most influence on their peers. A post-hoc analysis using the Gabriel method (see Table 4) indicated that there were three statistically different (alpha set at p < 0.05) homogeneous subgroups of clusters, with Cluster 5 being most influential and Cluster 6 being least. All other clusters were not significantly different from one another.
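The cluster effect reported above is a standard one-way ANOVA followed by post-hoc grouping. A minimal sketch follows; because Gabriel's post-hoc method is not available in common Python packages, Tukey's HSD is used here as a stand-in, and the toy data are assumptions, not study values.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Toy stand-in: standardized total-influence scores for 7 clusters,
# with Cluster 5 shifted up and Cluster 6 shifted down (assumed shifts).
shifts = {1: 0.0, 2: 0.0, 3: 0.3, 4: 0.3, 5: 0.8, 6: -0.7, 7: 0.0}
scores, labels = [], []
for cluster, mu in shifts.items():
    x = rng.normal(mu, 1.0, 60)
    scores.append(x)
    labels += [cluster] * 60

f, p = stats.f_oneway(*scores)          # omnibus test for cluster effect
print(f"F = {f:.2f}, p = {p:.4g}")

# Post-hoc pairwise comparisons (Tukey HSD as a Gabriel stand-in).
print(pairwise_tukeyhsd(np.concatenate(scores), np.array(labels)))
```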
Because children with different temperament profiles might be influential in different areas of child behavior, influence scores in each of the five areas measured (academics, sports, cultural trends, games, and inappropriate classroom behavior) were analyzed separately. In the area of academics, a significant effect for temperament was obtained (F = 9.04; df = 6; p < 0.001; R² = 0.089), with Cluster 3 children (happy, social, active, and well self-regulated) having the highest influence score and Cluster 6 children (withdrawn, fearful, and low activity level) having the least. There was a significant effect for influence in sports (F = 10.33; df = 6; p < 0.001; R² = 0.067). Children in Cluster 5 (active, distractible, and negative) had the highest influence, and again children in Cluster 6 had the lowest.

[Notes to Tables 2 and 3: entropy is an index of cluster separation, > 0.80 is good; the size of the smallest cluster was cut off at 4.0% of the sample; of all clusters in the model, the lowest mean classification probability should exceed 0.70; means in bold are above +0.70 SD and underlined means are below −0.70 SD, highlighted simply to aid the reader in seeing the primary characteristics that differentiate one cluster from another; variances around the mean for each temperament score are assumed to be the same for all profiles; N = 797.]

Temperament had a significant effect on influence regarding cultural trends (hairstyle, music, etc.; F = 11.76; df = 6; p < 0.001; R² = 0.076), with children in Clusters 5, 2 (average with low negative emotionality), and 7 (withdrawn, low activity level, and low positive emotionality with low self-regulation) having the most influence and children in Cluster 6 having the least. Regarding make-believe games, there was a significant effect (F = 6.16; df = 6; p < 0.001; R² = 0.038), but the effect was small: only Cluster 5 children were distinct from the remaining clusters, and they had the most influence. By far the strongest effect of temperament on influence was on inappropriate classroom behavior (e.g., fooling around when the teacher was out of the classroom, talking back to the teacher; F = 30.41; df = 6; p < 0.001; R² = 0.184). For these types of behaviors, children in Cluster 5 (active, distractible, negative) had the highest scores, and children in Clusters 6, 2, and 1 had the lowest. In summary, children who exhibited a high activity level, distractibility, and high negative emotionality (Cluster 5) were clearly the most influential children in these schools, while children who were socially withdrawn, exhibited low levels of positive emotionality, and had a low activity level had the least influence on their peers.

Association of temperament profiles with social status measures

One purpose of this research was to determine if dominance- and/or prestige-related behaviors were characteristic of children who had different temperament profiles. Seven different measures of student status-related characteristics as perceived by peers were examined. A series of ANOVAs were calculated in which scores on each peer-nominated status-related variable served as the dependent variable, and cluster by grade level, cluster by gender, and cluster by minority/majority were entered separately as the independent variables. These results indicated that the variance explained by cluster in all analyses was three to four times the amount of variance explained by grade, gender, or minority/majority status.
A small number of analyses resulted in significant main effects for the three demographic variables, and an even smaller number resulted in an interaction. Because the effects other than temperament explained less than 3.0% of the variance, these effects are not reported. Children in the three most influential clusters (5, 2, and 3) had different blends of status-related attributes as viewed by their peers (see Table 5). Children in Clusters 2 and 3 are likely influential because they have skills (e.g., good at sports), valued attributes (e.g., tries hard at school), and interpersonal skills (e.g., sympathetic to peers) that contribute to being likeable and popular. Cluster 5 children, who are the most influential, are likely influential due to dominant, coercive behaviors (e.g., aggression) as well as being good at sports. Children in Cluster 6 were perceived as having the lowest social status of all the clusters and were the least influential.

Macro-environmental effects on the association of peer influence and temperament profiles

Children who attended schools in two different kinds of social environments were examined in this research. While the environmental contexts were similar for the two groups of schools in many ways (e.g., located in rural areas with median family incomes significantly less than the state and national averages), the educational attainment of the adult male population (i.e., persons 25 years and older) differed. This was the result of differences in educational attainment among minority populations. The ethnic/racial composition of the county in which Group B schools were located was predominantly White, while counties in which Group A schools were located had a much higher percentage of minority adults (about one-third of the population). The percentage of minority children in the public schools was even larger in Group A schools, constituting about 50% of the public-school population. The children in Group B schools, who live in an environment comprising a more educated adult population, might value different behavioral characteristics than those in the Group A schools, where educational attainment among the adult population is more limited. This social environmental difference might create differences in the types of children who are viewed as most influential by their peers.

To investigate this notion, the total influence scores of children were submitted to a multifactor ANOVA in which temperament cluster and school group were conceptualized as independent variables and the total influence score as the dependent variable. The results are reported in Table 6. This analysis resulted in a significant main effect for cluster (F = 9.67; p < 0.001), no main effect for school group (F = 0.57; p = 0.45), but a significant interaction (F = 3.53; p = 0.002). To determine if there were significant differences within profiles, a one-way ANOVA across school groups for each profile was calculated and summarized in Table 6. This resulted in a significant effect for Clusters 2 and 3, with children exhibiting these temperamental profiles in school group B (i.e., the more educated adult environment) having more influence on their peers than children exhibiting these profiles in school group A (i.e., the less educated adult environment). These children exhibited a status profile in which trying hard in school was an important factor, along with being sympathetic toward others and being likeable (having a high social preference score).
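The cluster-by-school-group analysis just described is a two-factor ANOVA with an interaction term. The sketch below reproduces its structure with statsmodels on simulated data; all column names and effect sizes are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "cluster": rng.integers(1, 8, n),        # temperament profile 1-7
    "group": rng.choice(["A", "B"], n),      # school group
})
# Simulate an interaction: Cluster 3's influence is higher in group B.
df["influence"] = rng.normal(0, 1, n) + 0.8 * (
    (df["cluster"] == 3) & (df["group"] == "B"))

model = smf.ols("influence ~ C(cluster) * C(group)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and the interaction F-test
```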
Children in Cluster 5 (active, distractible, and negatively emotional) did not have a significantly different influence score in the two school settings, although their total influence score was more than twice as high in school group A (lower levels of adult education) than in school group B (more educated adult population). Thus, it appears that the macro-environment in which the two groups of schools were situated had an effect on whether dominance (school group A) or prestige-related methods (school group B) had the greatest effect on peer influence. To further analyze the differences between the two school groups, a similar analysis was conducted on each of the five areas of influence. This was done separately for Cluster 3 children, who had the most status in school group B, and Cluster 5 children, who had high status in both school groups. As summarized in Table 7, Cluster 3 children had significantly more influence on academic issues (getting good grades, doing the homework) in school group B; they also had more influence on youth culture in school group B than in school group A. Cluster 5 children had more influence on academics in school group A (less educated adult population), and also on the imaginary games children play.

Discussion

Temperamental traits are important to psychological theory and to the practice of helping parents, teachers, and children because these traits can be observed very early in life and have been shown to relate to important outcomes for children throughout their developing years and even throughout the life span. These traits have also been shown to relate to various levels of the biology of the child (genes, the biochemistry of the nervous system, etc.). In the early stages of the development of temperament theory, the biological underpinnings, particularly genetic influences, were viewed as one of the most important defining aspects of temperamental traits. As genetic research and its relationship to behavior and personality have progressed, it has become clear that almost all personality traits and behavioral responses have a genetic foundation (Shiner and Caspi, 2012; Plomin, 2018). Research on these traits would not have continued to grow as it has if the various traits typically thought of as temperamental had not been demonstrated to be relatively stable (stability increasing with maturity; see Martin et al., 2020 for a review), and had they not been found to relate to behavior problems in childhood, diagnosed psychopathology, academic achievement, educational attainment in adulthood, and other important outcomes. But the focus on these guidepost outcomes in human life has not elucidated many of the social processes occurring in the life of the child that lead to these outcomes. This is nowhere clearer than in the application of temperamental differences to children in schools, where the majority of research is on achievement and behavior problems. The research reported in this paper was designed to begin to fill one gap in our understanding of schooling: specifically, the influence students have on one another. Parents and teachers are aware that children who attend the same school influence one another; the multi-billion-dollar industry of private schooling is to some extent built on this awareness.
The awareness that children influence one another does not tell us which children are particularly influential, what areas of schooling are most impacted by peer influence (e.g., peer status, academic achievement, and inappropriate behavior in the classroom), or what individual differences among children lead to being influential. The research reported here is based on the hypothesis that individual differences in six temperamental traits have a substantial impact on influence processes in the classroom. This research is also based on the assumption that it is the configuration of these six traits considered together, rather than individual traits, that will best illuminate how temperamental traits are related to peer influence in school. This assumption rests on research indicating that temperamental traits are correlated in complex and interactive ways.

Research has demonstrated that temperamental traits are not highly stable. Correlations across 2-year periods, for example, typically vary from 0.40 to 0.70, but decline somewhat when longer retest intervals are used. Further, the impact of temperament in different social environments may be different. Thus, in the current context, it is important to determine what environmental factors alter how temperamental profiles are related to peer influence.

In this study of approximately 800 rural public-school children in 4th and 5th grades, it was determined that one group of children (Cluster 5) was perceived by peers as having the most influence. Of the seven clusters of children defined empirically by their temperament profiles (assessed by teachers), a relatively small group of children (7.3%) was found to have the most influence on their peers. Children in this cluster were viewed by their teachers as highly active, with above-average ratings on sociability, but they exhibited high levels of negative emotionality and low levels of positive emotionality. They were also above average in distractibility. This group can be conceptualized as having low levels of self-regulation of emotion and attention. Notably, this cluster was also rated as being among the least inhibited and fearful of all temperament clusters.

We investigated in what areas of peer interaction this temperament group (Cluster 5) had the most influence. They were among the most influential of all profiles in peer cultural trends (hair style, music preference, and peer language) and in what games were played with peers. They also had a particularly strong influence on inappropriate behavior in the classroom (e.g., fooling around when the teacher left the room, talking back to the teacher). Their high activity level and distractibility, as well as their low level of fearfulness, probably played an important role in their inappropriate behavior in the classroom.

In addition to investigating which group of children was most influential, one aspect of this research investigated how temperamental profiles and influence were related to indicators of social status as assessed by peers. Peers perceived the children in Cluster 5 to be 'cool' more often than any other cluster, and they had high scores on aggression. They were mildly above average in popularity and athletic skill. The influence of this group seemed to be based in part on their athleticism and on being socially aggressive, but also on their lack of inhibition and fearfulness. Perhaps most of all, they seem to be viewed as charismatic, as indicated by being nominated frequently as 'cool'.
Thus, they can be thought of as using both domination and prestige forms of influence. Children who were perceived by peers as least influential across all five areas of school life were those belonging to Cluster 6 (6% of the sample). Their temperament profile was characterized by low activity level, low sociability, low levels of negative emotionality, and high inhibition/fearfulness. They had below-average scores on peer perceptions of likeability, popularity, trying hard at school, having sympathy for others, being cool, acting aggressively, and having athletic skill. Their lack of influence on others seemed to be a function of their withdrawal from social activities and being perceived as less skillful in sports. The two largest clusters (Cluster 1, 41.2%, and Cluster 2, 18.4%) were average on all temperamental characteristics, with Cluster 2 being viewed as lower in negative mood than Cluster 1. They also had near-average scores on all types of influence based on peer nominations. Further, peer nominations of status-related characteristics were all in the average range as well, with Cluster 2 children being perceived as having moderately higher status than Cluster 1.

One of the most important findings from this research was that the social milieu of the school had a significant effect on the influence exercised by the most influential groups of children. The aspect of the broader social environment that we focused on was the educational attainment of the adult population of the counties in which the children resided. Children in temperament Cluster 3 (happy, social, active, and well self-regulated) were viewed as the second most influential group. When they lived in a county with higher adult educational attainment (particularly among adult males), they had more influence on academic behaviors (e.g., trying hard in school) than Cluster 3 children who lived in rural counties with lower educational attainment. The reverse was true of children in Cluster 5: these children had more influence when they lived in the counties with lower educational attainment.

Theoretical implications

Temperament researchers, spurred on by findings of significant stability of temperament traits, as well as long-term significant prediction of adult behavior from measures obtained in early childhood, have made major strides in the understanding of child behavior. However, they have not paid much attention to environmental factors that may alter the expression of temperamental traits. The major exception to this rule is the role of parenting in temperamental characteristics (Bridgett et al., 2015; Bornstein et al., 2018). All temperament theorists and researchers posit that temperament is not static. While most behavioral characteristics understood as being rooted in temperament have been shown to have moderate stability, all available data indicate that some children are very stable, most children exhibit some change, and a few children exhibit major changes in their trait-level scores. What is less clear are the mechanisms and social forces that influence these changes. There is another type of change that is even less well understood: how do children with the same temperamental profile alter their social behavior to meet changing environmental demands? The research reported here did not study change over time, but it does open the door to thinking about this question.
We found that children with the same temperamental profile who live in different social environments engage those environments in different ways. Stated another way, children who are influential in one environment are less influential in another. These findings remind temperament researchers that human beings are social animals and that temperamental characteristics may have a different impact in different social circumstances.

Strengths, limitations, and future research

The research reported here utilizes measures of individual differences that have been shown to appear in the first few years of life (i.e., temperamental differences) to explore questions about which children have the most influence in the peer group in late elementary school. Temperamental individual differences were measured as individual traits based on teachers' perceptions of their students. One of the strengths of this study is that temperament profiles have rarely been empirically developed based on teacher perception. These profiles were then used to investigate influence patterns that occur among peers in schools. A second strength of this research is that it is one of the first attempts to relate empirically derived temperament profiles to peer influence in a school setting. Further, the status characteristics of children as viewed by other children were studied in the context of temperament profiles, revealing that children with different temperament profiles manifest their influence through different sets of status-related characteristics (e.g., popularity) and behavior (e.g., showing sympathy). These associations will help researchers in the area of social processes understand some links between individual differences, status, and influence. Finally, the research demonstrated that the broader social context in which children live is related to how and by whom peer influence is exhibited. These findings were strengthened by having independent measures of behavior from teachers and students.

A strong point of the research is that each type of measurement (teacher-rated temperament, student perceptions of influence, and student perceptions of social status-related behavior) was measured in detail as well as globally. That is, six dimensions of temperament were assessed, influence was assessed in five areas of school life, and status was measured through global indices (e.g., perceived popularity) as well as specific behaviors related to likeability and social stature. The availability of these more specific aspects of influence and status allowed for the determination of what type of status and influence was most affected by temperamental differences. Finally, the sample size was large enough to allow for the application of a modeling technique that requires relatively large samples (latent profile analysis) and to allow for a model of seven different profile types (n = 797). Having a sample of this size in conjunction with a detailed assessment of student social lives from two perspectives (teacher and student) is very rare in the temperament literature.

Despite these strengths, the research had several limitations. The data analyzed in the current study were obtained from teachers and children during one developmental period, and on one occasion. Thus, the timing of the effects of temperament on both social status and influence remains unclear. Further, temperament was assessed from the point of view of one teacher in each classroom.
The research would have been strengthened if more than one teacher had assessed the temperament of each student; parental assessment would also have enhanced the temperament assessment. In addition, there was a confound between the interpretation of the social environment, described at the county level, and the minority status of the participants and their families. This occurred because ethnicity/race and school group were entangled to some extent: the Group B schools were less diverse than the Group A schools. The findings would have been stronger if the diversity of the two school systems had been similar. That type of design would have clarified the effects of educational attainment independent of other cultural factors that are associated with rural southern culture. A further weakness of this study was the reliance on county-level educational attainment data; the results would have been much clearer if the educational attainment of each individual family had been assessed.

The findings reported here clearly indicate the need for a longitudinal approach in which temperamental traits are measured at several time periods in different environments to determine the effects of the environment (a) on the measurements of traits over time, (b) on the association of temperament with social status phenomena, and (c) on social influence patterns across environments and developmental levels. To enhance the understanding of temperamental effects on peer influence in different environments, special care to measure the environments as precisely as possible is critical.

Data availability statement

The data analyzed in this study are subject to the following licenses/restrictions: the dataset was collected by the co-author and is still being analyzed for various publications. It is not available to the public. Requests to access these datasets should be directed to mlease@uga.edu.

Ethics statement

The studies involving human participants were reviewed and approved by the University of Georgia Ethics Review Board. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
Fatigue and Quality of Life Outcomes of Palliative Care Consultation: A Prospective, Observational Study in a Tertiary Cancer Center

Purpose: Fatigue is one of the most common symptoms seen in patients with advanced cancer. It is known to influence the Quality of Life (QoL) of patients. This study examines the interrelationship of fatigue and QoL in patients with advanced cancer on palliative care. Methods: A prospective cohort study was conducted in the outpatient clinic of the Department of Palliative Medicine from January to June 2014. Patients with advanced cancer registered with the hospital palliative care unit, meeting the inclusion criteria (Eastern Cooperative Oncology Group [ECOG] ≤3, Edmonton Symptom Assessment Scale [ESAS] fatigue score ≥1), and willing to participate in the study were assessed for symptom burden (ESAS) and QoL (European Organization for Research and Treatment of Cancer QoL Core 15-Palliative module [EORTC-QoL PAL15]). All study patients received standard palliative care consultation and management. They were followed up in person or telephonically within 15-30 days from the first consult for assessment of outcomes. Results: Of a total of 500 cases assessed at baseline, 402 were available for follow-up (median age of 52 years; 51.6% male). On the EORTC-QoL PAL15 scale, overall QoL, emotional functioning, and constipation were found to be significantly associated with severity of fatigue at baseline (P < 0.05). Statistically significant improvement in fatigue score was observed (P < 0.001) at follow-up. Improvements in physical functioning and insomnia were significantly associated with better fatigue outcomes. Conclusions: Fatigue improved with the standard palliative care delivered at our specialty palliative care clinic. Certain clinical and biochemical factors and QoL aspects were associated with fatigue severity at baseline, improvement of which led to lesser fatigue at follow-up.

Patients may report fatigue differently due to unique differences in expectations and coping abilities. [8] Several studies on Caucasian populations have demonstrated the adverse impact of fatigue on physical, emotional, economic, and social aspects of the lives of cancer patients. [2,[9][10][11][12][13][14] In a study conducted in a group of cancer patients undergoing radiotherapy, fatigue as measured by the Multidimensional Fatigue Inventory was associated with poor QoL. Fatigue was considerably lower before treatment started than at posttreatment or follow-up, suggesting that fatigue can be encountered even when treatment has ended. [12] Tanaka et al. conducted a study on sixty patients with uterine cancer treated at a university hospital in Sweden. In this study, fatigue was measured with the MFI-20 and the European Organization for Research and Treatment of Cancer (EORTC) quality of life questionnaire (QLQ-C30) fatigue subscale. Results showed that fatigue was significantly associated with global QoL. [2] In another study, conducted on 171 patients with advanced lung cancer, fatigue was found to interfere with at least one daily life activity in more than half of the patients. [13] Interestingly, some studies reported that fatigue decreases at the end of life. This was explained to be a kind of adaptation to the situation, as was shown in a study done by Sprangers and Schwartz in 1999. [15] This phenomenon has been backed by the proposals from the EAPC working group on fatigue in palliative care: fatigue in the final stage of life may serve as a protection mechanism which relieves suffering.
Furthermore, Wu and McSweeney [16] have supported the same opinion and have associated fatigue with a positive meaning in life. According to them, it serves as a defense mechanism protecting the patient from psychological collapse. This is postulated to be because of changes in patients' perception of goals of care, values, or priorities in life in their last days, which in turn brings about a change in the perception of fatigue. In a systematic review of published literature, it was found that fatigue negatively affects patients' QoL in advanced cancer. [17] A majority of the studies were retrospective, outpatient-based, or cross-sectional in nature; in some, patients were not routinely screened before enrollment but a convenience sample was used; whereas some had parameters other than fatigue as their primary outcome measure. [17,18] Fatigue, a subjective entity, varied in measurement across different studies due to the use of multiple symptom inventories rather than a single standardized one. In some studies, there was a lack of detailed information, i.e., history of cancer treatment, biological data of the cancer, biochemical parameters, different stages and sites of metastatic lesions, and a broader range of psychosocial data. The statistical models used to predict the factors associated with improvement of fatigue in certain instances failed to attain the required outcome measure (as indicated by adjusted R²). Several authors highlighted the need for larger RCTs or prospective studies. [17,18]

It is well established that fatigue significantly impacts all domains of QoL, but it is often underdiagnosed and undertreated. Although there are studies on pain and QoL, [19,20] there are none from Indian centers on cancer-related fatigue and its impact on QoL. Our study tries to address gaps in the literature pertaining to the Indian population. Using a prospective design, this study attempts to unravel the complex relationship between two very intermingled constructs, QoL and cancer-related fatigue. It was conceptualized with a primary objective of determining the effect of fatigue on QoL items in patients with advanced cancer. The secondary objective looked into QoL items associated with improvement in fatigue after standard palliative care consultation. We postulated that fatigue negatively affects the QoL of patients with advanced cancer.

Study patients

This was a prospective observational study carried out over a period of 6 months, from January 1, 2014 to June 30, 2014, at the Department of Palliative Medicine, Tata Memorial Centre (Mumbai). Posters were used to solicit the participation of prospective research subjects in the study. Due diligence was taken to ensure that the procedure for recruiting cases was not coercive and did not state or imply a certainty of favorable outcomes or other benefits beyond what was outlined in the consent document and the protocol. All patients presenting to the outpatient clinic of the palliative care service were screened and accrued as per the inclusion criteria. The inclusion criteria were: all literate adult patients (age ≥18) with advanced cancer having an Eastern Cooperative Oncology Group (ECOG) score ranging from 0 to 3, a fatigue score >0 on the Edmonton Symptom Assessment Scale (ESAS), and a prognosis of >4 weeks' predicted survival, who were willing to adhere to a follow-up schedule at the hospital or over the phone 15-30 days after the visit.
The exclusion criteria were: patients with an ECOG score of 4, an ESAS fatigue score of 0, predicted survival of ≤4 weeks, or unwillingness to adhere to follow-up. All patients who participated in the study completed a written informed consent form at the time of their initial enrollment. Compensation in any form was not provided for taking part in this study. However, necessary facilities, emergency treatment, and professional services were made available to the study subjects, similar to the usual procedures of the hospital. Due diligence was taken to protect the patients' confidentiality. The Institutional Review Board of the hospital approved the study (Project No: 1181), and it was registered with the Clinical Trials Registry of India (CTRI REF/2014/02/006537).

Study procedures

All study-related procedures, including data collection, were performed by the author and coauthors, all physicians trained in palliative medicine for 1-3 years. We did a baseline assessment of the participants at the first visit to the outpatient clinic. It involved medical consultation, recording of sociodemographic information and symptom scores using the ESAS (completed by study subjects), performance score using ECOG, basic anthropometry (height and weight), blood investigations (hemoglobin and albumin), recording of daily morphine/oral morphine equivalent consumption, and QoL assessment using the EORTC QLQ-Core 15-Palliative module (EORTC QLQ-C15-PAL). Follow-up assessment was done 2-4 weeks after baseline assessment, either in person or by telephone for the small proportion of patients who could not come. The procedure for follow-up assessment was similar to that at baseline. All patients received standard palliative care intervention from our clinic. Because some patients may require more frequent visits with the palliative care team, either the patients or the palliative care clinician requested and scheduled more frequent visits at their discretion. If study patients were admitted to the hospital in the course of the study, the palliative care team visited them on a daily basis throughout their admission. Patients received referrals to other care providers as and when needed.

Palliative care intervention: standard procedure

A consultation includes a thorough palliative care-focused history, physical examination, and discussion of recommendations for further assessment or therapy with the physician. A comprehensive care plan is formulated. It addresses uncontrolled physical symptoms and correction of correctable parameters (anemia, electrolyte abnormalities, etc.). Fatigue is managed by rational use of a combination of drugs (megestrol acetate, dexamethasone), dietary counseling, addition of diet supplements such as L-carnitine and protein supplements in consultation with a dietician (if required), exercise with light- to moderate-intensity walking programs that start with shorter durations and build in intensity over time, and patient education. The clinic also addresses psychological issues such as anxiety, adjustment disorders, depression, and anger. It facilitates decision-making with thorough discussions about the understanding of the extent of illness, treatment options, and complications, addressing communication needs not addressed earlier. The medical social workers help in empowerment and enhancement of social support in the present family situation to address loss of income or identity. Counselors help in enhancing spiritual support and dwindling faith.
Rehabilitation therapists help in solving practical issues such as mobility and impaired activities of daily living. After these initial procedures, patients are reassessed at follow-up appointments or through home-based care as per necessity. Any palliative care medication or nutritional supplement needed by the patient is dispensed in the clinic, and the nurse explains to the patients and their family what the medications are and how to use them. [21]

Study end-points

• Determining the associations of sociodemographics and disease-related information at the initial visit with QoL and fatigue
• Determining the associations of QoL items with severity of fatigue at baseline
• Identifying which QoL items are associated with an improvement in fatigue on follow-up after a standard palliative care consultation.

Statistical analysis

Sample size calculation: This was an observational prospective study conducted at a 5% significance level. No formal sample size and power estimation were done, as no prior information regarding the factors affecting fatigue in our population was available to calculate the sample size. It was decided that all eligible, consenting, consecutive patients having an ESAS fatigue score ≥1 would be enrolled in the study over a period of 6 months. The total number of subjects enrolled was 500. Distributions of data were examined graphically. If the data appeared to be nonnormally distributed, nonparametric equivalents of the parametric tests described below were used for analyses:

• Descriptive statistics - to summarize patient details such as age, gender, geographic distribution, income, education, marital status, cancer type, stage, metastasis, comorbidities, type of treatment received, ECOG, hemoglobin, albumin, body weight, daily oral morphine consumption equivalent, ESAS fatigue score, and QoL as measured by the EORTC QLQ-C15-PAL, recorded at baseline and at the follow-up visit
• Correlation analysis - to determine whether there was any association between fatigue and other parameters at baseline, using the Chi-square test or Spearman's rank-order correlation
• Multiple linear regression of baseline data, with ESAS fatigue as the dependent variable, to determine the predictive factors associated with the severity of fatigue
• Mean/median ESAS fatigue scores recorded at baseline and follow-up
• Comparison of ESAS fatigue scores and EORTC QLQ-C15-PAL QoL scores at baseline and follow-up by the Wilcoxon signed-rank test for nonparametric data, to determine whether there was significant improvement in fatigue from baseline to follow-up
• A logistic regression model to predict improvement in fatigue at follow-up
• Comparison at baseline between patients available at follow-up (n = 402) and those who were not (n = 98), using the Mann-Whitney U-test for continuous nonparametric variables and the Chi-square test for categorical variables.

All analyses were carried out using SPSS 20 (IBM Corp., SPSS Statistics for Windows, Version 20.0, Armonk, NY, USA). Missing data were noted and excluded from analyses, and P values of 0.05 or less were deemed statistically significant. [22]
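As an illustration of the paired and unpaired nonparametric comparisons listed above, the following minimal sketch applies the Wilcoxon signed-rank test to toy baseline/follow-up ESAS fatigue scores and the Mann-Whitney U-test to a toy followed-up versus lost-to-follow-up contrast; the arrays are simulated assumptions, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Toy paired ESAS fatigue scores (0-10): follow-up roughly 1 point lower.
baseline = rng.integers(1, 11, 402)
followup = np.clip(baseline - rng.integers(0, 3, 402), 0, 10)

# Paired baseline vs. follow-up comparison (nonparametric).
w = stats.wilcoxon(baseline, followup)
print("Wilcoxon signed-rank:", w.statistic, w.pvalue)

# Followed-up vs. lost-to-follow-up hemoglobin (unpaired, toy values).
hb_followed = rng.normal(10.5, 1.5, 402)
hb_lost = rng.normal(9.8, 1.5, 98)
u = stats.mannwhitneyu(hb_followed, hb_lost)
print("Mann-Whitney U:", u.statistic, u.pvalue)
```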
RESULTS

A total of 1542 new patients were referred to the Department of Palliative Medicine from January 1, 2014, to June 30, 2014. Five hundred eligible cases participated in the study [Figure 1].

Demographic information

At baseline assessment, 51.6% of the cases were male, with a median age of 52 years (standard deviation [SD] = 13.1 years). Of these, 54.6% earned <Rs. 5000 (76.61 USD) per month, and only 35.8% had secondary education. Most (83.4%) of the cases were married. The most common primary cancer type was head and neck cancer (23.2%), followed by gastrointestinal cancer (21.2%). Ninety-two percent of the patients had stage IV cancer, and 37% had more than one site of metastasis. More than half of the study cases (53.6%) had received multimodal therapy as standard treatment [Table 1].

Quality of life

The mean QoL score on the EORTC QLQ-C15-PAL at the first follow-up visit (67.45) was better than that at initial assessment (51.93), with significant changes on all the items (P < 0.001) [Table 4].

Interrelationship between fatigue and quality of life

A linear regression model was constructed at baseline, with fatigue as the dependent variable and EORTC QLQ-C15-PAL items as independent variables. Factors found to be associated with fatigue were overall QoL (P < 0.001), emotional functioning (P < 0.001), and constipation (P = 0.038). In this predictive model, adjusted R² = 0.607 [Table 5a]. At follow-up, a logistic regression model was constructed to measure the relationship between fatigue as a categorical dependent variable and other factors as independent variables, by estimating probabilities using a logistic function. From this model, predicted improvement of fatigue was observed in 239 patients (47.8%). Change in QoL associated with physical functioning and insomnia was a predictor of improvement in fatigue; the most significant change was associated with improvement in insomnia. The logistic regression model explained 43.6% (Nagelkerke R² = 0.436) of the variance in improvement in fatigue [Table 5b]. In a parallel logistic regression model constructed to find further predictors of the improvement in fatigue, changes in biochemical parameters (hemoglobin and albumin levels) were also predictors of improvement, with the most significant change associated with improvement in albumin level.

ESAS scores at initial and follow-up visits

The ESAS tool was designed to assess common symptoms in cancer patients. The severity of each symptom at the time of assessment is rated from 0 to 10 on a numerical scale, where 0 means the symptom is absent and 10 means it is of the worst possible severity; symptoms are further rated as mild, moderate, and severe.

Patients who did not follow up (n = 98) had characteristics similar to those who did in terms of demographic variables, body weight, and daily oral morphine consumption. However, they had lower hemoglobin (P = 0.02) and albumin (P < 0.001) levels and poorer overall QoL (lower values indicating lower QoL) (P < 0.001). Although not statistically significant, poorer scores were also found on the function scales (physical functioning and emotional functioning) and on the symptom scales (dyspnea, pain, fatigue, and nausea/vomiting) of the EORTC QLQ-C15-PAL [Table 7].
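The two models reported in this section (a linear regression for baseline fatigue severity and a logistic regression for improvement at follow-up) can be sketched as follows with statsmodels; the predictor names and the simulated relationships are illustrative assumptions, not the study's coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 402
df = pd.DataFrame({
    "overall_qol": rng.normal(50, 15, n),
    "emotional_fn": rng.normal(60, 20, n),
    "constipation": rng.normal(30, 25, n),
})
# Simulated fatigue (0-10) loosely tied to the QoL items.
df["fatigue"] = (8 - 0.05 * df["overall_qol"] - 0.02 * df["emotional_fn"]
                 + 0.01 * df["constipation"] + rng.normal(0, 1, n)).clip(0, 10)

# Baseline model: fatigue severity vs. QoL items (cf. Table 5a).
X = sm.add_constant(df[["overall_qol", "emotional_fn", "constipation"]])
print("adjusted R^2:", sm.OLS(df["fatigue"], X).fit().rsquared_adj)

# Follow-up model: improvement (yes/no) vs. changes in predictors.
df["d_physical"] = rng.normal(5, 10, n)   # change in physical functioning
df["d_insomnia"] = rng.normal(-5, 10, n)  # change in insomnia score
df["improved"] = (rng.random(n) <
                  1 / (1 + np.exp(-(0.03 * df["d_physical"]
                                    - 0.04 * df["d_insomnia"])))).astype(int)
Xf = sm.add_constant(df[["d_physical", "d_insomnia"]])
print(sm.Logit(df["improved"], Xf).fit(disp=0).summary())
```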
DISCUSSION

Our findings provide evidence that palliative care consultation for advanced cancer patients seen at an outpatient palliative care clinic in a comprehensive cancer center was associated with lessening of fatigue and improved QoL at the time of the first follow-up 2-4 weeks later. Moderate to severe fatigue was common (80.8%) in the patients we evaluated. The severity of fatigue at the time of consultation significantly correlated with many of the EORTC QLQ-C15-PAL items in the predictive model, such as overall QoL (P < 0.001), emotional functioning (P < 0.001), and constipation (P = 0.038). At follow-up, an improvement in fatigue scores was observed in 47.8% of patients. Thus, the results of this study show preliminary evidence that palliative care consultation was successful in reducing the severity of fatigue. This is supported by our model, which captured most of the factors associated with fatigue, as indicated by an adjusted R² of 0.607. The observed improvement in fatigue score in this study is 1 point, which corresponds to the minimal clinically important difference (MCID) of fatigue (the MCID of the fatigue item is ≥1 point on the ESAS). [24] The factors responsible for amelioration of fatigue were improvement in biochemical parameters (hemoglobin and albumin) and betterment in QoL associated with physical functioning and insomnia, as was evident at follow-up visits. When compared to baseline, patients who did not follow up had poorer parameters. These findings are similar to those seen in other studies. [1][2][3][4][5][9][10][11][12][13][14]17,18]

The strengths of this study are that the assessments for fatigue and other symptoms were done prospectively at both initial and subsequent follow-up visits, using validated tools, in a dedicated palliative care clinic, by trained palliative care physicians. The study consisted of a heterogeneous sample of patients with different cancer diagnoses, presenting at different stages of the disease process. The fatigue assessment was completed by the patients under the supervision of the physician investigator, who is trained specifically in palliative care, and standardized management was provided by a specialized palliative care team in accordance with our institutional standard operating procedure (see Methods). Moreover, this study differed from other studies in its relatively large population sample of 500 patients observed prospectively and its focus on factors associated with improvement in fatigue after an outpatient palliative care consultation. Fatigue improved despite an increase in opioid prescribing as a result of consultations. This finding could be important, more so in the Indian setting, given the strong prejudice against using opioids and the misconception that they will increase drowsiness and possibly fatigue in patients; we have shown the opposite, which is important in this context. Fatigue has been an essential component influencing the construct of QoL. Future prospective trials are needed; however, our findings suggest that in populations similar to ours, fatigue interventions should incorporate a multimodal interdisciplinary approach, including the treatment of fatigue-related symptoms in addition to specific pharmacological and nutritional interventions for fatigue. These results are consistent with prior studies. [25][26][27] However, female gender, which prior studies have found to be predictive of severity of fatigue, [28,29] was not associated with fatigue in our study.
This result could be due to the unique composition of the population studied. Our study revealed that with appropriate nutritional evaluation and provision of oral nutrition supplements and iron, our patients were able to improve their hemoglobin and albumin levels in 2-4 weeks, and also improve their fatigue levels. These are important findings given the responsibilities of hospitals similar to ours, operating in a resource-poor environment with a high patient load. The government provides a subsidy to only a limited number of patients to ensure the availability of treatment, but our nutrition clinic has devised low-cost nutrition supplements for our patients. Our hospital also provides automated short messaging service alerts and phone calls from the hospital for follow-ups to ensure better compliance. We do home visits by dedicated care teams at regular intervals and involve local general practitioners in care for the patient to maintain continuity. [21] Importantly, with the recent amendment of the Narcotic Drugs and Psychotropic Substances (NDPS) Act, opioid analgesics are readily available for pain and dyspnea control. [30] Moreover, regular counseling sessions at clinic- or home-based care by the departmental counselors might have been able to reduce the distress associated with the biopsychosocial component of fatigue in advanced cancer. This can bring about a change in patients' perception of goals of care, values, or priorities in life, which in turn brings about a change in the perception of fatigue. This is a kind of adaptation to the situation and can serve as an important defense mechanism to relieve suffering in the final stage of life, as has been seen elsewhere. [15,16]

In our study, 98 patients were not available for a second follow-up. In these patients, the hemoglobin and albumin levels were lower, the severity of fatigue and other ESAS symptoms was greater, and, more strikingly, they had significantly lower overall QoL scores than those who had at least one follow-up visit (the study sample) [Table 7]. This high attrition rate could be explained by the fact that the sickest patients might not have been able to participate in a follow-up, which is commonly seen in other studies done in similar patient populations. [2,3,31] Further study is warranted because these screens may be predictive of those patients with a particularly short prognosis. That information would be key for decisions about disease-directed therapy and for selecting patients for end-of-life planning.

Our study has several limitations, the most important of which was its lack of patients in settings other than an outpatient palliative care clinic. Furthermore, because of our concern that patients with advanced cancer might be unable to complete multiple questionnaires, we elected to use only the ESAS scale. No additional questionnaire for assessing derangement of cognitive function was used to assess the patients; although the study was supervised by the investigator, minor errors arising from reduced cognitive ability cannot be ruled out. Symptom-specific instruments (e.g., the Brief Fatigue Inventory) were not used in the palliative care setting, as they were considered even more challenging for frail patients. Another limitation is the use of single-item measures to assess physical and emotional symptoms using the ESAS. However, prior studies have shown that ESAS items and other single-item questionnaires correlate well with multi-item symptom assessment tools.
[32][33][34][35][36] Although, in the vast majority of cases, patients complete the ESAS by themselves in the outpatient setting, there is a possibility that, for the most fatigued patients, the caregiver could have introduced bias by assisting the patient. More research is necessary to address this possibility. This study also did not assess other well-known factors that may contribute to fatigue, such as inflammatory biomarkers, which play an important role in fatigue causation. [37][38][39] Some of the statistically significant associations in this study may be artifacts of the multiple regression analyses performed. Future prospective, randomized controlled trials with larger sample sizes and a no-fatigue cohort for comparison are required to validate these important findings. Another important aspect to explore will be the effect of dietary supplementation on the betterment of biochemical parameters associated with fatigue and QoL scores; this can have strong economic implications in resource-poor settings. Furthermore, compliance with medications across serial follow-ups may be a predictor of durable response, which needs further research.

CONCLUSIONS

Our findings suggest that our Indian patients with advanced cancer had moderate to severe fatigue, which could be improved with comprehensive palliative care, nutritional supplementation, and provision of opioids for those who were dyspneic or in pain, all of which are standard palliative care procedures in our outpatient palliative care clinic. Fatigue was associated with worse overall QoL, emotional functioning, and constipation. Improvement in physical functioning and insomnia lessened fatigue.
Key Genes of Immunity Associated with Pterygium and Primary Sjögren's Syndrome

Pterygium and primary Sjögren's Syndrome (pSS) share many similarities in clinical symptoms and ocular pathophysiological changes, but their etiology is unclear. To identify the potential genes and pathways related to immunity, two published datasets, GSE2513 containing pterygium information and GSE176510 containing pSS information, were selected from the Gene Expression Omnibus (GEO) database. Differentially expressed genes (DEGs) of pterygium or pSS patients compared with healthy control conjunctiva, and the common DEGs between them, were analyzed. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses were conducted for the common DEGs. The protein–protein interaction (PPI) network was constructed using the STRING database to find the hub genes, which were verified in clinical samples. There were 14 co-upregulated DEGs. The GO and KEGG analyses showed that these common DEGs were enriched in pathways correlated with virus infection, antigen processing and presentation, nuclear factor-kappa B (NF-κB) signaling, and Th17 cell differentiation. The hub genes (IL1R1, ICAM1, IRAK1, S100A9, and S100A8) were selected by PPI construction. In the era of the COVID-19 epidemic, the relationship between virus infection, vaccination, and the incidence of pSS and pterygium growth deserves more attention.

Introduction

Pterygium is a fibrovascular-like tissue that connects to the conjunctiva and grows towards the surface of the cornea [1][2][3][4], causing corneal astigmatism, blockage of the visual axis, and eventually visual impairment [5,6]. Due to population and regional differences, the reported prevalence of pterygium ranges from 2.3% to 58.8% [7][8][9][10][11]. Currently, surgical resection is the most commonly used clinical treatment, but postoperative discomfort and a high recurrence rate are very serious issues [12,13]. The pathological mechanism is still not completely understood. Previous epidemiologic studies suggested that geographical location, exposure time to sunlight and sand, dry eyes, type I allergy, and human papillomavirus infection are risk factors for the occurrence and progression of pterygium [10,11]. Abnormal tear function has been observed in patients with pterygium, including higher tear osmolarity, a decreased percentage of crystals, and lower goblet cell density [14]. In addition, a long-term postoperative inflammatory state caused by dry eye can aggravate the recurrence of pterygium, while using artificial tears after excision could lower the recurrence rate [15,16]. Previous studies have thus shown a potential association between pterygium development and dysfunction of the ocular surface.

Primary Sjögren's syndrome is an autoimmune disease characterized by dry mouth and dry eyes, whose specific etiology remains unknown. Lymphocyte foci in exocrine glands can be seen on histological examination [17]. The prevalence of pSS in adults is about 5%, with a male-to-female ratio of 1:9, and is especially high in Asian women [18,19].
However, there are known associations between autoimmune diseases and genetic variants in the human leukocyte antigen (HLA) region [24]. Antigen presentation involving T-cells and activation of the type I interferon (IFN-1) signaling pathway play an important role in pSS pathogenesis [25]. As described above, pterygium and pSS are both related to ocular surface dysfunction and eye inflammation, but their commonalities in etiology remain poorly recognized [10,11,[14][15][16][17][20][21][22]. Bioinformatics analysis of DEGs helps to discover key genes and pathways in diseases. Yuting Xu's research revealed the competing endogenous RNA (ceRNA) regulation mechanisms during pterygium pathogenesis: a lncRNA LIN00-dominated ceRNA network containing multiple miRNAs and downstream target genes participates in pathological processes of the pterygium, such as abnormal cell adhesion and proliferation, through the PID/FOXM1 pathway [26]. Siying He's study found that several differentially expressed miRNAs and DEGs, especially miR-29-3p and collagen family genes, were involved in regulating cell death, extracellular matrix breakdown, and the EMT process of the pterygium [27]. Naoko's study showed that UV exposure promotes DEG expression in the pterygium [28]. Recent studies show that DEGs in pSS were enriched in viral infection, activation of immune cells, and mitochondrial metabolism-related signaling pathways [29,30]. All in all, there have been many studies on the bioinformatics analysis of pterygium growth and pSS, but analysis of the correlation between these two diseases is lacking. In this research, we sought a better understanding of the pathogenesis shared at the genetic level between pterygium growth and pSS by analyzing data from the GEO database. This is the first report of common DEGs for the two clinically relevant diseases, pterygium growth and pSS. The flow chart of this research is shown in Figure 1. First, through the datasets downloaded from the GEO database, we identified the DEGs of pterygium growth or pSS compared with healthy control conjunctiva and screened out the common genes. Then hub genes and related pathways were obtained through GO and KEGG enrichment analysis and PPI network construction. Consequently, the hub genes (IL1R1, ICAM1, IRAK1, S100A9, and S100A8), together with the immune response to viral infection and the IL-1- and S100A8/A9-related signaling pathways, were selected. Identification of DEGs and Intersection Set in Pterygium Growth and pSS In the pterygium dataset GSE2513, 1601 DEGs were screened out when we compared the eight pterygium samples with four healthy controls. A total of 906 DEGs were up-regulated, while 695 DEGs were down-regulated (p < 0.05). In the pSS dataset GSE176510, 147 immune pathway-related DEGs were screened out when we compared the seven pSS samples with 19 healthy controls. A total of 137 DEGs were up-regulated, while 10 DEGs were down-regulated (p < 0.05). The top DEG expression profiles of pterygium and pSS samples are presented by volcano plots and heatmaps (Figure 2). After analysis using the online analysis tool Venn, 14 genes (IRAK1, LEF1, CTSS, HLA-DPA1, S100A8, ARHGDIB, CD59, TAP2, ICAM1, IL4R, CEACAM1, BAX, IL1R1, S100A9) that were up-regulated in both pterygium and pSS samples were screened out, while no DEGs were commonly down-regulated (Figure 3). The selected 14 commonly up-regulated DEGs were used for subsequent functional analysis.
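As a minimal illustration of the intersection step reported above, the shared up- and down-regulated gene lists reduce to set operations in base R; the input vector names below are hypothetical placeholders for the significant gene symbols extracted from each dataset (limma screening, p < 0.05):

```r
# Hypothetical character vectors of significant gene symbols from each dataset
# (see the limma sketch in the Methods section below for one way to derive them)
common_up   <- intersect(up_genes_gse2513,   up_genes_gse176510)   # shared up-regulated DEGs
common_down <- intersect(down_genes_gse2513, down_genes_gse176510) # empty in this study
length(common_up)   # expected here: 14
common_up           # e.g., IRAK1, LEF1, CTSS, ..., S100A9
```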
(Figure 2 caption: DEGs between patients and healthy control conjunctiva were screened from each dataset using the classical Bayesian method with a threshold of p < 0.05.)
GO and KEGG Enrichment Pathway Analysis Functional enrichment and pathway analyses of the 14 commonly up-regulated DEGs in pterygium and pSS samples were performed at the threshold of p < 0.05 (Figures 4 and 5). Changes in GO biological processes (BP) mainly included immune response and cell-cell adhesion (e.g., T-cell activation, regulation of cell-cell adhesion, leukocyte cell-cell adhesion, and neutrophil activation and degranulation). Changes in cellular component (CC) were notably focused on enrichment of cell outer membranes, such as collagen-containing extracellular matrix, tertiary granule, transport vesicle and external side of plasma membrane. Moreover, in the molecular function (MF) section, significant changes occurred in receptor activity (RAGE receptor binding, Toll-like receptor binding, and cytokine receptor binding) and binding-related functions (fatty acid binding and heat shock protein binding). In particular, the KEGG analysis indicated that changes in signaling pathways were mostly enriched in virus infection (Epstein-Barr virus infection, Human T-cell leukemia virus 1 infection, Herpes simplex virus 1 infection), antigen processing and presentation, and immune-related pathways (NF-κB signaling pathway, Th17 cell differentiation and phagosome).
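The enrichment itself was run through DAVID's web interface (see Methods); as a hedged, programmatic alternative, the same kind of GO/KEGG analysis is often scripted in R with the clusterProfiler package. The following is a sketch under that assumption, not the authors' actual pipeline:

```r
library(clusterProfiler)
library(org.Hs.eg.db)

# Map the 14 common gene symbols to the Entrez IDs the enrichment functions expect
ids <- bitr(common_up, fromType = "SYMBOL", toType = "ENTREZID", OrgDb = org.Hs.eg.db)

ego <- enrichGO(gene = ids$ENTREZID, OrgDb = org.Hs.eg.db, ont = "ALL",
                pAdjustMethod = "BH", pvalueCutoff = 0.05)   # BP/CC/MF terms
ekg <- enrichKEGG(gene = ids$ENTREZID, organism = "hsa",
                  pvalueCutoff = 0.05)                        # KEGG pathways
dotplot(ego)   # bubble plots analogous to Figures 4 and 5
dotplot(ekg)
```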
PPI Network Analysis and Hub Gene Selection To discriminate the hub genes from the 14 commonly up-regulated DEGs in pterygium and pSS samples, a PPI network was constructed. Interleukin 1 receptor type I (IL1R1), intercellular adhesion molecule 1 (ICAM1), interleukin-1 receptor-associated kinase 1 (IRAK1), S100 calcium binding protein A9 (S100A9) and S100 calcium binding protein A8 (S100A8) showed comparatively higher degrees in the PPI network and were discriminated as hub genes. Furthermore, the PPI network was divided into three clusters centered on IL1R1 and S100 proteins A8/9 (Figure 6). Hub Genes Expression in Clinical Samples There was no significant difference in gender and age among the three groups of clinical samples (Table 1). Compared with the healthy control conjunctiva, the expression of the five hub genes (IL1R1, ICAM1, IRAK1, S100A9, and S100A8) in pterygium or pSS samples was significantly different (p < 0.05) (Figure 7). Discussion In the current study, gene expression analysis was performed on previously published datasets of pterygium and pSS patients to uncover shared gene signatures underlying their clinical relevance. Several DEGs in pterygium and pSS tissues were identified from the datasets, and five hub genes (IL1R1, ICAM1, IRAK1, S100A9, and S100A8) were finally screened out. Bioinformatics analyses show the common DEGs are remarkably enriched in pathways associated with virus infection, antigen processing and presentation, the NF-κB signaling pathway, Th17 cell differentiation, and neurotrophin signal transduction.
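Returning to the hub-gene selection above: cytoHubba's degree ranking of the STRING network is, at bottom, just node degree in the interaction graph. A minimal sketch with the igraph package, assuming the STRING edge list has been exported to a two-column data frame (the name string_edges is hypothetical):

```r
library(igraph)

# string_edges: assumed data frame with columns node1, node2 (STRING score >= 0.4)
g <- graph_from_data_frame(string_edges, directed = FALSE)
node_degree <- sort(degree(g), decreasing = TRUE)
head(node_degree, 5)   # top-degree nodes; here IL1R1, ICAM1, IRAK1, S100A9, S100A8
```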
Further functional investigations of these genes and pathways are necessary to elucidate their roles in the pathogenesis of pterygium and pSS. The IL-1 family (IL-1F) is a group of highly structurally conserved exocrine cytokines involved in diverse immune responses [31,32]. Among the 11 members of this family, IL-1α and IL-1β are the most studied; the former is widely expressed in various cells, and the latter is mainly secreted by monocyte-macrophages [33]. The receptor for IL-1F consists of 10 transmembrane proteins with a similar structure, which is formed by three Ig-like domains responsible for ligand binding, a transmembrane domain, and an intracellular portion with the Toll-IL-1-receptor (TIR) domain, responsible for signal transduction. The IL-1F binds to co-receptors to form a ligand-receptor complex, which mediates interleukin-1-dependent activation of NF-κB, MAPK, and other pathways by recruiting TOLLIP, MYD88, IRAK1 or IRAK2 and other receptor proteins [34][35][36]. Previous studies have shown that, compared with healthy controls, the expression of IL-1α and IL-1β is obviously increased in the lacrimal and salivary gland tissues of patients with pSS [37][38][39]. In addition, the high expression of IL-1F in pSS patients could recruit T-lymphocytes and cause a long-term inflammatory response, leading to ocular surface squamous metaplasia [40]. A previous study also showed that IL-1β promotes matrix metallopeptidase-9 (MMP-9) production and migration of pterygium fibroblasts [41]. Blockers of IL1R1 and IL-1β have been used in the treatment of rheumatoid arthritis, breast cancer, and other diseases, but there is still a lack of research and application in ocular surface inflammatory diseases [42][43][44]. The S100 proteins are calcium-binding proteins that can combine with Ca2+ and other metal ions to exert intracellular activity and regulate calcium homeostasis, the cell cycle, and cell growth. The S100 proteins can also bind to receptors such as advanced glycation end products and Toll-like receptors through paracrine signaling for extracellular regulation. These processes can lead to activation of T-cells and release of inflammatory factors, thereby damaging the immune homeostasis of the conjunctiva and causing ocular surface inflammation [45][46][47]. Protein and gene level testing confirmed that the expression level of S100A8/9 proteins in pterygium tissue was higher than that in normal conjunctiva tissue. The highly expressed S100A8/9 protein can bind to the keratin filament expressed during terminal differentiation, which may promote the reorganization of the cytoskeleton in the process of pterygium hyperproliferation [48]. Compared with primary pterygium tissue, S100A7 expression was increased in recurrent pterygium tissue and the MAPK inflammatory response pathway was activated [49]. Proteomic profiling showed that S100A9, Histone H1.4, and neutrophil collagenase were upregulated in the saliva and tears of pSS patients [50]. In the pSS rabbit eye model, S100A6 and S100A9 proteins in tears were up-regulated, while the polymeric immunoglobulin receptor (pIgR) and the immunoglobulin gamma chain C region were downregulated [51]. These phenomena are inseparable from the function of S100 proteins in activating the innate immune system and altering the immune tolerance of the eye. In the corneal neovascularization animal model, the expression levels of S100 proteins, especially S100A8/9, were significantly increased, and subconjunctival injection of antibodies could inhibit angiogenesis [52].
Antibody treatment with anti-S100A8/9 has been proven effective in mouse models of colitis, acute myocardial infarction, and pancreatitis, which reveals the effect of anti-S100A8/9 antibody treatment on inflammatory diseases [53][54][55]. Although direct study of pterygium or pSS with anti-S100A8/9 is lacking, its use in the treatment of other diseases effectively reduces a variety of inflammatory factors that are also involved in ocular immunity, suggesting the feasibility of anti-S100A8/9 in the treatment of pterygium or pSS [47]. Our research revealed that common DEGs in pSS and pterygium disease were enriched in pathways associated with virus infection and antigen processing and presentation. The microbiota, including bacteria, fungi, viruses, protozoa, and eukaryotes, contributes to maintaining ocular surface homeostasis and immune tolerance, but can be destroyed by infection [56]. Persistence of human papillomavirus (HPV) infection was found to be correlated with postoperative pterygium recurrence [57]. Recent research provides strong serological evidence for the association among Epstein-Barr virus (EBV), human T-cell leukemia virus type 1 (HTLV-1) infection and pSS [58][59][60]. The coronavirus disease 2019 (COVID-19) pandemic is posing a serious threat to global public health, and viral infections and ocular surface diseases deserve more attention. Di Ma's research revealed that pterygium tissue had higher expression of the SARS-CoV-2 receptors ACE2 and TMPRSS2 than normal conjunctiva in the mouse model [61]. Many studies have shown that patients with pSS are more susceptible to COVID-19 infection, and the type I interferon pathway may be a common mechanism [62]. It has also been suggested that COVID-19 vaccination may lead to early onset of subclinical pSS, but long-term validation with large sample sizes is lacking [63]. At present, sustained antiviral ocular drug delivery systems, including nanocarriers, prodrugs and in situ gels, are receiving extensive attention. However, due to the existence of the ocular barrier and the lack of virus-specific drugs, further research is still required. The pathogenesis of pterygium growth remains unknown. Some hypotheses suggested that the occurrence and development of a pterygium were related to the destruction of the limbal stem cell barrier, cell aging, and epithelial-mesenchymal transition (EMT). However, our analysis shows that gene expression related to virus infection and immunity may play an important role in the pathogenesis of a pterygium and pSS. Although the data from the GEO database are insufficient, qPCR detection of clinical samples confirmed that these hub genes are indeed differentially expressed in pSS conjunctiva and in a pterygium, which deserves more attention. This is the first analysis of common DEGs for these two clinically relevant diseases. Literature studies suggest that IL-1, S100A8/9, viral infection, and the above-mentioned signaling molecules could serve as indicators of pathogenesis, predictors of prognosis, and potential therapeutic targets of pterygium growth and pSS, but further validation experiments are needed. Microarray Data Collection The datasets were obtained from the GEO database using "pterygium", "pterygia" and "Sjögren's syndrome" as search keywords. We established the following screening criteria: (1) The dataset is Homo sapiens data. (2) The database contains sequencing information of total RNA extracted from the conjunctiva of patients and healthy controls. (3) The dataset has no missing data.
(4) The original data of the dataset can be downloaded. Finally, we chose the GSE2513 and GSE176510 datasets. The GSE2513 dataset contained the gene expression profiles of eight pterygium patients and four healthy conjunctiva controls and was analyzed using the GPL96 [HG-U133A] Affymetrix Human Genome U133A Array [64][65][66]. The GSE176510 dataset contained the expression profiles of immune pathway-related genes of seven pSS patients and 19 age- and sex-matched healthy controls and was analyzed using the GPL28577 NanoString Human Immunology v2 Code Set. Data Processing and Identification of DEGs All original data were downloaded from the GEO database and standardized using the limma package of R 3.2.3. Probes were annotated according to the annotation files; probes without corresponding gene symbols were removed. When multiple probes matched the same gene symbol, the average value was calculated for subsequent analyses. The DEGs between pterygium patients and healthy controls were screened from the GSE2513 dataset using the classical Bayesian method with a threshold of p < 0.05. The DEGs between the pSS patients and healthy controls based on GSE176510 were screened using the same method. Then we used the online diagram tool (http://bioinformatics.psb.ugent.be/webtools/Venn/ (accessed on 20 March 2022)) to construct a Venn diagram of the DEGs screened from the two datasets to get their intersection set, which was used for subsequent analysis. To visualize the DEGs, R 3.2.3 and a free online application (http://www.heatmapper.ca/expression/ (accessed on 20 March 2022)) were used to construct volcano plots and heatmaps. Functional Enrichment Analyses for Intersection DEGs To evaluate relevant biological functions and signaling pathways, GO and KEGG pathway enrichment analyses were carried out using the Database for Annotation, Visualization and Integrated Discovery (DAVID) (https://david.ncifcrf.gov/home.jsp (accessed on 20 March 2022)) for the intersecting common DEGs, with a threshold of adjusted p < 0.05 [67,68]. The Benjamini-Hochberg method was used to correct the p-value. Construction of PPI Network and Identification of Hub Genes By using the protein interaction network to trace the upstream and downstream relationships of signal transmission and gene expression regulation, the key genes and functional modules involved in disease occurrence and progression can be effectively identified. To further explore the interactions among the common DEGs of these two clinically correlated diseases, a PPI network was established through the STRING database (https://cn.string-db.org/ (accessed on 20 March 2022)), with the organism set as "Homo sapiens" [69]. Moderate confidence (0.400) was set as the minimum required interaction score to ensure statistical significance. The clustering option of the PPI network was k-means clustering. In the exported network images, the nodes represent proteins, while the lines represent the interactions between proteins. Then, the cytoHubba plugin of Cytoscape 3.7.2 was used to examine the PPI network and identify the hub genes with the top node degrees. Acquisition of Clinical Samples In order to verify the hub DEGs screened from the database, we selected three pterygium patients, three pSS patients, and three healthy controls for analysis. The experiment was conducted according to ethical requirements. All subjects signed an informed consent form.
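As a concrete sketch of the DEG screening described under "Data Processing and Identification of DEGs" above: limma's moderated (empirical, or "classical", Bayesian) t-tests with a raw p < 0.05 cutoff could be run roughly as follows for GSE2513. The expression-matrix name is hypothetical, and the group sizes follow the dataset description:

```r
library(limma)

# exprs_mat: assumed normalized expression matrix (rows = genes, cols = 12 samples)
group  <- factor(c(rep("pterygium", 8), rep("control", 4)))
design <- model.matrix(~0 + group)
colnames(design) <- levels(group)

fit  <- lmFit(exprs_mat, design)
fit2 <- eBayes(contrasts.fit(fit, makeContrasts(pterygium - control, levels = design)))

tt   <- topTable(fit2, number = Inf)   # moderated t statistics for all genes
degs <- subset(tt, P.Value < 0.05)     # threshold used in the paper
up   <- subset(degs, logFC > 0)        # 906 up-regulated DEGs reported for GSE2513
down <- subset(degs, logFC < 0)        # 695 down-regulated DEGs reported
```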
Each participant with a pterygium or pSS met the following criteria: (1) The patient was definitively diagnosed as having a pterygium or pSS. (2) The patient had no other eye diseases. (3) The patient had no major systemic disease. (4) Patients of either gender were included. (5) The patient was 30-80 years old. The basis for the inclusion of healthy controls was the same as the above-mentioned criteria, but all eye diseases were excluded. Patients with a pterygium volunteered to undergo pterygium resection, and pterygium tissue was collected during the operation. Conjunctival cells were obtained from pSS patients and healthy controls by the impression cytology method. All acquired tissues were immediately stored at −80 °C. Tissue RNA Extraction and Quantitative Real-Time PCR (qPCR) Total RNA was extracted from the frozen tissues with RNAiso Plus reagent (Takara, Dalian, China), and then reverse transcribed using PrimeScript™ RT Master Mix reagent (Takara, Dalian, China). The qPCR experiment was conducted using SuperReal PreMix Plus (SYBR Green) reagent (Tiangen, Beijing, China) in the CFX96™ Real-Time PCR Detection system (Bio-Rad, Hercules, CA, USA). The gene GAPDH was selected as the internal reference gene. The expression of all genes was measured by Ct value and normalized to the healthy control group. Primers used for qPCR detection come from the data published by PrimerBank (Table 2, which lists the gene-specific forward and reverse primer sequences, e.g., forward GTATGAACTGAGCAATGTGCAAG and reverse GTTCCACCCGTTCTGGAGTC). Statistical Analysis For comparison of two groups of quantitative data, unpaired t-tests were conducted in the GraphPad Prism program. For comparison of multiple groups of quantitative data, one-way ANOVA was used; the mean of each test group was compared with the mean of the control group. For comparison of qualitative data, the chi-square test was conducted in the SPSS program. Statistical significance was regarded as p < 0.05. Conclusions We identified a shared gene signature (IL1R1, ICAM1, IRAK1, S100A9, and S100A8) between pterygium presence and pSS. The analysis of this signature underlined that the immune response to viral infection and the IL-1- and S100A8/A9-related signaling pathways probably play vital roles in the development of these two clinically correlated diseases. This finding may help in the identification of new therapeutic targets and the understanding of the pathological mechanisms.
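A hedged sketch of the qPCR quantification step described in the Methods above: "measured by Ct value and normalized to the healthy control group" is consistent with the common 2^−ΔΔCt (Livak) method, which is assumed here; all variable names are hypothetical:

```r
# ct_target, ct_gapdh: assumed Ct vectors for one hub gene and the GAPDH reference;
# group: factor of sample labels ("control", "pterygium", "pSS")
d_ct  <- ct_target - ct_gapdh                     # delta-Ct: normalize to reference gene
dd_ct <- d_ct - mean(d_ct[group == "control"])    # delta-delta-Ct: normalize to controls
rel_expr <- 2^(-dd_ct)                            # relative expression per sample

# Group comparison as described under Statistical Analysis (one-way ANOVA)
summary(aov(rel_expr ~ group))
```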
Regulation of Macropinocytosis by Diacylglycerol Kinase ζ Macropinosomes arise from the closure of plasma membrane ruffles to bring about the non-selective uptake of nutrients and solutes into cells. The morphological changes underlying ruffle formation and macropinosome biogenesis are driven by actin cytoskeleton rearrangements under the control of the Rho GTPase Rac1. We showed previously that Rac1 is activated by diacylglycerol kinase ζ (DGKζ), which phosphorylates diacylglycerol to yield phosphatidic acid. Here, we show DGKζ is required for optimal macropinocytosis induced by growth factor stimulation of mouse embryonic fibroblasts. Time-lapse imaging of live cells and quantitative analysis revealed DGKζ was associated with membrane ruffles and nascent macropinosomes. Macropinocytosis was attenuated in DGKζ-null cells, as determined by live imaging and vaccinia virus uptake experiments. Moreover, macropinosomes that did form in DGKζ-null cells were smaller than those found in wild type cells. Rescue of this defect required DGKζ catalytic activity, consistent with it also being required for Rac1 activation. A constitutively membrane bound DGKζ mutant substantially increased the size of macropinosomes and potentiated the effect of a constitutively active Rac1 mutant on macropinocytosis. Collectively, our results suggest DGKζ functions in concert with Rac1 to regulate macropinocytosis. Introduction Macropinocytosis is a form of endocytosis in which extracellular fluid and plasma membrane are internalized in large vesicles [1,2]. Macrophages and immature dendritic cells use constitutive macropinocytosis to sample molecules for antigen presentation [3,4]. In most cells however, it is a transient response to growth factor stimulation and represents an efficient route for the non-selective uptake of nutrients and solutes. Macropinocytosis also contributes to infection since many pathogenic bacteria and viruses exploit it as a pathway to gain entry into cells [5][6][7]. It has also been implicated in the modulation of cell-cell adhesion by regulating the internalization of E-cadherin-catenin complexes [8,9]. Moreover, recent evidence suggests Ras-transformed cancer cells use macropinocytosis to internalize extracellular protein to supply amino acids for their proliferation and growth [10]. Thus, macropinocytosis is a fundamental cellular process used by a variety of cell types and underpins several different biological functions. Macropinosomes are large diameter (0.2-5 um) vesicles formed from actin-rich, sheet-like extensions of the plasma membrane called ruffles [1]. Most ruffles dissolve back into the plasma membrane, but some peripheral ruffles fold back on themselves, forming fluid-filled compartments. Other, circular-shaped ruffles form open, cup-like membrane extensions that close and trap extracellular fluid and solutes [2]. In both cases, macropinosome formation requires constriction of the distal margin of these pockets followed by membrane fusion and fission events to separate the macropinosome from the plasma membrane [1]. Newly formed macropinosomes then transition into early endosomes and eventually fuse with lysosomes [11]. Organized movements of membranes and the actin cytoskeleton are coordinated during membrane ruffling and macropinocytosis by a variety of signaling molecules. Membrane phosphoinositides, protein and lipid kinases, and Rho GTPases have all been implicated [2]. 
Generally speaking, discrete changes in phospholipid species in ruffles and macropinocytic cups at various stages of their formation induce coordinated changes in the activities of molecules that regulate actin organization, particularly Rho GTPases [1,[12][13][14]. Rho GTPases are key regulators of actin organization. They function like molecular switches, cycling between inactive GDP-bound and active GTP-bound states [15]. In their GTP-loaded conformation they bind downstream effectors to elicit actin reorganization. Expression of a GTPase-defective, and thus constitutively active, mutant of the Rho GTPase Rac1 induces extensive membrane ruffling and macropinocytosis in fibroblasts [16] and plays a role in the transformation of membrane ruffles into macropinosomes [17]. GTP-bound Rac1 binds to and activates a variety of effectors that regulate lamellipodia formation and membrane ruffling including p21-activated kinase 1 (PAK1) [18,19]. Activated PAK1 stimulates macropinocytosis [20] and the PAK1 target ctBP1/BARS is required for scission of the macropinocytic cup from the plasma membrane [21]. Thus, proper regulation of Rac1 activity is important for driving the cellular changes required for membrane ruffling and macropinocytosis. Diacylglycerol kinases (DGKs) are enzymes that phosphorylate diacylglycerol (DAG) to yield phosphatidic acid (PA). The ten mammalian DGK isozymes differ in structure, patterns of expression, enzymatic properties and subcellular localization, suggesting they modify distinct DAG signaling events and are regulated by separate molecular mechanisms [22,23]. By metabolizing DAG, DGKs simultaneously attenuate the activity of DAG-activated proteins and stimulate the activity of proteins activated by PA [24]. Previously, we showed the zeta isoform (DGKz) acts upstream of Rac1 and contributes to its activation by releasing it from its inhibitor RhoGDI [25]. DGKz exists in a multi-protein signaling complex with Rac1, PAK1 and RhoGDI and the scaffold protein syntrophin. DGKz-derived PA activates PAK1, which then phosphorylates RhoGDI to trigger Rac1 release [25,26]. Collectively, our findings established a mechanism whereby a change in the lipid second messenger PA modulates the amount of active Rac1 and thus contributes to the regulation of the actin cytoskeleton. Here, we demonstrate that growth factor-induced macropinocytosis is defective in fibroblasts lacking DGKz. We show DGKz is associated with membrane ruffles and nascent macropinosomes. In addition, we provide evidence that DGKz, in concert with Rac1, regulates the size of macropinosomes. Finally, our analyses reveal that DGKz is required for ruffling and macropinocytosis induced by constitutively active Rac1. Together, these results suggest a novel role for DGKz in linking lipid signaling to Rac1-dependent macropinosome biogenesis. Defective macropinocytosis in DGKζ-null cells We showed previously that membrane ruffling, a prerequisite for macropinocytosis, is deficient in DGKz-null cells [25]. To investigate potential roles for DGKz in macropinocytosis, we analyzed phase contrast, time-lapse images of nascent macropinosomes forming from membrane ruffles following stimulation with platelet-derived growth factor (PDGF; S1, S2 and S3 Movies). 
To minimize the possibility that potential differences in macropinocytosis between wild type and DGKz-null cells could be due to defects in the process of membrane ruffle formation per se, we only scored macropinosomes that were clearly derived from already-formed membrane ruffles (see Materials and Methods; Fig 1A). In this way, we could distinguish membrane ruffling defects from possible defects in macropinosome formation. In other words, we measured successful transitions from membrane ruffles to macropinosomes. If a membrane ruffle fails to give rise to a macropinosome, then we can conclude that the failure occurred at a point downstream of ruffle formation. Following stimulation with PDGF, wild type cells were significantly more likely to form macropinosomes from ruffles (Fig 1B) and had, on average, more macropinosomes per cell than DGKz-null cells (Fig 1C). Moreover, the mean macropinosome size in wild type cells was significantly larger than in null cells (Fig 1D). A cumulative frequency distribution showed that macropinosomes in null cells did not have areas larger than ~2 μm², whereas those in wild type cells often exceeded 2 μm² (Fig 1E). These observations suggest a defect in PDGF-induced macropinosome formation in fibroblasts lacking DGKz. To demonstrate that the defect in macropinocytosis is due directly to DGKz loss, a hemagglutinin (HA)-tagged version of the wild type protein was reintroduced into null cells by adenoviral infection. Expression of HA-DGKz restored PDGF-induced macropinocytosis to wild type levels (Fig 2A and 2B). In contrast, macropinosomes detected in cells expressing a kinase dead mutant (DGKz kd) were significantly smaller and less numerous (Fig 2A-2C). We also tested a mutant (DGKz M1) that mimics serine phosphorylation of the DGKz MARCKS domain [27]. Compared to the wild type protein, DGKz M1 exhibits increased plasma membrane localization and has greater biological activity in assays of neurite outgrowth [28,29]. Equivalent levels of expression of HA-DGKz M1 in null cells yielded approximately the same percentage of cells containing macropinosomes as wild type DGKz (Fig 2B), but the macropinosomes produced were substantially larger (Fig 2A-2C). Taken together, these results indicate that the loss of DGKz enzymatic activity is the primary cause of the macropinocytosis defect in DGKz-null fibroblasts. Decreased uptake of vaccinia virus in DGKζ-null fibroblasts The observed decrease in macropinocytosis in DGKz-null cells might be explained by a reduced number of macropinosomes or by a reduction in macropinosome size to below the limit of resolution, in which case the number of macropinosomes might actually be increased, but would not be counted in our assay. Alternatively, both the number and size of macropinosomes might be decreased. To begin to distinguish among these possibilities, we sought to employ an independent method to quantify macropinocytosis in wild type and DGKz-null cells. The most common method of quantifying macropinocytosis involves the uptake of the soluble enzyme horseradish peroxidase (HRP) from the extracellular environment and measuring enzyme activity in the cell lysates [30]. However, this technique poses limitations including low sensitivity and non-specific binding of HRP to the cell surface.
Alternatively, the uptake of fluorescent dextran into cells can be measured by fluorimetry, but again, the extent to which the dextran adheres non-specifically to cells poses limitations to accurate quantification, especially using cells with low levels of constitutive macropinocytosis [10,21,31]. To circumvent these experimental limitations, we made use of a cellular assay that measures macropinocytosis based on the uptake of vaccinia virus. Mature virions of vaccinia virus (VV) make exclusive use of macropinocytosis to infect mammalian cells [32]. The expression of viral proteins in the infected cells provides a clear quantitative endpoint assay and functional readout of macropinocytosis. This method offers advantages over traditional assays, including greatly improved sensitivity, which is useful when measuring uptake in cells like mouse embryonic fibroblasts (MEFs) that have low levels of constitutive macropinocytosis. (Figure 1 caption: Values are the mean ± SEM from at least four independent experiments. A single asterisk denotes a significant difference (p < 0.05) and two asterisks a highly significant difference (p < 0.01) between wild type and null cells by Student's t test. (E) Cumulative frequency distribution showing the distribution of macropinosome sizes (defined as the two-dimensional area) in PDGF-stimulated wild type and null cells.) To measure differences in macropinocytosis between wild type and DGKz-null cells, we analyzed the uptake of a VV strain engineered to express GFP in the cytoplasm of infected cells [33]. Importantly, GFP expression is directly proportional to the number of viral genomes in the cell and therefore provides an accurate measure of macropinocytosis. Western blots of VV-infected cells revealed a ~60% decrease in GFP levels in DGKz-null cells compared to wild type cells (Fig 3A and 3B). Reintroducing HA-DGKz wt or HA-DGKz M1, but not DGKz kd, into null cells was able to rescue virus uptake as measured by GFP expression (Fig 3C). Different cell types can internalize VV via several potential mechanisms [34,35], including direct fusion with the plasma membrane [36,37]. To determine the extent that mechanisms other than macropinocytosis contribute to VV-GFP uptake in fibroblasts, we tested several pharmacological inhibitors shown to affect VV uptake by macropinocytosis [32]. In our experiments, amiloride and the Rac1 inhibitor NSC23766 had modest effects on VV-GFP uptake at the concentrations tested; however, a selective PAK1 inhibitor potently blocked uptake (Fig 3D). Since PAK1 is essential for macropinocytosis [20] and VV infection [32], these data suggest macropinocytosis is the main route of VV-GFP entry into fibroblasts under these conditions [32]. Collectively, the results from these VV uptake experiments support the proposition that macropinocytosis is deficient in DGKz-null cells and reaffirm the requirement of DGKz catalytic activity for macropinocytosis. To study the role of DGKz during macropinosome biogenesis, we exploited time-lapse microscopy of living, wild type fibroblasts co-expressing a yellow fluorescent protein (YFP)-DGKz fusion protein and the N-terminal membrane-targeting domain (20 amino acids) of neuromodulin (GAP-43) fused to the N-terminus of mCherry (NMTD-mCherry). This region of neuromodulin, which contains two cysteine residues that undergo palmitoylation in mammalian cells, is sufficient for plasma membrane and Golgi targeting (Liu et al., 1994).
Consistent with this, NMTD-mCherry demarcated membrane ruffles and macropinosome membranes in PDGF-stimulated, wild type MEFs (Fig 4A'-4F'). Three to four minutes after stimulating the cells with PDGF, Z-axis stacks were collected for the green/yellow and red channels every 15-20 seconds from cotransfected cells. YFP-DGKz appeared to be concentrated in membrane ruffles, in macropinocytic cups, and in newly formed macropinosomes, where it colocalized with NMTD -mCherry ( Fig 4A-4C", arrows and S4 Movie). Additionally, YFP-DGKz was concentrated in the cytoplasm surrounding newly formed macropinosomes ( Fig 4D-4D" and 4E-4E"). However, when we examined cells expressing YFP alone, it also appeared to be concentrated around large, newly formed macropinosomes in lamellipodia ( Fig 4F-4F" arrows and S5 Movie), suggesting proteins may non-specifically accumulate there. This likely reflects the fact that many macropinosomes have diameters that are larger than the thickness of the lamellipodium (0.1-0.3 um) and thus distend the plasma membrane, allowing more cytoplasm in that area. Therefore, to determine if YFP-DGKz is specifically recruited to the membrane surrounding macropinosomes, we normalized the intensity of YFP (control) or YFP-DGKz fluorescence signals to the mCherry-NMTD membrane marker, which we reasoned should have constant signal intensity in both conditions. To facilitate comparisons between macropinosomes from different cells and experiments, we restricted our analysis to around the time when macropinosomes first closed. Since time-lapse images were captured at approximately 20 second intervals, the first time point we analyzed was approximately 15-20 seconds after ruffle closure. This was also the time when the YFP-DGKz signal appeared to be the strongest; when newly formed macropinosomes were changing shape, connecting to and merging with neighboring macropinosomes. The duration of YFP-DGKz association with macropinosomes was variable but generally decreased during the motile phase when macropinosomes moved centripetally towards the nucleus (Yoshida et al., 2009). After normalizing the signal intensities and compensating for background fluorescence (see Materials and Methods), individual pixel intensity ratios were calculated for regions immediately surrounding macropinosomes in deconvolved optical slice images. The intensity ratios for YFP and YFP-DGKz were sorted into bins and plotted as probability distributions, which could be modeled by a three parameter log-normal distribution (Fig 5). Pixel ratios greater than 1 indicate the protein is more concentrated than the mCherry-NMTD membrane marker, while values less than 1 indicate a lower concentration. The YFP-DGKz data was skewed to more positive values than the YFP data, suggesting DGKz is more concentrated at and around the macropinosome membrane than YFP alone. Consistent with this, the percentage of pixels in each of the bins above an average ratio of 1.1 was significantly higher for YFP-DGKz than for YFP alone (Fig 5). Taken together, these results suggest DGKz is specifically concentrated at the macropinosome membrane. A phosphomimetic DGKζ mutant potentiates Rac1-induced macropinocytosis A constitutively active Rac1 mutant (Rac1 V12 ) is sufficient to promote macropinocytosis [16,20]. We previously showed DGKz forms a multiprotein signaling complex with Rac1 and PAK1 to regulate Rac1 activation and membrane ruffling [25]. 
To further investigate the relationship of DGKz and Rac1 to macropinocytosis, HA-tagged DGKz was co-expressed with myc-tagged Rac1 V12 in wild type MEFs. As expected, when myc-Rac1 V12 was expressed alone in these cells it induced the formation of macropinosomes (Fig 6A). Quantification of static images of myc-Rac1 V12 -expressing cells revealed the macropinosomes had a mean size of 0.3 ± 0.05 μm² (Fig 6B). A cumulative frequency distribution shows that 99% of the macropinosomes had areas smaller than ~1 μm² (Fig 6C). When expressed alone in MEFs, none of the DGKz constructs we tested induced membrane ruffling or macropinocytosis (not shown), consistent with our previous studies [28]. However, when wild type DGKz was co-expressed with Rac1 V12 (Fig 6A) there was a significant increase in the average macropinosome size to ~0.6 ± 0.09 μm² (Fig 6B) and an increased frequency of larger macropinosomes (Fig 6C). Co-expression of Rac1 V12 with the MARCKS domain phosphomimetic mutant (DGKz M1) led to a further increase in macropinosome size (Fig 6A-6C). In contrast, co-expression with the catalytically inactive mutant (kd) produced macropinosomes of a similar size to those produced by Rac1 V12 alone (Fig 6A-6C). To extend the generality of these findings, we repeated these experiments in C2C12 myoblasts and obtained similar results, namely that DGKz M1 significantly increased macropinosome size (S1 Fig). Western blotting confirmed that all DGKz constructs were expressed at equivalent levels (not shown). Collectively, these results suggest DGKz has a role in determining macropinosome size that may depend on phosphorylation of the DGKz MARCKS domain. (Figure 5 caption: Individual pixel intensity ratios were calculated for regions immediately surrounding the macropinosomes. The intensity ratios for YFP/NMTD-mCherry (gray bars) and YFP-DGKζ/NMTD-mCherry (black bars) were sorted into bins and plotted as probability distributions. The data were modeled by a three-parameter lognormal distribution (red and blue lines, respectively). Pixel ratios greater than 1 indicate the protein is more concentrated than the mCherry-NMTD membrane marker, while values less than 1 indicate a lower concentration. Asterisks indicate a significant difference in the percentages of pixels in each bin with the indicated intensity ratio.) Discussion We showed previously that DGKz-null cells have defects in both peripheral and circular membrane ruffling [25]. In this report, we provide evidence that DGKz has a role in growth factor-induced macropinocytosis and macropinocytosis-dependent vaccinia virus infection; both were attenuated in DGKz-null fibroblasts. The formation of macropinosomes from membrane ruffles is a continuous process but can be divided into a series of distinct morphological stages to facilitate quantitative analysis: (1) formation of an irregular ruffle, (2) transition to a curved (C-shaped) ruffle, (3) closure into a circular ruffle or membrane cup, and (4) cup closure, which marks the separation of the macropinosome from the plasma membrane [34]. Once closed, macropinosomes are dynamic and often extend tubules and merge with other macropinosomes (unpublished observations). This is followed by the motile phase, when a fully formed macropinosome migrates towards the nucleus.
By following the fate of pre-existing membrane ruffles using time-lapse imaging and quantifying the number of ruffles that closed to form macropinosomes, we were able to establish that DGKz-null cells have a defect in macropinosome biogenesis in addition to defective membrane ruffling. In live imaging studies, DGKz was associated with irregular and curved membrane ruffles and continued to be associated with membrane cups and fully formed macropinosomes. While Rac1 localization to macropinosomes was shown to be only slightly greater than on other regions of the plasma membrane, it is selectively activated at these sites after ruffle closure [34]. A recent study found that the product of DGK activity (i.e. PA) was necessary for Rac1 activation and macropinosome formation in macrophages and immature dendritic cells [38]. Our central finding (i.e. DGKz loss reduces the frequency and size of macropinosomes) suggests DGKz provides the PA needed for Rac1 activation during macropinocytosis. In support of this conclusion, the macropinocytosis defect in DGKz-null fibroblasts was rescued by reintroducing catalytically active DGKz, but not a kinase-dead mutant. It is possible also that other DGK isoforms might contribute to constitutive and growth factor-induced macropinocytosis in different cell types. Indeed, similar to DGKz, at least one other DGK isoform, DGKα, mediates growth factor-induced membrane ruffling and Rac1 activation in hepatocytes by regulating Rac1 release from RhoGDI [39,40]. Enlarged macropinosomes caused by expression of the MARCKS domain phosphomimetic mutant DGKz M1 are reminiscent of those formed by a constitutively active Rab5 mutant, which induces the formation of unusually large endosomes by homotypic fusion of early endosomes [36]. However, our live imaging analyses show DGKz is associated with membrane ruffles and nascent macropinosomes, which argues against endosome fusion as the main mechanism for their increased size and instead, suggests DGKz acts at a time when ruffles close to form macropinosomes. Since DGKz M1 is more membrane-associated than the wild type protein [28,37], but only has half the enzymatic activity [41], it might remain associated with the macropinosome membrane longer and delay ruffle closure, giving rise to larger macropinosomes. Alternatively, DGKz M1 may slow macropinosome shrinkage or the rate at which they are metabolized. Additional experiments will be required to distinguish among these possibilities. Diacylglycerol and its non-hydrolyzable analogue phorbol myristate acetate (PMA), potently stimulate membrane ruffling and macropinocytosis [40,41]. Moreover, both DAG and PA, the product of the DGK reaction, accumulate in structures morphologically analogous to macropinocytic cups [12,42]. Exactly how these lipids drive macropinocytosis remains unclear; however, the results presented here provide a plausible mechanistic explanation for how lipid signals can be translated into cytoskeletal changes underlying macropinosome biogenesis. DAG and PMA are potent activators of protein kinase Cα (PKCα), which forms a regulated signaling complex with DGKz [43]. PKCα-mediated phosphorylation of the MARCKS domain dissociates the complex and leads to increased association of DGKz with the plasma membrane [28,37], where it can access DAG and metabolize it to yield PA. DGKz-derived PA activates PAK1, which subsequently phosphorylates RhoGDI, leading to the release and activation of Rac1. 
Since Rac1 activity normally increases around the time of ruffle closure [34] and the PAK1 effector CtBP1/BARS has a role in macropinosome scission [21], it is likely that DGKz also functions during these periods. Both Rac1 and PAK1 have previously been shown to play an essential role in macropinosome formation [16,20]. Therefore, it is likely that the loss of DGKz has significant consequences for downstream signaling events that mediate the formation of macropinosomes from ruffles. Consistent with our previous results showing DGKz directly interacts with both active and inactive Rac1 [29], the findings presented here argue that DGKz and Rac1 function synergistically to regulate the formation and size of macropinosomes upstream and downstream of Rac1 activation. The decreased uptake of VV particles in DGKz-null cells (Fig 3) supports the idea that macropinocytosis is deficient in the absence of DGKz. The large size of VV mature virions necessitates the use of macropinocytosis for cellular entry and infection because pathogens of this size are generally too big for other endocytic mechanisms [32]. VV-GFP uptake was rescued by wild type DGKz, but not by a kinase-dead mutant, again indicating that DGKz catalytic activity is required for macropinocytosis. In addition, VV-GFP uptake was potently blocked by a specific PAK1 inhibitor, suggesting infection does not occur via an alternative mechanism such as membrane fusion [36,37]. Since VV uptake was reduced in the absence of DGKz, manipulating or interfering with DGKz function might lead to novel strategies that reduce or inhibit infection of viruses and bacteria that require macropinocytosis to gain entry into cells. Clinically, modified vaccinia viruses have been adapted for use in oncolytic virus therapy [38]. Our recent finding that increased DGKz levels and Rac1 activity can contribute to the metastatic phenotype of certain colorectal, prostate, and breast cancers [39] raises the intriguing possibility that such tumors might be more sensitive to VV uptake than neighboring normal cells and thereby potentiate oncolytic lysis. Future studies will provide additional insights into precisely how DGKz transduces lipid signals to regulate membrane ruffling/macropinocytosis and the impact this might have on pathogenic infection. Plasmids and constructs Plasmids encoding wild-type (wt) DGKz, a membrane-associated mutant mimicking phosphorylation of the MARCKS domain (DGKz M1 ), and a kinase-dead mutant (DGKz kd ), all with three tandem, N-terminal HA epitope tags, have been previously described (Topham et al, 1998;Hogan et al, 2001). N-terminal myc-tagged Rac1 V12 in pEFmPLINK was a gift from Dr. Andrew Thorburn (University of Colorado, Denver, CO). N-terminal yellow fluorescent protein (YFP)-tagged Rac1 V12 has been previously described ). CFP-DGKz M1 was generated by subcloning DGKz M1 from pcDNA3.1 into pECFP-C1 (Clonetech) using EcoRI and XbaI restriction sites. Cloning and production of adenoviral constructs encoding green fluorescent protein (GFP), DGKz WT , DGKz M1 , or DGKz kd have been described previously (Yakubchyk et al., 2005). A plasmid encoding YFP-DGKz was made by first subcloning DGKz in pCMV-HA [28] into pcDNA3.1 using EcoR I and Not I restriction sites. Then, the resulting plasmid was digested with EcoR I and Xba I enzymes and ligated into pEYFP-C1 (Clonetch, Mountain View, CA) digested with the same enzymes. 
Cell culture, transfection/infection and immunofluorescence microscopy Wild type and DGKz-deficient immortalized mouse embryonic fibroblasts (MEFs) were generated as previously described. All animal experimental studies were approved by the University of Ottawa Animal Care Committee and conformed to the guidelines set forth by the Canadian Council on Animal Care and Canadian Institutes of Health Research. MEFs were cultured under standard growth conditions in Dulbecco's Modified Eagle Medium (DMEM) supplemented with 10% fetal calf serum, 2 mM L-glutamine, 100 U ml-1 penicillin, and 100 U ml-1 streptomycin and grown at 37°C in a humidified incubator with 5% CO2. C2C12 myoblasts were maintained as described previously [42]. Fibroblasts were seeded onto Matrigel-coated coverslips in 24-well dishes at a density of 40,000 cells per well, serum-starved overnight, then stimulated with either PDGF or vehicle (0.14 M HCl and 0.1% bovine serum albumin). Cells were fixed and processed as described previously. For experiments using Rac1 V12, cells were transfected with myc- or YFP-tagged Rac1 V12 constructs for 24 h. In some experiments, HA- or YFP-tagged DGKz, DGKz M1, or DGKz kd constructs were cotransfected with myc-tagged Rac1 V12. Cells were visualized by time-lapse microscopy or were fixed in 4% paraformaldehyde and processed for immunocytochemistry. For rescue experiments, DGKz-null fibroblasts were infected with adenoviruses encoding DGKz constructs at a multiplicity of infection of 100 for 1 h at 37°C. Cells were incubated a further 24 h under standard growth conditions as described previously [25]. Phase contrast live imaging Wild-type and DGKz knockout fibroblasts were seeded on 35 mm culture dishes (MatTek, Ashland, MA) and grown to 70-80% confluence. The cells were serum-starved overnight in serum-free DMEM containing penicillin/streptomycin and glutamine. The next day, the cells were washed twice in pre-warmed, serum-free DMEM then placed in a stage-fixed cell chamber at 5% CO2, 37°C prior to imaging. The cells were stimulated with 25 ng/mL platelet-derived growth factor (PDGF) then filmed for 30 minutes using an Axiovert 200M microscope (Carl Zeiss, Germany) and an EC Plan-Neofluar 40x/1.30 oil objective (for high-resolution images) or a 10x objective (for quantitation of macropinosome size and number). Phase-contrast images were captured every 10 sec using an Axiocam HRm charge-coupled device camera. With phase-contrast light microscopy, macropinosomes are readily observed as large, phase-bright vesicles [43]. Images were recorded using AxioVision software (version 4.6), then exported and processed using Adobe Photoshop. Macropinosome quantification from time-lapse images Macropinosomes were defined as circular, phase-bright organelles that dissociated from circular ruffles or peripheral ruffles in cells stimulated with PDGF. For each cell in a field, successful transitions from membrane ruffles to macropinosomes were quantified. To calculate the percentage of cells with macropinosomes, the number of cells with at least one macropinosome was divided by the total number of cells in the field. To calculate the number of macropinosomes per cell, the total number of observed macropinosomes was divided by the number of cells containing at least one macropinosome. To quantify macropinosome surface area in fibroblasts, cells were imaged at 10x magnification, and the macropinosomes were traced using the "outline" tool in AxioVision. Data were graphed using SigmaPlot 12.5.
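The per-field statistics defined above reduce to a few vector operations. A minimal sketch, assuming counts holds the number of scored macropinosomes for each cell in a field and areas holds the traced macropinosome areas (in μm²) exported from AxioVision:

```r
# Percentage of cells with at least one macropinosome
pct_positive <- 100 * sum(counts >= 1) / length(counts)

# Macropinosomes per cell, computed over cells containing at least one
mp_per_cell <- sum(counts) / sum(counts >= 1)

# Size statistics and the cumulative frequency distribution shown in Fig 1E
mean_area <- mean(areas)
plot(ecdf(areas), xlab = "Macropinosome area (um^2)", ylab = "Cumulative frequency")
```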
To quantify the number and area of Rac1 V12 -induced vesicles in fibroblasts or C2C12 myoblasts, the transfected cells were fixed and stained with monoclonal antibody 9E10 against myc epitope-tagged Rac1. The cells were photographed on a Zeiss Axioskop 2 microscope fitted with a 40x objective using a Zeiss Axiocam digital camera. The captured images were processed using the AxioVision software as follows: vesicles were outlined in white using the circle/ellipse tool. The annotated images were then saved in the (8 bit) tagged image file format (TIFF). The saved images were imported into ImageJ. First, the Set Scale function was used to convert pixels into a known distance with the calibration settings obtained from the microscope. Then the images were processed using the Thresholding tool until only the annotated vesicle outlines were visible. The Analyze Particles function was used to determine the number and area of the vesicles. Vaccinia virus uptake Equivalent numbers of wild type or DGKz-null fibroblasts plated on plastic dishes in serum-free medium were infected with equal volumes (multiplicity of infection = 1) of a vaccinia virus strain engineered to express GFP in the cytoplasm of infected cells (McCart, 2001). The cells were incubated in a humidified chamber at 37°C with 5% CO2 for 6 h to allow for GFP expression and then were lysed and extracted as described (Ard et al., 2012). Equivalent amounts of protein were analyzed by SDS-PAGE and immunoblotting with an anti-GFP antibody, followed by a horseradish peroxidase-conjugated secondary antibody. Bound antibody was detected by chemiluminescence. Digital images of western blots were captured using a Kodak Image Station 440 CF (Rochester, NY). The band intensities were measured by densitometric analysis and normalized to the relative amount of tubulin in each sample. For rescue experiments, DGKz-null cells were infected with adenovirus bearing HA-tagged DGKz wt, DGKz M1, or DGKz kd as described previously (Ard et al., 2012) and were incubated at 37°C with 5% CO2 for 36 hours to allow for protein expression before infecting with VV.
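A hedged sketch of the densitometric normalization just described; the band-intensity vectors and genotype labels are hypothetical stand-ins for values measured from the GFP and tubulin blots:

```r
# Normalize the GFP signal to the tubulin loading control, lane by lane
gfp_norm <- gfp_intensity / tubulin_intensity

# Express uptake relative to the mean wild type signal (a Fig 3B-style readout)
rel_uptake <- gfp_norm / mean(gfp_norm[genotype == "wt"])
mean(rel_uptake[genotype == "null"])   # ~0.4, given the reported ~60% decrease
```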
On the day of imaging, the media was replaced with phenol- and serum-free media supplemented with 25 mM HEPES, pH 7.4. The cells were placed into the microscope environmental chamber and allowed to acclimatize for 30 minutes before stimulating with 50 ng/ml PDGF-BB (Sigma-Aldrich, St. Louis, MO). Imaging commenced four minutes after stimulation to allow for re-centering of cotransfected cells within the field of view at each location. For each cell, a stack of five images (512 x 512 pixels, 4.673 pixels/µm) with 0.75 µm separation was acquired using the Sedat quad filter set: FITC/GFP (485 ± 20 nm excitation, 535 ± 30 nm emission) and TRITC (560 ± 25 nm excitation, 607 ± 36 nm emission) (Chroma Technology Corp., Bellows Falls, VT). Images were captured every 15-20 seconds for approximately 20 minutes. Typical exposure times were 0.25-0.4 seconds. Deconvolution of the images was performed using the SoftWorX software, and the files were imported into ImageJ software (version 1.47f) using the Bio-Formats plug-in (Linkert, 2010). Variation in the expression of the tagged constructs was compensated for by splitting the colour channels of each image set into separate files and normalizing each using the enhance contrast option with 0.1% saturation. The divide operation of the image calculator tool set was then used to produce an image where each pixel represents the ratio of YFP to mCherry signal. To measure YFP-DGKz fluorescence on macropinosome membranes, the perimeter of individual macropinosomes was selected using the Freehand Selection Tool in ImageJ. The interior of the macropinosome was subtracted from each selection, then the intensity of each pixel within the selection area was recorded in a text file using a macro written with ImageJ. To subtract the background fluorescence, the pixel intensity values were normalized to the average of the mean gray values from three separate regions surrounding each macropinosome.
S4 Movie. Time-lapse observation of a wild-type MEF expressing YFP-DGKz and NMTD-mCherry. Wild-type fibroblasts were transiently transfected with YFP-DGKz (green) and NMTD-mCherry (red) and were visualized 24 h later by time-lapse fluorescence microscopy after stimulation with 50 ng/ml PDGF-BB. One of five deconvolved optical sections is shown. Scale bar = 10 µm. (AVI)
S5 Movie. Time-lapse observation of a wild-type MEF expressing YFP and NMTD-mCherry. Wild-type fibroblasts were transiently transfected with YFP (green) and NMTD-mCherry (red) and were visualized 24 h later by time-lapse fluorescence microscopy after stimulation with 50 ng/ml PDGF-BB. One of five deconvolved optical sections is shown. Scale bar = 10 µm. (AVI)
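Returning to the ratio-imaging procedure in the fluorescence quantification section above, the following NumPy sketch outlines the channel-normalization, divide, and background-correction steps. The synthetic arrays, the percentile-based rescale (one reading of ImageJ's enhance contrast with 0.1% saturation), and the background regions are all assumptions of this sketch, not the exact ImageJ implementation.

    # Sketch of the YFP/mCherry ratio-image computation described above,
    # assuming the deconvolved channels are available as 2-D NumPy arrays.
    import numpy as np

    rng = np.random.default_rng(0)
    yfp = rng.random((512, 512))      # stand-in for the YFP-DGKz channel
    mcherry = rng.random((512, 512))  # stand-in for the NMTD-mCherry channel

    def enhance_contrast(chan, saturation=0.1):
        # clip the brightest `saturation` percent of pixels, rescale to [0, 1]
        lo, hi = np.percentile(chan, (0.0, 100.0 - saturation))
        return np.clip((chan - lo) / (hi - lo), 0.0, 1.0)

    # Pixel-wise ratio of YFP to mCherry signal (the ImageJ "divide" operation)
    eps = 1e-6  # avoid division by zero in dark pixels
    ratio = enhance_contrast(yfp) / (enhance_contrast(mcherry) + eps)

    # Background correction as above: normalize to the mean gray value of
    # regions surrounding the macropinosome (corner patches used here)
    background = float(np.mean([ratio[:32, :32].mean(),
                                ratio[:32, -32:].mean(),
                                ratio[-32:, :32].mean()]))
    corrected = ratio / background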
Nitrogen- and Sulfur-Codoped Strong Green Fluorescent Carbon Dots for the Highly Specific Quantification of Quercetin in Food Samples
Carbon dots (CDs) doped with heteroatoms have garnered significant interest due to their chemically modifiable luminescence properties. Herein, nitrogen- and sulfur-codoped carbon dots (NS-CDs) were successfully prepared using p-phenylenediamine and thioacetamide via a facile process. The as-developed NS-CDs had high photostability against photobleaching, good water dispersibility, and excitation-independent spectral emission properties due to the abundant amino and sulfur functional groups on their surface. The wine-red-colored NS-CDs exhibited strong green emission with a large Stokes shift of up to 125 nm upon the excitation wavelength of 375 nm, with a high quantum yield (QY) of 28%. The novel NS-CDs revealed excellent sensitivity for quercetin (QT) detection via the fluorescence quenching effect, with a low detection limit of 17.3 nM within the linear range of 0–29.7 μM. The fluorescence was quenched only when QT was brought near the NS-CDs. This QT-induced quenching occurred through the strong inner filter effect (IFE) and the complex bound state formed between the ground-state QT and excited-state NS-CDs. The quenching-based detection strategies also demonstrated good specificity for QT over various interferents (phenols, biomolecules, amino acids, metal ions, and flavonoids). Moreover, this approach could be effectively applied to the quantitative detection of QT (with good sensing recovery) in real food samples such as red wine and onion samples. The present work, consequently, suggests that NS-CDs may open the door to the sensitive and specific detection of QT in food samples in a cost-effective and straightforward manner.
Introduction
Flavonoids are secondary metabolites that belong to the polyphenolic compounds. Quercetin (3,3′,4′,5,7-pentahydroxyflavone; QT) is a member of the flavonol subclass in the flavonoid family and is naturally found in a wide range of vegetables, fruits, and beverages. It abundantly exists in capers, onions, broccoli, apples, cherries, and berries, as well as in red wine, cocoa, and tea [1]. QT has been the subject of interest owing to its in vitro antioxidant properties for scavenging free radicals, which are favorable for human health and disease prevention [2,3]. Moreover, the catechol ring and OH groups in its chemical structure qualify QT to partake in potent biological activities, including antiviral, antiallergic, anti-inflammatory, antimutagenic, anticarcinogenic, and cardioprotective activities [4,5]. However, high dietary consumption of QT may lead to headaches, renal failure, and poor glutathione S-transferase activity [6,7]. Therefore, the detection of QT is highly crucial for pharmaceutical chemistry, biochemistry, and clinical medicine.
Traditional methods available for the detection of QT, including spectrophotometry, electrochemical detection, capillary electrophoresis, and high-performance liquid chromatography (HPLC), are restricted in their application due to the limitations resulting from sophisticated instrumentation, operational difficulty, sluggish real-time responsivity, and poor sensitivity [8][9][10][11]. On the other hand, the fluorescence-based analytical technique has generated much attention for the quantitative detection of various organic and inorganic analytes because of its benefits over other approaches, including its rapid responsivity, superior sensitivity, high specificity, inherent simplicity, and low operating costs [12][13][14]. Fluorophores usually include metal nanoclusters, semiconductor quantum dots (QDs), and organic dyes [15][16][17]. Despite their high quantum yield and chemical stability, these materials are usually very toxic and complex to synthesize [18].
Zero-dimensional (0D) carbon-based nanomaterials, known as carbon quantum dots or carbon dots (CDs), have been gaining popularity as a novel class of sensing probes in fluorescence-based analytical techniques [19]. CDs, quasi-spherical nanomaterials with size confinement below 10 nm, were accidentally discovered in 2004 during the purification process of single-walled carbon nanotubes (CNTs) [20]. They have many advantages, such as tunable spectral luminescence, reduced overlap between excitation and emission spectra, good biocompatibility, low toxicity, high chemical stability, excellent photostability, good water-dispersibility, and the availability of abundant sources [21]. These special qualities lead them to be exploited in biomedicine, catalysis, optoelectronics, and sensing [22]. Still, most of the fabricated CDs have a lower quantum yield (QY) than conventional QDs, which limits their practical use [23,24].
The fluorescence characteristics of CDs arise from the quantum size effects, carbon-core states, conjugated π-domains, molecule states, and surface states [25,26]. It is highly expected that surface states are the main source of fluorescence, the so-called fluorescence centers, created by the synergistic hybridization of the carbon core and associated chemical groups [27]. Very recently, heteroatom doping has become a more convenient strategy for engineering the surface states and electron distribution in CDs to greatly enhance their fluorescence properties [28]. The introduction of heteroatoms would create more surface states for electron trapping, which facilitates improved radiative recombination that results in a high QY, excitation-independent emission, and emission redshift [27,29]. Among the various heteroatoms for doping, nitrogen is a popular dopant because its atomic radius (0.075 nm) is comparable to that of carbon (0.077 nm). Similarly, the electronegativity of sulfur (2.58) is nearly equivalent to that of carbon (2.55), which enables easier electron transition [30,31]. Sulfur also significantly increases the density of graphitic nitrogen, which results in a redshift in the emission spectrum [32]. Hence, the codoping of nitrogen (N) and sulfur (S) atoms into CDs allows us to expect the synergistic effect of the two individual dopants.
We fabricated N- and S-codoped CDs (NS-CDs) using two simple precursors, i.e., p-phenylenediamine (p-PD) and thioacetamide, through a one-step solvothermal route with ethanol as a solvent. The p-PD has two -NH2 groups, while thioacetamide has C-NH2 and C=S groups. In contrast to previously reported CDs for QT sensing, the current NS-CDs were prepared under mild reaction conditions without any acids, complex molecules, or metal salts. The as-synthesized NS-CDs exhibited a large Stokes shift (~125 nm) with green emission for the excitation at 375 nm. However, the fluorescence was effectively quenched by QT through the inner filter effect (IFE), with a noticeable color change from green to a colorless solution. The quenching-based sensing strategy offered the lowest detection limit of 17.3 nM within the linear range of 0–29.7 µM, with good specificity against various interfering elements. The present work enabled the quantification of QT in food samples, which would be beneficial for maintaining good health and preventing chronic diseases. A diagram of the NS-CDs' synthesis and QT detection is given in Scheme 1 (Synthesis of NS-CDs and detection of QT by fluorescence quenching).
Preparation of NS-CDs
Typically, 0.108 g of p-PD was weighed and dissolved in 5 mL of ethanol to prepare a 0.1 M solution of p-PD. With this solution, 0.15 M thioacetamide (0.112 g) dissolved in 5 mL of ethanol was gently mixed on a magnetic stirrer. The solution (net volume of 10 mL) was ultrasonicated for 5 min and then transferred to a 50 mL Teflon liner covered by a stainless steel vessel. The reactor was permitted to undergo a solvothermal reaction for 10 h inside an oven (Daihan Scientific, Wonju, Republic of Korea) at 180 °C. After the reaction time was completed, the solution was spontaneously cooled to room temperature and centrifuged using a centrifuge (Combi-514R, Hanil Co., Ltd., Seoul, Republic of Korea) at 10,000 rpm for 10 min. Subsequently, the supernatant was lyophilized at a temperature of −80 °C using a freeze-dryer (FD8508, IlShinBioBase, Seoul, Republic of Korea) and stored at 4 °C for further use. The amount of NS-CDs recovered at the end of the preparation was 0.084 g.
Characterization of NS-CDs
Transmission electron microscopy (TEM) images were collected using a transmission electron microscope (Tecnai G2 F30, FEI, Hillsboro, Oregon, USA) at an operating voltage of 300 kV. The chemical compositions were examined using an X-ray photoelectron spectroscope with a monochromatic Al Kα source (VG MultiLab/2000, Thermo Scientific, Waltham, MA, USA). Fourier-transform infrared (FT-IR) spectra were measured using an FT-IR spectrometer (FT-IR/4600, JASCO, Tokyo, Japan). UV-vis absorption was studied on a spectrophotometer (V/770, JASCO, Tokyo, Japan). Photoluminescence spectra were recorded on a fluorescence spectrophotometer (FS/2, SCINCO, Seoul, Republic of Korea) at room temperature using a 1 cm path-length quartz cell. The slit width was set to 5 nm for both excitation and emission. The relative QY (ϕ) was determined by measuring the emission intensity with a spectrofluorometer (Fluorolog3, Horiba, Tokyo, Japan). Fluorescence lifetime measurements were conducted on the Fluorolog3 (Horiba, Tokyo, Japan) with a time-correlated single-photon counting instrument equipped with a 390 nm laser.
Quantum Yield (ϕ) Measurement
The relative QY of NS-CDs, i.e., ϕ_CD, was determined by a standard method using rhodamine B as a reference (ϕ_R = 31% in water), with the equation given by [33]:

    ϕ_CD = ϕ_R × (I_CD / I_R) × (A_R / A_CD) × (n_CD² / n_R²)

where I is the integrated emission intensity. The subscripts "CD" and "R" represent the NS-CDs and rhodamine B, respectively. A is the absorbance and n is the refractive index of the solvent. Both NS-CDs and rhodamine B were dissolved in water. Hence, the values of n_CD and n_R were taken as 1.33. The absorbance of fluorophore solutions was maintained below 0.05 to minimize the reabsorption of emitted light.
Fluorescence Sensing of QT
To determine the fluorescence response of NS-CDs towards QT, the NS-CDs solution (500 µL) was dispersed in phosphate buffer (10 mM, pH = 7). Then, the final volume of the solution was adjusted to 3 mL by adding water and used as a fluorescence probe. After 2 min of incubation time, the solution was excited at 375 nm, and the fluorescence intensity (F0) of the emission at 500 nm was recorded in the wavelength range of 470-700 nm. Successively, various quantities of quercetin (3.3-39.6 µL) were introduced into the NS-CDs solution, and the fluorescence intensities were recorded. F0 and F are the fluorescence intensities of the NS-CDs in the absence and presence of QT, respectively. To study the specificity of the NS-CDs towards QT, common potential interferents (e.g., phenols, biomolecules, amino acids, metal ions, and flavonoids) were introduced into the NS-CDs solution under the same experimental conditions. The measurements were repeated five times (n = 5).
Recovery Test in Food Samples
Commercial red wine and onion were bought from a local supermarket (Emart, Seongnam, South Korea) for the purpose of real sample analysis of the NS-CDs fluorescence probe. The red wine sample was used directly as purchased. To prepare the onion sample, onion paste was made, centrifuged at 10,000 rpm for 10 min, filtered through a 0.45 µm filter membrane to remove the impurities, and then stored at 4 °C for analysis. The recovery study was performed by adding different concentrations of standard QT (0, 5.0, 10.0, 15.0, and 30.0 µM) to the red wine and onion samples in an aqueous medium. The pH value of all sample solutions was regulated to 7. The recovery (%) was estimated from the following equation [34]:

    Recovery (%) = (C_found − C_initial) / C_added × 100

where C_found is the concentration measured in the spiked sample, C_initial is the concentration in the unspiked sample, and C_added is the spiked concentration.
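A minimal Python sketch of the recovery and RSD arithmetic used in this spiking protocol is given below; the measured concentrations are hypothetical placeholders, not values from the results tables.

    # Sketch of the recovery (%) and RSD computation for a spiked sample.
    import statistics

    c_initial = 2.1                   # uM found in the unspiked sample (hypothetical)
    c_added = 10.0                    # uM of standard QT spiked in
    replicates = [12.0, 11.8, 12.3]   # uM measured in the spiked sample

    c_found = statistics.mean(replicates)
    recovery_pct = (c_found - c_initial) / c_added * 100
    rsd_pct = statistics.stdev(replicates) / c_found * 100

    print(f"Recovery = {recovery_pct:.2f}%, RSD = {rsd_pct:.2f}%")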
Analysis of Morphology and Chemical Compositions
The morphology and size distribution of the NS-CDs were examined using TEM and HR-TEM. Figure 1a displays a TEM image of NS-CDs that are spherical with uniform dispersibility. The particles are distributed between 1.1 nm and 3.7 nm, with an average size of 2.2 ± 0.41 nm (inset of Figure 1a). The average particle size and standard deviation were calculated from the histogram plotted by measuring the diameter of 65 particles using ImageJ 1.41 software. As shown in Figure 1b, the synthesized NS-CDs exhibited good crystallinity, with an interplanar spacing of 0.19 nm (inset of Figure 1b), which coincides with the (102) diffraction facets of graphitic carbon [35].
The functional groups on the surface of the NS-CDs were studied through FT-IR spectroscopy. In Figure S1, the peaks at 3346 cm⁻¹ and 2972 cm⁻¹ can be linked to the stretching vibrations of the N-H and C-H groups, respectively. The characteristic peaks centered at 1743 cm⁻¹ and 1653 cm⁻¹ correspond to the stretching vibrations of the C=C group. The peak located at 1382 cm⁻¹ can be ascribed to the deformation vibrations of the O-H group. The peaks at 1084 cm⁻¹ and 1045 cm⁻¹ are assigned to the stretching vibrations of the C=S/C-O and C-N groups, respectively. The peaks at 833 cm⁻¹ and 630 cm⁻¹ are associated with the deformation vibrations and rocking vibrations of the C-H group, respectively [36][37][38]. The FT-IR spectral results confirm the existence of amino- and sulfur-containing groups on the NS-CDs' surface.
Moreover, the surface chemical states and elemental compositions of the NS-CDs were revealed by XPS analysis. In Figure S2, the XPS survey scan discloses four peaks around 163.93 eV, 285.16 eV, 400.66 eV, and 531.45 eV, corresponding to S2p, C1s, N1s, and O1s, respectively. This indicates that the N and S atoms were successfully doped into the CDs. Particularly, the deconvoluted C1s spectrum in Figure 2a can be resolved into three different peaks at 284.63 eV, 286.05 eV, and 288.32 eV, which can be attributed to C-C/C=C, C-N/C-O, and C=O, respectively [39]. In the high-resolution N1s spectrum (Figure 2b), the two peaks located at 399.32 eV and 400.76 eV are due to the amino-N and pyrrolic-N groups, respectively [40]. The high-resolution O1s spectrum (Figure 2c) exhibits two peaks at 531.14 eV and 532.79 eV, which are assigned to the C=O and C-O-C/C-OH groups, respectively. The high-resolution S2p spectrum (Figure 2d) consists of two characteristic peaks at 163.33 eV and 164.61 eV, arising from the 2p3/2 and 2p1/2 positions in the C-S-C covalent bonds due to the spin-orbit coupling, respectively. Another peak located at 167.96 eV can be attributed to the presence of -C-SOx- (x = 2) species [39,41]. The XPS analysis results are in good agreement with the FT-IR measurements, verifying that the synthesized NS-CDs have rich nitrogen and sulfur groups that qualify the NS-CDs with good luminescence properties.
Optical Properties
The optical properties of the NS-CDs were investigated using UV-vis absorption and fluorescence spectra. In Figure 3a, a wide absorption region around 300 nm arises from the π-π* transitions of the aromatic sp² domains [42]. Another absorption peak at 377 nm can be ascribed to the trapping of excited-state energy of the surface states contributed by the functional groups connected to the surface of the NS-CDs [43]. This peak might be consistent with the optimal excitation peak for NS-CDs. As shown in the inset of Figure 3a, the aqueous solution of NS-CDs was a wine-red color in visible light and displayed a strong green fluorescence under UV illumination (λ = 365 nm), illustrating the excellent luminescence properties of NS-CDs with good dispersibility. The photostability of the NS-CDs was verified at different time intervals (0, 20, 40, 60, 80, 100, and 120 min) by illuminating the NS-CDs with UV light continuously for up to 120 min. As displayed in Figure 3b, the fluorescence intensities were not significantly different before and after the UV illumination. This is further detailed by the bar chart in Figure 3c, showing that the NS-CDs have excellent photostability against photobleaching.
Figure 3d displays the fluorescence emission spectra of the NS-CDs for different excitation wavelengths ranging between 350 nm and 385 nm. The position of the emission peak was not shifted with respect to the excitation wavelength, and the strongest emission peak appeared at 500 nm for the excitation at 375 nm, which is well consistent with the UV-vis absorption spectrum. This excitation-independent emission behavior arises from the homogeneous surface states' emissive trap sites, and it could also be useful in condensing the interfering effect of autofluorescence during the analyte detection [44]. Hence, the excitation and emission wavelengths were set at 375 nm and 500 nm, respectively, to record the fluorescence intensity. The Stokes shift was about 125 nm, indicating the potential of NS-CDs for analytical application. Moreover, the QY of the NS-CDs was determined under 375 nm excitation with reference to rhodamine B, and a high QY of 28% was achieved. This is because the N-S codoping increases the degree of conjugated π-domains or makes it easier for electrons to be trapped by the newly developed surface states, thereby promoting a high yield of radiative recombination [29].
Optimized Conditions
The parameters that could affect the performance of the NS-CDs fluorescence probe in the detection of QT were optimized. First, to ascertain the kinetic response of the fluorescence probe, the incubation time was monitored for 30 min. The fluorescence response of the NS-CDs was checked every 2 min following the addition of QT (10 µM).
As shown in Figure S3, QT could completely react with the NS-CDs within 2 min, which resulted in a sharp decrease in fluorescence intensity. Following this time, the fluorescence intensity was almost stable, suggesting that 2 min would be the ideal incubation time for the sensing studies.
Next, the effect of pH (3.0-9.0) on the fluorescence intensity of the NS-CDs was investigated. As displayed in Figure S4, it could be observed that the fluorescence intensity gradually increased in the pH range of 3.0-7.0. Afterward, the fluorescence intensity did not remarkably change up to the pH value of 9.0. As the highest fluorescence intensity of the NS-CDs was attained at pH ~7.0, this value was selected as the optimal pH for QT detection.
Sensitive Detection of QT
As the developed NS-CDs probe exhibited excellent fluorescence properties, the performance of the fluorescence probe in the quantitative detection of QT was examined by carrying out titration experiments under optimal conditions. For the excitation at 375 nm, the emission maximum of the NS-CDs was at exactly 500 nm. As shown in Figure 4a, the fluorescence intensity was sensitive to QT and was systematically reduced with the increase in the QT concentration (0-39.6 µM), signifying a typical concentration-dependent behavior of the probe. Upon increasing the concentration to 39.6 µM, the fluorescence of the NS-CDs was almost fully quenched (by 97%), signifying the competence of the proposed fluorescence probe. As shown in Figure 4b, the fluorescence intensity of the NS-CDs has a good linear relationship with the concentration of QT in the range between 0 and 29.7 µM. This linear portion of the plot can be fitted to the equation F = −1666[QT] + 47,416 (R² = 0.9816), where F is the fluorescence intensity of the NS-CDs in the presence of QT, F0 denotes the intensity without QT, and [QT] is the quencher's concentration. The LOD was determined from the equation LOD = 3σ/m, where σ is the standard deviation obtained from the blank measurement (n = 5) and m is the slope acquired from the linear plot [45]. The value of the LOD was 17.3 nM. In terms of the linear detection range and LOD, a comparison of the current fluorescence probe with previously reported probes is given in Table 1. Obviously, the LOD of the present approach is significantly lower than in other reported works, which reveals that NS-CDs can perform well in practical fluorescence-based analytical applications.
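The LOD = 3σ/m estimate can be reproduced in a few lines of Python. In the sketch below, the five blank readings are illustrative placeholders chosen only to show the arithmetic, and the slope magnitude is taken from the calibration equation above.

    # Sketch of the LOD = 3*sigma/m estimate: sigma from five blank
    # measurements and m from the slope of the calibration line.
    import statistics

    blank_readings = [47420, 47435, 47428, 47410, 47422]  # n = 5, hypothetical
    sigma = statistics.stdev(blank_readings)

    m = 1666  # |slope| of the calibration line, intensity units per uM
    lod_um = 3 * sigma / m
    print(f"LOD ~= {lod_um * 1000:.1f} nM")  # convert uM to nM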
Fluorescence Quenching Mechanism
Of the various types of quenching mechanisms, Förster resonance energy transfer (FRET) ensues when the emission spectrum of the fluorophore overlaps the absorption spectrum of the quencher molecule, as well as if the distance between them is less than 10 nm [53]. In the case of the IFE, a spectral overlap occurs between the excitation and/or emission spectrum of the fluorophore and the absorption spectrum of the quencher molecule [54]. Also, in the presence of a quencher, the fluorescence lifetime of the fluorophore decreases in the FRET process, while it remains constant in the IFE process [55]. So, in order to reveal the quenching mechanism, the spectra of QT and NS-CDs were studied first. In Figure 5a, the absorption band of QT at 370 nm overlaps with the excitation band of the NS-CDs at 375 nm, suggesting that the QT-induced fluorescence quenching effect may be due to the presence of an IFE or FRET. To elucidate this further, the fluorescence lifetimes of the NS-CDs without QT (τ0) and with QT (τ1) were measured. The data were fitted using a triple exponential model and are shown in Figure 5b. The values were τ0 = 8.73 ns and τ1 = 8.54 ns for the bare NS-CDs and the QT-added NS-CDs, respectively. As the lifetime values were almost the same, the possibility of the FRET mechanism could be ruled out. So, the IFE could be the main mechanism for the fluorescence quenching of NS-CDs. Moreover, the quenching action of QT on the fluorescence response of NS-CDs can be classified into static and dynamic
quenching, which can be evaluated by following the standard Stern-Volmer equation [56,57]:

    F0/F = 1 + K_SV[QT] = 1 + k_q·τ0·[QT]

where F0 and F are the fluorescence intensities of the fluorophore without and with the quencher, respectively, k_q is the bimolecular quenching constant, τ0 is the lifetime of the fluorophore without the quencher, [QT] is the quencher's concentration, and K_SV is the Stern-Volmer constant. Static quenching is usually caused by means of the non-fluorescent ground-state complex formation between the fluorophore and the quencher molecule. In contrast, dynamic quenching occurs due to collisions between the above molecular systems. It is commonly known that quenching action can be either static or dynamic if the Stern-Volmer plot displays a linear relationship; the coexistence of both static and dynamic quenching is evidenced by a nonlinear upward curve in the plot [58,59]. As shown in Figure S5, the Stern-Volmer plot establishes good linearity in the concentration range of 3.3-16.5 µM, with a regression coefficient (R²) of 0.9799. From the slope of the linear plot, K_SV was calculated to be 1.387 × 10⁵ M⁻¹. Substituting the values of K_SV and τ0 into the Stern-Volmer equation, the quenching constant (k_q) was determined to be 1.588 × 10¹² M⁻¹ s⁻¹. The obtained k_q value was much greater than the dynamic quenching constant (1.0 × 10¹⁰ M⁻¹ s⁻¹), representing the static quenching process by the ground-state complex (non-fluorescent NS-CDs-QT complex) formation. This can be verified based on the theory of "hard and soft acids and bases (HSAB)". According to this model, the highly polarizable donor atoms (i.e., N, S heteroatoms) on the NS-CDs' surface belong to the soft bases. Due to the hydroxyl groups attached to the aromatic rings in phenol, QT belongs to the soft acids. Consequently, an NS-CDs-QT complex might have formed as a result of the electrostatic interaction between the NS-CDs and QT, which may restrict the transfer of non-radiative electrons and cause the fluorescence intensity to decrease [49,60]. Overall, the NS-CDs showed good sensitivity and high specificity for QT detection due to the occurrence of a strong IFE and a remarkable static quenching process.
Specificity of QT Detection
The specificity of the NS-CDs fluorescence probe for QT was appraised in ultrapure water based on the fluorescence change upon excitation at 375 nm. Various chemical compounds, such as phenols, biomolecules, amino acids, metal ions, and flavonoids, acted as potential interferents against QT. Hydroquinone (HQ), resorcinol (Res), catechol (Cat), serotonin (Ser), dopamine (Dop), bovine serum albumin (BSA), cysteine (Cys), methionine (Met), lysine (Lys), Fe³⁺, Cu²⁺, Mg²⁺, kaempferol (Kae), myricetin (Mye), galangin (Gag), morin (Mor), myricitrin (Myi), gallic acid (Gal), and rutin (Rut) were taken fivefold with NS-CDs, and the corresponding fluorescence intensity was recorded. As displayed in Figure 6, QT quenched the fluorescence intensity of NS-CDs effectively compared to the same concentration of other interferents. The poor quenching efficiency of some interferents for the fluorescence response of NS-CDs might be due to the minimal overlap between the excitation or fluorescence spectrum of NS-CDs and the absorption spectrum of the interferents, leading to a weak IFE.
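A short Python sketch of this Stern-Volmer analysis is given below; the titration points are synthetic stand-ins generated around the reported K_SV, and τ0 is the lifetime quoted above, so the printed values illustrate the arithmetic rather than reproduce the paper's raw data.

    # Sketch of the Stern-Volmer fit: F0/F = 1 + K_SV*[QT] over the linear
    # range, with k_q = K_SV / tau_0 derived from the slope.
    import numpy as np

    qt_um = np.array([3.3, 6.6, 9.9, 13.2, 16.5])  # [QT] in uM
    f0_over_f = 1 + 0.1387 * qt_um + np.random.default_rng(1).normal(0, 0.01, 5)

    # Linear least-squares fit; slope is per uM, so convert to per M
    slope, intercept = np.polyfit(qt_um, f0_over_f, 1)
    k_sv = slope * 1e6           # M^-1
    tau0 = 8.73e-9               # fluorescence lifetime without quencher (s)
    k_q = k_sv / tau0            # bimolecular quenching constant, M^-1 s^-1

    print(f"K_SV = {k_sv:.3e} M^-1, k_q = {k_q:.3e} M^-1 s^-1")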
Detection of QT in Food Samples
To determine the competence of the NS-CDs fluorescence probe, different QT concentrations (0, 5.0, 10.0, 15.0, and 30.0 µM) were added to red wine and onion samples, and the assessment was carried out by the standard addition method. After spiking the QT in each sample, the recovery was performed by measuring the fluorescence spectrum. The obtained results are listed in Table 2 with the relative standard deviation (RSD), indicating that the NS-CDs probe possesses good accuracy and high specificity to QT in food samples. Moreover, the recovery (93.87-102.27%) and RSD (1.08-2.86%) values are within the acceptable ranges. These results clearly suggest that the proposed NS-CD-based fluorescence probe offers significant value for QT detection in food samples.
Conclusions
Highly green fluorescent NS-CDs were prepared through a solvothermal method, using p-PD and thioacetamide as the precursors. The NS-CDs showed emission at 500 nm under the excitation at 375 nm. The excitation-independent emission and superior QY (28%) of the NS-CDs were achieved due to the uniform distribution of surface states by means of the amino and sulfur groups attached to the NS-CDs' surface. Fluorescence quenching of the NS-CDs, caused by both a QT-induced IFE and ground-state complex formation, was used for the quantitative detection of QT. The LOD of 17.3 nM was achieved, reflecting very high sensitivity together with remarkable specificity to QT against various potential interferents. The NS-CD-based detection strategy also demonstrated the successful analysis of QT in red wine and onion samples.
Scheme 1. Synthesis of NS-CDs and detection of QT by fluorescence quenching.
Figure 1. (a) TEM photograph of NS-CDs. Inset: histogram of particle size distribution. (b) HR-TEM image of a single NS-CD. Inset: the corresponding lattice fringes.
Figure 3. (a) UV-vis spectrum of NS-CDs. Inset: photograph of NS-CDs in visible light. (b) Fluorescence emission spectra of NS-CDs at different times of UV light exposure (λ = 365 nm). Inset: photographs of NS-CDs under UV light for (i) 0 min and (ii) 120 min. (c) Bar chart showing the photostability of NS-CDs in different time intervals under UV light. (d) Changes in the fluorescence emission intensity of NS-CDs at different excitation wavelengths ranging from 350 nm to 385 nm.
Figure 5. (a) Spectral overlap between absorption of QT and excitation of NS-CDs. (b) Fluorescence lifetime decay of NS-CDs without QT and with QT (40 µM).
Figure 6. Specificity of NS-CDs to QT over potential interferents.
Table 1. Comparison of NS-CDs with various fluorescence probes for QT detection.
Table 2. Recovery results of food samples spiked with QT at different standard levels.
Infrared thermography: A potential noninvasive tool to monitor udder health status in dairy cows
The animal husbandry and livestock sectors play a major role in the rural economy, especially for small and marginal farmers. India has the largest livestock population in the world and ranks first in milk production. Mastitis is the most common and expensive infectious disease in dairy cattle. The global economic losses per year due to mastitis amount to USD 35 billion, and those for the Indian dairy industry to ₹6000 crores per year. Early detection of mastitis is very important to reduce the economic loss to the dairy farmers and dairy industry. Automated methods for early and reliable detection of mastitis are currently in focus under precision dairying. Skin surface temperature is an important indicator for the diagnosis of cows' illnesses and for the estimation of their physiological status. Infrared thermography (IRT) is a simple, effective, on-site, and noninvasive method that detects surface heat, which is emitted as infrared radiation, and generates pictorial images without causing radiation exposure. In human and bovine medicine, IRT is used as a diagnostic tool for assessment of normal and physiological status.
Introduction
India ranks first in the world's total milk production, the total milk production in the country being 146.3 million tons in 2014-2015 [1]. Mastitis ranks first among the diseases of dairy cows in prevalence and incidence rate, and it causes severe economic losses to dairy farmers. Mastitis is the inflammation of udder tissue causing pathological changes in udder parenchyma and is characterized by physical, chemical, and microbiological changes in milk. The losses are due to loss of milk production (temporary or permanent), poor milk quality, discarding of milk from affected animals, and reduced productive life or culling of the cow. Delay in the detection of subclinical mastitis and the lack of appropriate and accurate techniques contribute to the higher incidence of clinical mastitis. The loss due to subclinical mastitis is higher than that due to clinical mastitis, and milk yield loss due to mastitis ranges from 100 to 500 kg/cow per lactation [2]. Several diagnostic tests exist for the detection of mastitis, viz., milk color, pH test, electrical conductivity (EC), California Mastitis Test (CMT), somatic cell count (SCC), culture test, biomarkers, proteomic techniques, and immunoassay. A biosensor system that analyzes lactose and EC has been claimed to have a sensitivity of >90% for identifying quarters with ≥100,000 cells/mL [3]. These techniques are subjective, laborious, and not adequately precise for detection of early signs of the disease. Automated methods for early and reliable detection of mastitis are currently in focus. Infrared thermography (IRT) is a simple, effective, on-site, and noninvasive method that detects surface heat, which is emitted as infrared radiation, and generates pictorial images without causing radiation exposure. In bovine medicine, IRT is used for early detection of subclinical mastitis [4] (Table-1), heat detection and prediction of ovulation in cows [5,6], detection and assessment of lameness [7,8], assessment of animal welfare, and feed utilization efficiency [9]. This review presents a more comprehensive understanding of the potential application of the IRT technique to monitor udder health status and early detection of mastitis in dairy animals.
Udder Health and its Importance
Mastitis is categorized into contagious mastitis and environmental mastitis. Contagious mastitis is caused by bacteria, and it can be divided into clinical mastitis, subclinical mastitis, and chronic mastitis. Clinical mastitis (peracute mastitis, acute mastitis, and subacute mastitis) is characterized by the presence of gross inflammatory signs (redness, heat, swelling, pain, and loss of function). Subclinical mastitis is characterized by a change in milk composition with no signs of gross inflammation or milk abnormalities. In chronic mastitis, an inflammatory process exists for months and may continue from one lactation to another. Environmental mastitis is caused by organisms such as Escherichia coli [10]. Mastitis in dairy animals leads to economic losses to the dairy farmers and to the dairy industry as a whole in different forms, viz., reduction in milk production (70%), premature culling (14%), veterinary expenses (9%), and milk discarded or low graded (7%). The global estimated economic losses per year due to mastitis amount to USD 35 billion, and INR 6000 crores for the Indian dairy industry [11].
Diagnosis of Mastitis: Traditional versus Recent Trends
Among various indicators of mastitis, viz., gross examination of the udder, milk color, pH test, EC, CMT, SCC, culture test, biomarkers, proteomic techniques, and immunoassay (Figure-1), SCC in milk is highly accepted as an important and rapid indicator of the inflammatory status of the udder. Somatic cells are milk-secreting epithelial cells that shed from the lining of the gland, and white blood cells that have entered the mammary gland in response to injury or infection. The milk somatic cells include 75% leukocytes and 25% epithelial cells. Mammary gland infection level (mastitis), stage of lactation, age, breed, parity, season, stress, diurnal variations, milk transportation, and management are some factors that influence milk SCC at the individual and herd level. The SCC is lower than 1×10⁵ cells/mL in milk from a healthy mammary gland, whereas bacterial infection can cause it to increase to above 1×10⁶ cells/mL [12]. Mean normal SCC values have been reported for Deoni (1.95±0.24 lakhs/mL), Ongole (1.57±0.22 lakhs/mL), and HF crossbred (4.14±0.17 lakhs/mL) dairy cows [13].
IRT
Infrared was discovered by a British astronomer, Sir William Herschel, in 1800. Infrared is an electromagnetic wave, with wavelength ranging from 700 nm to 1 mm. Every object whose surface temperature is above absolute zero radiates energy at a wavelength corresponding to its surface temperature. All objects emit infrared radiation proportional to their body temperature according to the Stefan-Boltzmann law [4]. The process in which energy is emitted as waves is known as radiation. Emissivity refers to an object's ability to emit radiation [14]. The total radiation energy emitted or absorbed by the animal body depends on the emissivity of the skin; the emissivity of most objects is <1 [4]. Early infrared imaging systems were developed during the 1940s and found use in industry and medicine in 1959 [15]. IRT is a simple, effective, on-site, and noninvasive method that detects surface heat, which is emitted as infrared radiation, and generates a pictorial image without causing radiation exposure. In a thermogram, the warmest region appears as white or red, whereas the coolest region appears as blue or black (Figures-2 and 3) [16].
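As a worked example of the Stefan-Boltzmann relation invoked above, the Python sketch below computes the radiant emittance E = εσT⁴ for two skin temperatures; the emissivity of 0.98 (typical for biological skin) and the example temperatures are assumptions of this sketch.

    # Radiant emittance E = epsilon * sigma * T^4 for udder skin at two
    # temperatures, illustrating the signal an IRT camera picks up.
    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def radiant_emittance(temp_c, emissivity=0.98):
        t_kelvin = temp_c + 273.15
        return emissivity * SIGMA * t_kelvin ** 4

    healthy = radiant_emittance(35.0)    # healthy udder skin, ~35 degC (assumed)
    mastitic = radiant_emittance(37.35)  # a quarter ~2.35 degC warmer
    print(f"{healthy:.1f} vs {mastitic:.1f} W/m^2 "
          f"({mastitic - healthy:.1f} W/m^2 difference)")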
Application of IRT
Thermal imaging has potential applications in industry, agriculture, and medicine. IRT is employed for assessment of structures, locating the source of distress, assessment of damage potential in concrete and building material structures, sensing moisture ingress and flow through pipes, etc. In agriculture, IRT is employed for assessing seed viability, estimating soil water status, estimating crop water stress, scheduling irrigation, determining disease- and pathogen-affected plants, estimating fruit yield, evaluating the maturity of fruits and vegetables, etc. There are several applications of IRT in the field of human medicine, such as assessment of neurological disorders, open-heart surgery, vascular diseases, reflex sympathetic dystrophy syndrome, urology problems, mass fever screening, arthritis, and evaluation of breast cancer [14].
Application of IRT in Veterinary Medicine
In bovine medicine, IRT is used for diagnostic purposes, assessment of animal welfare, and feed utilization efficiency. IRT is widely used to identify localized areas of inflammation, such as mastitis in lactating cows [17][18][19], foot and mouth disease [20], assessment of tissue damage and healing due to hot versus cold branding in cattle [21], laminitis [7,8], detection of bovine viral diarrhea in calves [22], monitoring respiratory disorders [23], Actinobacillus pleuropneumoniae infection in pigs [24], detection of estrus and prediction of ovulation in cattle and gilts [5,6,25,26], assessing the effect of scrotal temperature on sperm production in bulls [27][28][29], assessing meat quality in pigs [30], identification of stress [31], measurement of feather cover [32], effects of machine milking on teat and udder surface temperature [33], estimation of heat and methane production in dairy cattle [34], screening of cattle for feed utilization efficiency [9,35], pregnancy diagnosis in mares and wild animals [36,37], evaluation of the thermal status of neonatal pigs [38], monitoring stress during animal transit and welfare in wild animals [39], assessment of the surface temperature of buffalo bulls and its correlation with rectal temperature [40], and evaluation of the thermoregulatory capacity of dairy buffaloes [41].
Milking process
Aljumaah et al. [42] investigated the effect of machine milking on normal physiological parameters of lactating camels and concluded that machine milking had no effect on average rectal (37.88±0.23°C) and vaginal temperatures (37.94±0.14°C), or on respiratory (16.12±0.23 breaths/min) and heart rates (56.78±1.89 beats/min). A significant decrease in udder (−1.0°C) and teat (−1.6°C) surface temperatures was detected 1 h immediately after milking as a consequence of machine milking. Similarly, Alejandro et al. [33] studied the effect of machine milking on teat tissue changes in Murciano-Granadina goats, and they found that machine milking caused a significant increase (p<0.05) of the mean temperature by 6.6, 4.9, 2.5, and 1.5°C at the tip and at 1, 2, and 3 cm from the teat end. Poikalainen et al. [4] determined the possibilities of registering cows' thermal profiles in free-stall housing, investigated the possibilities of automatic registration of cows' udder thermograms in the milking parlor and milking robot, and compared the temperatures of the udder before and after milking.
The authors found no significant difference between the temperature of the left and right udder quarters before and after milking, and they concluded that udder surface temperature does not depend on milking.
Subclinical and clinical mastitis
Polat et al. [19] determined the interrelationships among mastitis indicators and evaluated the mastitis detection ability of IRT in comparison with the CMT in Brown Swiss dairy cows. Subclinical mastitis quarters showed 2.35°C greater skin surface temperature than healthy quarters (Table-2). The udder skin temperature (UST) was positively correlated with CMT score and SCC. There was an exponential increase in SCC and a linear increase in UST as the CMT score increased. The authors concluded that IRT can be employed as a noninvasive, quick tool for screening subclinical mastitis via measuring UST, with a high predictive diagnostic ability similar to CMT when microbial culturing is unavailable. Berry et al. [43] investigated the magnitude and pattern of daily variation in the UST of Holstein-Friesian dairy cows in various stages of lactation using IRT technology. Measures of SCC were below 250,000 with the exception of one animal; it had an average SCC below 250,000 before the trial; however, during the trial, its temperature and SCC rose to levels indicative of subclinical mastitis. No visible signs of mastitis were evident in its foremilk, and bacteriological analysis did not detect any pathogenic organisms. Porcionato et al. [44] used IRT for the detection of subclinical mastitis in Gir cows of the second and third lactation. The researchers used IRT to measure the surface temperature of the teat at three heights (upper, median, and lower) and correlated it with milk SCC and microbiological tests for pathogens. There was a difference in temperature between the heights, with higher temperature values in the upper region than in the other regions of the teats (median and lower). There was no significant correlation between log SCC and the measured UST, or between the type of pathogens and the UST at the three different heights. The authors concluded that the use of a thermal camera allowed the identification of variations in skin surface temperature at different heights of the udder of Gir cows. However, this technique was not effective in the diagnosis of subclinical mastitis. Martins et al. [35] evaluated the use of IRT for mastitis diagnosis in relation to SCC and milk composition in sheep. The UST was higher for the subclinical mastitis group. The clinical mastitis group had the highest fat and protein levels as well as the lowest lactose level. The authors concluded that infrared udder temperatures can be a good auxiliary diagnostic method for mastitis in sheep, principally for subclinical mastitis. Therefore, thermography is a promising technique for subclinical mastitis diagnosis in sheep. Hovinen et al. [18] tested a thermal camera for its capacity to detect clinical mastitis, by experimentally inducing mastitis in cows with 10 μg of E. coli lipopolysaccharide. The thermal camera was successful in detecting a 1-1.5°C temperature change on the udder skin associated with clinical mastitis in all cows, because the temperature of the udder skin of the experimental and control quarters increased in line with the rectal temperature (Table-2). Colak et al. [16] conducted an experiment to determine the merit of IRT for early detection of subclinical mastitis. As the CMT score increased, quarter skin surface temperature increased linearly.
IRT was sensitive enough to perceive changes in skin surface temperature in response to the severity of mammary gland infection as reflected by the CMT score. The study suggests that IRT can be employed for screening dairy cows for mastitis. Willits [17] suggested that IRT is a suitable tool for screening and early detection of mastitis in dairy herds. Metzner et al. [45] compared different algorithms for the evaluation of udder skin thermograms for the detection of E. coli-induced acute mastitis in dairy animals. Analysis of udder surface temperature using different geometric analysis tools (polygons, rectangles, and lines) and descriptive parameters (minimum, maximum, range, arithmetic mean, and standard deviation) revealed that significant changes can be detected best using the analysis tool "polygons" and the descriptive parameter "maximum." They also suggested that IRT was only valid for testing of the hind quarters, as this combination yielded the highest correlation with rectal temperature. The greatest difference in temperature between control and E. coli-inoculated quarters (2.06°C) was found for "polygons" and "rectangles" using "maximum."

Factors Influencing the IRT Imaging of Udder
Before taking IRT images, the animal must be tied properly in a standing position under a shaded shed. The udder is then brushed or wiped with a clean towel to remove dung and dirt. Because mechanical brushing itself changes the udder skin surface temperature [43], the udder quarter is allowed to rest for 10-15 min after brushing or wiping before images are taken. Images are taken with the animal standing, at a distance of 0.6-1 m [46,47]. Images of the front quarters are taken from the lateral side of the animal, and those of the hind quarters from the lateral or posterior side [19]. IRT images must be taken away from direct solar radiation (sunlight) and wind, since increased solar radiation and wind speed cause a rise in the UST [48]. Animal factors such as parity, stage of lactation, and pregnancy may also influence the IRT udder surface temperature pattern. Berry et al. [43] suggested that IRT was only valid for monitoring the hind quarters; the front quarters possibly display different patterns of surface temperature, especially since their surface temperature may be affected to a different degree by thermal radiation emanating from the medial aspect of the adjacent legs. Berry et al. [43] also reported that the hind quarters were more exposed to environmental temperature than the front quarters. Similarly, Chun-He et al. [49] found a normal temperature distribution between the rear left and rear right quarters. However, there was no significant difference in UST among the four quarters [19].

Future Areas of Research
Application of the IRT technique would require a great deal of basic data for different breeds under a wide variety of climatic and environmental conditions and management systems. The analyzed IRT data would help to establish breed- and location-specific normal thermographic profiles. Detailed studies are needed to observe the influence or interaction of inter-quarter differences, breed differences, period of lactation/parity, milk yield, stage of lactation, day-to-day variation of the UST, within-day variation of the UST, body temperature versus UST, clinical condition (subclinical/clinical mastitis), reproductive status (pregnant/nonpregnant), and the causative organisms and their virulence factors resulting in different forms of mastitis.
Conclusion
IRT is a noninvasive, handheld tool with dedicated analysis software that offers remarkable advantages in precision dairy farming and veterinary medicine (diagnosis of mastitis, leg injuries, body surface damage, milking hygiene, etc.). IRT could prove to be an important diagnostic tool, in addition to the conventional techniques available, for monitoring udder health and the early detection of mastitis.

Authors' Contributions
SJ, AM, HAP, MS, KPR, DND, and MAK hypothesized the concept of this review paper. MS, SJ, and AM prepared the manuscript. GJ, MAP, and DKR assisted in collecting and compiling the resource material and in manuscript preparation. All authors read and approved the final manuscript.
Extract of Cornus officinalis Protects Keratinocytes from Particulate Matter-induced Oxidative Stress

The skin is one of the largest organs in the human body and the most exposed to outdoor contaminants such as particulate matter < 2.5 µm (PM2.5). Recently, we reported that PM2.5 induced cellular macromolecule disruption of lipids, proteins, and DNA via reactive oxygen species, eventually causing apoptosis of human keratinocytes. In this study, the ethanol extract of Cornus officinalis fruit (EECF) showed an antioxidant effect against PM2.5-induced cellular oxidative stress. EECF protected cells against PM2.5-induced DNA damage, lipid peroxidation, and protein carbonylation. PM2.5 excessively up-regulated intracellular and mitochondrial Ca2+ levels, which led to mitochondrial depolarization and cellular apoptosis. However, EECF suppressed the PM2.5-induced excessive Ca2+ accumulation and inhibited apoptosis. The data confirmed that EECF greatly protected human HaCaT keratinocytes from PM2.5-induced oxidative stress.

Introduction
Particulate matter (PM) is an air pollutant with harmful effects on the human skin that contribute to conditions such as skin cancers, alopecia, and skin aging [1,2]. In particular, the harmful effects of PM depend on the composition of deleterious contents such as heavy metals (Cu, Mn, Ni, Pb, and Ti) and polycyclic aromatic hydrocarbons [3]. PM < 2.5 µm (PM2.5) is considered fine PM, and its detrimental effects on the human skin are mediated by the generation of excessive intracellular reactive oxygen species (ROS), which creates oxidative stress [4-6]. PM2.5-mediated excessive ROS generation can elicit lipid peroxidation, DNA damage, apoptotic protein expression, and mitochondria-dependent apoptosis, which eventually results in skin irritation and damage [7]. There are more than 65 species classified under the genus Cornus (family Cornaceae), but only two species, Cornus mas and Cornus officinalis, have been reported as medicinal plants used in traditional medicine [8]. These plants are mainly distributed in eastern Asia, including Korea, Japan, and China. C. officinalis is commonly known as cornel dogwood or Asiatic dogwood [9]. C. officinalis grows up to 4-10 m high and has papery leaves that are 5.5-10 cm long; its flowers consist of four petals with a yellow lanceolate tongue that is 3.3 mm long [10]. C. officinalis fruit has been used since ancient times to treat high blood pressure, kidney deficiency, dizziness, spermatorrhea, and waist and knee pain [10,11]. Most related pharmacological studies have revealed that the ethanol extract of C. officinalis fruit (EECF) possesses anti-hyperglycemic, anti-aging, immune-regulatory, and renal and neuro-protective effects [12]. In addition, the neuro-protective, antioxidant, anti-inflammatory, cardiovascular, and anti-diabetic effects of EECF have been described [13]. Furthermore, C. officinalis fruit contains high amounts of volatile compounds, organic acids, carbohydrates, tannins, and iridoids; in particular, iridoid glycosides are among the active ingredients of the fruit [14]. However, there are few reports on the cytoprotective effect of EECF against PM2.5-induced oxidative stress in human keratinocytes. Therefore, this study was conducted to investigate the potential of EECF to mitigate PM2.5-induced cell damage.

Reagents and chemicals
The dried fruit of C. officinalis, collected from an area around the city of Gurye (Jeollanam-do Province, Republic of Korea), was provided by the Gurye Sansuyu Farming Association Corporation. For the preparation of EECF, the dried fruit (20 g) was cut into small pieces and extracted three times with 400 mL of 70% ethanol at 4°C for 3 h. After filtering, the filtrate was concentrated using a vacuum rotary evaporator (EYELA SB-1000, Tokyo Rikakikai Co. Ltd., Tokyo, Japan). The residue was then freeze-dried using a freeze dryer and stored at -80°C. The powder (EECF) was dissolved in dimethyl sulfoxide (DMSO, Sigma-Aldrich Chemical Co., St. Louis, MO, USA) to obtain a final concentration of 100 mg/mL (extract stock solution) and stored at 4°C. The stock solution was diluted with culture medium to the desired concentrations prior to use. Diesel PM2.5 (NIST SRM 1650b) was purchased from Sigma-Aldrich Chemical Co. and dissolved in DMSO to prepare the stock solution (25 mg/mL). To avoid agglomeration of the suspended PM2.5, the solution was sonicated for 30 min [15].

Cell culture
Human HaCaT keratinocytes (Cell Line Service, Heidelberg, Germany) were cultured in Dulbecco's modified Eagle's medium (DMEM, Life Technologies Corporation, Grand Island, NY, USA). The medium was supplemented with an antibiotic solution consisting of 100 units/mL penicillin, 100 µg/mL streptomycin, and 0.25 µg/mL amphotericin B (Gibco, Life Technologies Co., Grand Island, NY, USA), as well as with 10% fetal bovine serum. The cultured cells were incubated in a 100% humidified atmosphere at 37°C with 5% CO2.

Cell viability
The cytotoxicity of EECF on HaCaT cells was measured using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) assay. Cells were cultured in a 96-well plate at a density of 1.0 × 10⁵ cells per well, and specific wells were separately treated with EECF at final concentrations of 25, 50, 100, 200, 300, 400, and 500 µg/mL. The MTT stock solution (2 mg/mL) was incubated with the cells for 4 h until formazan crystals formed. The crystals were then dissolved in DMSO, and the absorbance of the reaction solution was detected using a multi-well spectrophotometer at a wavelength of 540 nm.

ROS detection
Cells were seeded in a 96-well plate at a density of 1.5 × 10⁵ cells, and intracellular ROS levels, generated via 1 mM hydrogen peroxide (H2O2), were measured using the 2',7'-dichlorofluorescein diacetate (DCF-DA, Sigma-Aldrich) assay. For imaging, cells were seeded on chamber slides at a density of 1.5 × 10⁵ cells and incubated with PM2.5 (50 µg/mL) for 1 h. The cells were stained with DCF-DA for 30 min, and the fluorescence emission was detected using confocal microscopy (Carl Zeiss, Oberkochen, Germany).

Lipid peroxidation assay
A four-well chamber slide was used to plate the cells in the presence of 200 µg/mL EECF, followed by exposure to PM2.5 (50 µg/mL) for 24 h and staining with diphenyl-1-pyrenylphosphine (DPPP) for 30 min in the dark. Images were analyzed using a confocal microscope [15].

Protein carbonylation assay
Cells were incubated with 200 µg/mL EECF for 1 h and treated with PM2.5 (50 µg/mL) for 24 h. Protein oxidation was assessed using an OxiSelect™ protein carbonyl enzyme-linked immunosorbent assay kit (Cell Biolabs, San Diego, CA, USA) according to the manufacturer's instructions.
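As a minimal illustration of how the MTT readout described above is typically converted into the viability percentages plotted in Figure 1A, the following Python sketch normalizes blank-corrected A540 values to the untreated control. All readings are invented, and the blank-correction step is an assumption, since the paper does not spell out its normalization.

```python
import numpy as np

# Hypothetical raw A540 readings (triplicate wells); values are invented.
blank = 0.05                                  # medium + MTT, no cells
control = np.array([1.20, 1.18, 1.22])        # untreated cells
treated = {25: np.array([1.19, 1.21, 1.17]),  # two of the tested EECF
           500: np.array([1.15, 1.18, 1.16])} # concentrations (µg/mL)

ctrl_mean = control.mean() - blank
for conc, a540 in treated.items():
    # Viability = blank-corrected treated absorbance / control absorbance.
    viability = 100.0 * (a540 - blank).mean() / ctrl_mean
    print(f"{conc} µg/mL EECF: {viability:.1f}% of control")
```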
Single-cell gel electrophoresis
Cells were seeded in medium with 200 µg/mL EECF in a 1 mL microtube for 30 min and treated with PM2.5 (50 µg/mL) for another 30 min. After coating with 110 µL of 0.7% low-melting agarose, the cells were immersed in lysis buffer (2.5 M NaCl, 100 mM Na2EDTA, 10 mM Tris, 1% N-lauroylsarcosinate) for 1 h at 4°C. An electrical field (300 mA, 25 V) was applied for electrophoresis. Slides were stained with 40 µL of ethidium bromide (10 µg/mL) and analyzed using the Komet 5.5 image analyzer (Andor Technology, Belfast, UK). The percentage of total fluorescence and the tail lengths were recorded (50 cells per slide).

Western blotting
Harvested cells were lysed using 150 μL of protein lysis buffer, and the collected cell lysates were centrifuged at 13,000 rpm for 5 min. The resulting supernatants were collected, and protein levels were analyzed as previously described [19]. Aliquots were electrophoresed using 12% sodium dodecyl sulfate-polyacrylamide gel electrophoresis. The separated proteins were then transferred onto nitrocellulose membranes, which were sequentially incubated with the appropriate primary and secondary antibodies. Protein bands were detected using the Amersham ECL western blotting detection reagents and analysis system (GE Healthcare, Amersham Place, UK).

Hoechst 33342 staining
Cells were treated with 200 µg/mL EECF for 1 h, followed by PM2.5 (50 µg/mL) for 18 h. The cells were stained with Hoechst 33342 (20 µM), and DNA-specific fluorescence was visualized using a fluorescence microscope. Nuclear condensation levels were evaluated and quantified for the apoptotic cells.

Statistical analysis
All experiments were performed in triplicate. Data are presented as means ± standard error and were analyzed with SigmaStat version 3.5 software (Systat Software Inc., San Jose, CA, USA) using analysis of variance (ANOVA) followed by Tukey's test. P < 0.05 was considered statistically significant.

EECF reduced ROS generation
Before commencing the experiment, we sought to determine whether EECF had any cytotoxicity on human HaCaT keratinocytes using the MTT assay with different EECF concentrations (0, 25, 50, 100, 200, 300, 400, and 500 µg/mL; Figure 1A). The results confirmed that EECF was not cytotoxic to HaCaT cells at any of the tested concentrations. EECF showed DPPH radical scavenging activity at all tested concentrations, compared with N-acetylcysteine (NAC), a well-known antioxidant (Figure 1B). Next, the ROS scavenging ability of EECF was tested; concentrations of 25-200 µg/mL showed rapidly increasing scavenging activity against ROS generated via 1 mM H2O2, and 200 µg/mL was therefore selected as the optimal concentration for further experiments (Figure 1C). To assess the ability of EECF (200 µg/mL) to scavenge superoxide anion, ESR spectrometry was performed. Superoxide anions produced by the xanthine/xanthine oxidase system were reduced by EECF, as shown in Figure 1D: the signal of 2,996 in the control was reduced to 1,505 in the presence of EECF. Intracellular ROS generation assessed using the DCF-DA assay revealed that 200 µg/mL EECF ameliorated the green fluorescence intensity caused by PM2.5, as visualized using confocal microscopy (Figure 1E).

EECF significantly attenuated PM2.5-induced lipid peroxidation, protein carbonylation, and DNA damage
The amount of lipid peroxidation was assessed by visualizing the fluorescence intensity of oxidized DPPP, an indicator of lipid peroxidation.
The DPPP oxide fluorescence intensity was higher in PM2.5-treated cells than in control cells, and pretreatment with EECF significantly reduced the fluorescence intensity of PM2.5-exposed cells (Figure 2A). These results indicate that EECF treatment can reduce ROS generation and further confirm its ROS scavenging properties. Next, protein carbonylation was measured; carbonyl groups are formed during the process of protein oxidation [16]. PM2.5 significantly increased the expression of carbonyl moieties, whereas EECF-pretreated cells exhibited notably reduced formation of protein carbonyls when exposed to PM2.5 (Figure 2B). Furthermore, PM2.5-induced DNA damage was monitored using a comet assay. As shown in Figure 2C, treatment with PM2.5 distinctly elongated the comet tail and increased the damaged DNA around the nuclei, while pre-treatment of HaCaT cells with EECF before exposure to PM2.5 markedly reduced the level of damaged DNA in comet tails. Finally, the level of 8-oxoG was analyzed using confocal microscopy, and PM2.5-treated cells showed the highest 8-oxoG level. PM2.5 exposure caused severe DNA lesions in cells, which were revealed by avidin-TRITC binding; EECF attenuated these PM2.5-induced DNA lesions (Figure 2D).

Figure 2. Ethanol extract of C. officinalis fruit (EECF) protected cells against PM2.5-induced lipid peroxidation, protein carbonylation, and DNA damage. (A) The effect of EECF on PM2.5-induced lipid peroxidation was assessed using confocal microscopy after DPPP staining. (B) Protein oxidation was assessed by measuring carbonyl formation. *p < 0.05 and #p < 0.05, compared to the control and PM2.5-treated groups, respectively. (C) DNA damage was assessed using the comet assay. *p < 0.05 and #p < 0.05, compared to the control and PM2.5-treated groups, respectively. (D) The avidin-TRITC conjugate was examined to evaluate DNA oxidative adducts of 8-oxoG using confocal microscopy.

EECF attenuated PM2.5-induced mitochondrial stress
Initially, we hypothesized that the oxidative effect of PM2.5 was mediated by mitochondrial stress. Therefore, intracellular Ca2+ levels were assessed, because previous studies have reported that disruption of Ca2+ homeostasis generates mitochondrial stress [15]. Cells were stained with Fluo-4-AM dye, and confocal microscopy analysis revealed that Ca2+ fluorescence was much higher in the PM2.5-treated group than in the other cells. Pretreatment with EECF clearly reduced the intracellular Ca2+ level of PM2.5-treated cells (Figure 3A). The mitochondrial Ca2+ level was assessed by staining cells with Rhod-2-AM dye, and confocal microscopy analysis revealed that treatment with EECF notably reduced this level (Figure 3B). The Δψm was assessed using JC-1 dye, in which red and green fluorescence indicate the polarized and depolarized states of the mitochondria, respectively [20]. The results indicated that mitochondrial depolarization was enhanced by PM2.5 but notably reduced by EECF, as shown in the confocal microscopy analysis (Figure 3C). PM2.5 treatment also increased apoptosis-related protein expression (Figure 4A), which suggests that caspase-3 was likely involved in the observed cell apoptosis. Pretreatment with EECF attenuated the cell apoptosis: cell nuclei stained with Hoechst 33342 and analyzed by microscopy showed significant nuclear condensation in PM2.5-treated cells, whereas cells pretreated with EECF appeared normal (Figure 4B).
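All of the group comparisons above (control vs. PM2.5 vs. EECF+PM2.5) follow the ANOVA-plus-Tukey procedure given in the Statistical analysis section. The paper used SigmaStat; a minimal open-source equivalent in Python, with invented triplicate values, might look like this:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate measurements (e.g. carbonyl content, arbitrary
# units) for the three conditions compared in the paper; values invented.
control = [1.00, 1.05, 0.98]
pm25    = [2.10, 2.25, 2.05]
eecf_pm = [1.30, 1.25, 1.40]

# One-way ANOVA across the three groups.
f_stat, p = f_oneway(control, pm25, eecf_pm)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

# Tukey's HSD for the pairwise post-hoc comparisons.
values = np.concatenate([control, pm25, eecf_pm])
labels = ["control"] * 3 + ["PM2.5"] * 3 + ["EECF+PM2.5"] * 3
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```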
Discussion
The skin is the largest organ in the body, and it protects the body by acting as a barrier to the external environment [21-23]. PM2.5 is an air pollutant with harmful effects on the skin, such as skin aging and inflammatory skin diseases, mediated by the generation of intracellular ROS [24,25]. One of the most recent studies reported that the dried sarcocarp of C. officinalis contains 11 highly polar compounds, in particular iridoid isomers (7α-O-methylmorroniside, 7β-O-methylmorroniside, 7α-O-ethylmorroniside, and 7β-O-ethylmorroniside) [26]. Gallic acid, 5-hydroxymethylfurfural, morroniside, and loganin are the most abundant compounds in the C. officinalis fruit; however, their content can vary with the state of the fruit, depending on whether it is processed or crude. In particular, loganin possesses immune-regulatory and anti-inflammatory activities, while morroniside is involved in the prevention of diabetic angiopathy [27]. In the present study, EECF did not show cytotoxicity at any of the tested concentrations, as reported previously [28]. A previous study reported that EECF has relatively high DPPH radical scavenging activity, reflecting its antioxidant activity [29]. In agreement with those findings, the results of this study indicate that EECF scavenged DPPH radical and superoxide anion (Figures 1B and 1D). It has been reported that EECF contains flavonoids, which are known to possess antioxidant activity via hydrogen donation [28,30]. Our results showed that EECF exhibited antioxidant activity by attenuating hydrogen peroxide-induced ROS generation and ameliorated intracellular ROS generation, as revealed by DCF-DA staining (Figures 1C and 1E). PM2.5-induced ROS caused oxidative damage, resulting in protein carbonylation, lipid peroxidation, and DNA damage [31]. ROS attack proteins by oxidation, which is the main mechanism of protein modification; such modification can be reversible or irreversible. Protein modification leads to protein carbonylation, protein-protein cross-linking, and adduct formation with lipid peroxidation products. Eventually, proteins become fragmented and degraded through ROS-mediated modification [32]. ROS affect lipids mainly through the hydroxyl and hydroperoxyl radicals. In particular, polyunsaturated fatty acids are converted to lipid peroxyl radicals and hydroperoxides as a result of oxygen insertion. Ultimately, lipid peroxidation negatively affects cellular functions such as protein synthesis and alters biochemical properties [33]. It has been reported that PM2.5 can arrest the cell cycle, resulting in DNA damage and an increased level of 8-OHdG (an oxidative DNA adduct) [34]. DPPP staining revealed that EECF has the ability to reduce PM2.5-induced lipid peroxidation (Figure 2A). In addition, our results illustrated that EECF pretreatment significantly attenuated protein carbonyl formation in cells, while the comet assay revealed the protective effect of EECF against PM2.5-induced DNA damage. EECF significantly attenuated DNA strand breaking and, at 200 µg/mL, also reduced the elevated 8-oxoG level in PM2.5-treated cells. A previous study reported that cellular oxidative stress causes mitochondrial stress, which eventually results in cell apoptosis [35]. Oxidative stress can be further enhanced by mitochondrial Ca2+ accumulation as the endoplasmic reticulum releases Ca2+.
As previously reported, ROS degrade the Δψm [36], and our results revealed that EECF strongly ameliorated the PM2.5-induced excessive Ca2+ accumulation in the cell and mitochondria. This effect restored cellular Ca2+ homeostasis, and EECF restored the Δψm. In conclusion, our results confirmed that EECF has considerable antioxidant activity against PM2.5-induced skin damage.
Who loses more? Identifying the relationship between hospitalization and income loss: prediction of hospitalization duration and differences of gender and employment status

Background
The major determinants of health and well-being include wider socio-economic and political responses to poverty alleviation. To date, however, South Korea has no related social protection policies to replace income loss or prevent non-preferable health conditions for workers. In particular, social protection policies differ by gender and occupational group. This study aimed to investigate how hospitalization affects income loss among workers in South Korea.

Methods
The study sample included 4876 Korean workers who responded to the Korean Welfare Panel Study (KoWePS) in all eight years from 2009 to 2016. We conducted a receiver operating characteristics (ROC) analysis to determine the cut-off point for the length of hospitalization that corresponded to the greatest loss of income. We used panel multi-linear regression to examine the relationship between hospitalization and income loss by gender and employment arrangement.

Results
The greatest income loss for women in non-standard employment and for self-employed men was observed when the length of hospitalization was seven days or less. When workers were hospitalized for more than 14 days, income loss also occurred among men in non-standard employment. In addition, when workers were hospitalized for more than 14 days, the impact of the income loss was felt into the subsequent year.

Conclusion
Non-standard and self-employed workers, and even female standard workers, are typically excluded from public insurance coverage in South Korea, and social security is insufficient when they are injured. To protect workers from the vicious circle of the poverty-health trap, national social protections such as sickness benefits are needed.

Supplementary Information
The online version contains supplementary material available at 10.1186/s12889-022-12647-6.

Introduction
The World Health Organization (WHO) has proposed universal health coverage (UHC) as a strategy for national health policies, under which "all people and communities can use the promotive, preventive, curative, rehabilitative, and palliative health services they need, of sufficient quality to be effective, while also ensuring that the use of these services does not expose the user to financial hardship" [1]. Given this definition, the WHO's UHC considers catastrophic health expenditures a determinant of household income insecurity. Despite the global attention South Korea has received for its achievement in rapidly establishing a universal health insurance system within 12 years, the system covers only about 65% of medical costs, which results in a high level of out-of-pocket healthcare expenditure. In addition, financial hardship due to sudden and unpredictable illness can be caused not only by direct medical costs (i.e., catastrophic health expenditure), but also by indirect costs such as loss of earned income and transportation fees associated with the use of medical services [2]. Those in poor health have a lower chance of achieving favorable levels of income.
Employers may decide to cut employees' wages, demote them to lower positions, or dismiss or replace them to offset the costs incurred by employees' lower productivity and/or absenteeism. Poverty is a significant social determinant of ill health. The absence of social health protection to guarantee workers an adequate level of income when they are sick (i.e., paid sick leave, sickness benefits, etc.) entraps them in a vicious cycle of poverty and poor health. In this regard, sickness benefits and paid sick leave have been introduced to ensure both leave from work and cash benefits that replace wage loss during workers' episodes of illness. While sickness benefits and paid sick leave are instrumental in protecting workers' and their families' health and economic status [3], South Korea is one of the few countries that does not require employers to provide leave to employees for non-work-related illness. Furthermore, only about 7% of enterprises provide paid sick leave to their workers [4]. Recently, Korean public policy has been making efforts to mitigate the blind spots in social coverage resulting from sudden changes in employment relations. In particular, the COVID-19 pandemic has prompted active discussions on the introduction of sickness benefits in South Korea. Therefore, it is timely to generate empirical evidence on the effectiveness of sickness benefits to inform the design and introduction of a nationwide system. Previous studies have shown an association between health shocks and income loss. Most have examined income loss or decline among individuals diagnosed with cancer [5-9]. Other studies have operationalized health shock as diabetes [10], health satisfaction [11], or hospital admission [12,13]. This evidence supports the notion that health shocks are associated with income loss. Three studies conducted in South Korea have also shown that health shocks decrease income [14-16]. The three studies defined health shock differently: health expenditures in relation to earned income [14], having a serious disease or not [15], and hospitalization longer than three days [16]. Higher out-of-pocket expenses for medical services and medication relative to earned income were associated with decreased total income, including wages, private and public transfers, and asset income [14]. When cancer, a cardiac disorder, or cerebrovascular disease occurs, workers' income is substantially reduced (33.7% for cancer, 29.3% for cardiac disorder, and 45.1% for cerebrovascular disease) [15]. Workers who experienced hospitalizations longer than three days in the previous two years earned 23.6% less than workers in comparable positions who had not [16]. However, the empirical question of the degree to which health can influence income for different groups, based on their differential exposure to social determinants (e.g., employment and occupational status, gender), remains unclear [17]. It is necessary to identify the different patterns of causal relationships between health shock and income loss, stratified by employment arrangement and gender, while simultaneously considering the dual and gendered labor market of South Korea. In South Korea, non-standard workers account for nearly 32.9% of the entire workforce, twice the level in other Organization for Economic Co-operation and Development (OECD) member countries. One-quarter of Korean women are employed in low-paying, non-standard positions.
The wage gap between Korean women and men is 32.5%, the largest among OECD countries [18]. Moreover, if better health protects against income loss, then a more unequal distribution of health between different social groups should lead to larger income disparities. Indeed, total family income declined by up to 4.8% among men and 85% among women when they were diagnosed with cancer [5]. One study examined whether the association between health shock and income loss differed by employment arrangement and gender [16]. It concluded that income loss due to hospitalization was more pronounced among non-standard workers than among standard workers, and that unemployment due to hospitalization was more pronounced among women than among men. The present study goes a step further to identify how the intersection of employment arrangement and gender influences income loss. The aim of this study is to identify the causal relationship between health shock and income loss and the different patterns of association by workers' employment arrangement and gender.

Data and Study population
We used data from the latest six waves of the Korean Welfare Panel Study (KoWePS) (2009-2016). The KoWePS is the largest nationally representative longitudinal survey in South Korea. Since 2006, it has surveyed "the dynamic aspects and varying needs of people over the course of their lives, including living conditions, socioeconomic status (SES), and health status." Figure 1 shows the selection process for the study population. In the first step, baseline data from 2009 to 2011 were gathered as follows: a) only workers whose employment status remained unchanged for three years were selected, and b) those who had other types of contractual work, such as workfare, employers, unpaid family workers, non-standard work arrangements, and the self-employed, were excluded. Finally, workers who had never been hospitalized in the previous three years were selected, to ensure a sample of healthy workers at baseline. The data through 2016 were then merged with the baseline data. Next, a) observations with missing values in the major variables were excluded from the analysis, and b) only respondents who participated in all eight years from 2009 to 2016 were retained, to establish balanced panel data (n=4876 each year).

Dependent variable: Change of income
The dependent variable in this study is the change in an individual's earned income. The earned income change was measured by subtracting the average annual earned income of the three baseline years from the earned income of each of the next five years (2012 to 2016). The average earned income of the baseline years was used to minimize or control the effects of economic growth and/or inflation. For self-employed workers in agriculture, forestry, farming, or fishing, negative earned income values may occur because of the possibility of a net loss; that is, income (profit) may be negative because total costs may exceed total sales.

Independent variable: Hospitalization
Health shocks were conceptualized as the experience of hospitalization, and hospitalization was dichotomized into "Yes" and "No." To dichotomize the variable, the length of hospitalization leading to a substantial loss of income was identified by analyzing a receiver operating characteristics (ROC) curve. The ROC curve indicated that hospitalizations longer than three days would lead to a substantial income loss.
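It may help to see how such a cut-off is typically extracted from an ROC curve. A common choice is the threshold maximizing Youden's J (sensitivity + specificity − 1); the Python sketch below uses scikit-learn with an invented toy sample, and the specific optimality rule is an assumption, since the paper does not state which criterion it applied.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical toy data: length of hospitalization (days) as the "score"
# and an indicator of substantial income loss as the outcome; invented.
los = np.array([0, 1, 1, 2, 2, 3, 3, 5, 7, 14])
income_loss = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(income_loss, los)
youden = tpr - fpr                      # Youden's J at each threshold
best = thresholds[np.argmax(youden)]
print(f"optimal cut-off: hospitalization of about {best:.0f} days or more")
```

With the cut-off in hand, the exposure can be dichotomized.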
Therefore, "Yes" was defined as having an experience of hospitalization lasting for greater than three days. "No" was defined as having experience of hospitalization of less than three days, or having no experience of hospitalization. Additionally, 7 days and 14 days were used to dichotomize the variables to identify differences in disease severity. Further, individuals who reported their reasons for hospitalization as childbirth, medical checkups, and convalescence were excluded because such hospitalizations were not caused by sudden health problems. Groups: employment arrangement and gender To examine whether the relationship differs by social groups, workers were stratified based on employment arrangement and gender. Employment arrangement was based on contractual types (i.e., permanent, temporary, daily, and self-employed), working hours (full-time and part-time), and status of occupation and employment insurance. Employment arrangements were categorized into standard employment, non-standard employment, and self-employment. Individuals were defined as having standard work arrangements if they satisfied the following three conditions: permanent contract, full-time, and having both occupation and employment insurance. Respondents of the independent contractual type were defined as self-employed. To define the small self-employed, five or more self-employed people were excluded in this study. Respondents were defined as having non-standard work arrangements if they had temporary or daily contracts, whether they worked full-time, or had insurance. If respondents had permanent contracts but were part-time workers or did not have both occupation and employment insurance, they were also defined as non-standard employees. Covariates: age, education, marital, public health insurance, private health insurance, poverty, chronic disease, disability, and health status In this study, socio-demographic characteristics and SES were controlled to identify the net effect of hospitalization on income loss among workers. Age was operationalized into three groups: 18 ≤ age < 45 years, 45 ≤ age < 65 years, and age ≥ 65 years. Older workers were included to reflect the characteristics of the informal labor market in South Korea. Educational attainment was classified into four groups: under elementary school, middle school, high school, and college or higher. Marital status was classified into four categories: married couple, widowed, divorced or separated, and single (never married). Both public and private health insurance were investigated in binary; National Health Insurance (NHI) subscribers and individuals with private medical insurance were used as references. Poverty was defined as those who received the basic living allowance system: yes or no. Chronic disease and disability were also categorized as binary: yes or no. Subjective health status was investigated on a five-point Likert scale, and the reference was for the healthiest individuals. The square root of earned income of the year was also adjusted to control for the effects of the annual increment. Statistical analyses First, the ROC analysis was conducted to determine the cut-off point for the length of hospitalization that results in a great loss of income. Second, the difference in the change of income loss was assessed between those who experienced hospitalization and those who did not. The differences between them were examined by gender and employment arrangements. 
Third, fixed-effects panel regression analysis was used to investigate the relationship between hospitalization and income loss, controlling for the other related variables; a minimal sketch of this estimator is given after the results below. The outcome Y (income loss) was modeled at three time points α: the year of hospitalization, one year after hospitalization, and two years after hospitalization. The individual confounders were age, education, marital status, public health insurance, private health insurance, poverty, chronic disease, disability, and subjective health status, and the models were stratified by group (gender and employment status).

Characteristics of samples
The baseline characteristics of the participants in the panel regression analysis from 2011 to 2016 are summarized in Table 1, with 2011 as the baseline.

Change of income loss (t0, t0+1, t0+2) by employment arrangement and gender
Figure 2 shows the change in income relative to the previous year according to employment arrangement and gender. In the standard group, both males and females increased their earnings over the previous year, regardless of whether they were admitted to hospital. However, income in the non-standard and self-employed groups decreased compared with the previous year. In particular, those with more than two days of hospitalization had a significantly greater rate of income decline than those with one day of hospitalization or no hospitalization, and the decrease was greater among women than among men. Table 2 shows the results of the panel regression analysis from 2011 to 2016 on the effect of hospitalization experience on income loss according to employment arrangement and gender. Men with standard employment did not experience significant income loss from hospitalization. However, standard and non-standard working women who were hospitalized for more than three days had a greater loss of income than those who were not hospitalized (more than 3 days: standard, t0: β = −439.71, p = .038; non-standard, t0: β = −151.67, p = .016). Among non-standard and self-employed men, those hospitalized long-term suffered greater income losses than those who were not (more than 7 days: non-standard, t0: β = −262.67, p = .048; more than 14 days: non-standard, t0: β = −692.18, p < .001; non-standard, t0+1: β = −449.29, p = .018; self-employed, t0: β = −332.58, p = .034), as did women (more than 7 days: standard, t0: β = −530.55, p = .037; non-standard, t0: β = −220.37, p = .002; more than 14 days: non-standard, t0: β = −275.79, p = .003; self-employed, t0: β = −190.70, p = .046).

Discussion
UHC holds that the entire population should be able to use the necessary health services while remaining free from financial burdens [19]. Importantly, for UHC to secure universalism and provide high-quality services, it should ensure access to individual income security as well as minimize the financial burden of medical costs. It is well documented that a health shock (i.e., a catastrophic medical expense) can lead to poverty [20-23]. The process of economic impoverishment caused by a disease or a worsening health status involves direct losses (i.e., medical expenditures) as well as indirect losses (i.e., income loss). Therefore, this study first investigated the effect of hospitalization on income loss and the differences in these effects by gender and employment status among South Korean workers. Second, based on the findings, it redefined the concept and meaning of the UHC definition for the healthcare and social protection systems in South Korea.
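As flagged in the Statistical analyses section, the following is a minimal Python sketch of the fixed-effects (within) estimator behind Table 2, using simulated data. The outcome y mimics the dependent variable (income minus the three-year baseline average) and hosp mimics the dichotomized hospitalization indicator; the covariates, stratification, and any clustering of standard errors used in the actual KoWePS analysis are omitted, so this illustrates the estimator rather than reproducing the paper's model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical long-format panel: one row per worker-year; values invented.
rng = np.random.default_rng(0)
n, t = 200, 5
df = pd.DataFrame({
    "worker": np.repeat(np.arange(n), t),
    "hosp": rng.integers(0, 2, n * t),   # dichotomized hospitalization
})
# True effect of hospitalization set to -300, plus noise and a
# time-invariant worker fixed effect.
df["y"] = (-300 * df["hosp"]
           + rng.normal(0, 100, n * t)
           + np.repeat(rng.normal(0, 50, n), t))

# Within (fixed-effects) transformation: demean y and x by worker, which
# sweeps out the time-invariant worker effects.
demeaned = df[["y", "hosp"]] - df.groupby("worker")[["y", "hosp"]].transform("mean")
res = sm.OLS(demeaned["y"], demeaned["hosp"]).fit()
print(res.params, res.bse)   # slope should be close to -300
```

The demeaning step is what makes this a fixed-effects estimator: any stable worker-level trait (ability, occupation, baseline health) drops out of the regression.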
This study reveals the importance of measures that ensure the right to work and the right to health among workers who are vulnerable to poverty caused by a health shock. Income loss upon hospitalization was greater for non-standard and self-employed workers than for standard workers. It should be noted that while income loss was seen among non-standard working women and self-employed men when the length of hospitalization was 7 days or less, a statistically significant income loss was also observed among non-standard working men when the stay extended beyond 14 days. Previous studies have likewise revealed greater income loss among workers who experienced serious diseases such as cancer, especially non-standard workers and low-income earners [7,24]. Meanwhile, Sparrow et al. [25] reported greater income loss due to worsening health among non-standard workers (-226.8%) and the self-employed (-53.7%) who were not in a low-income bracket. A study in Korea found that non-standard workers who experience a health shock are more likely to lose their jobs than standard workers [16]. The differential effect on income loss by employment status, even among workers outside the low-income bracket, may reflect differences in the type of work by employment status and in the availability of workplace welfare systems such as paid leave. Non-standard workers in precarious labor may face unemployment when they are sick, with no option of taking a leave of absence to recover their health [26]. Among wage earners in particular, income loss due to hospitalization occurred mainly among non-standard working women. These findings are consistent with previous studies that reported a greater income gap for women than for men when workers fall sick, or income loss only among women [5,27-29]. It is worth noting that, in this study, women's loss of income was greater despite the average length of hospitalization being longer for men than for women and despite male workers' higher risk of disability.

Note to Figure 2 and Table 2: t0 = income for the year of hospitalization minus the average income for the three years before hospitalization; t0+1 = income for the year following hospitalization minus the same baseline average; t0+2 = income two years after hospitalization minus the same baseline average. Table 2 (the effect of hospitalization on earned income loss by gender among Korean workers, 2011-2016; standard, N=556; non-standard, N=986) is adjusted for age, education, marital status, public health insurance, private health insurance, poverty, chronic disease, disability, and subjective health status.

This can be understood in the labor-social context of Korea, which has a particularly high rate of non-standard work among women compared with men and the largest gender income gap among OECD member countries. In terms of the ratio of non-standard workers by gender in Korea, the proportion of non-standard workers among total wage earners in 2019 was high for women, at 45%, compared with 29.4% for men [30].
The proportion of non-standard workers is higher among female workers than among male workers, especially in the form of temporary and part-time employment with lower income. Moreover, the hourly wage gap between male and female workers is 34.1%, the largest among OECD countries [31]. Further, social insurance in South Korea, which covers mainly workers with standard employment arrangements, hardly protects the precariously employed. Therefore, Korean women, particularly those with non-standard employment arrangements, are expected to be more vulnerable to income loss when they fall sick. By contrast, among the self-employed, income loss was significant only among men. Compared with men, Korean women find it harder to work as wage workers, especially as they get older. When the spouse runs a business, women often work as unpaid family workers and support their husbands' work [32]. This possibility cannot be excluded in this study, because unpaid family workers without income were also included among the self-employed. Alternatively, not only wage workers (including regular and non-regular workers) but also self-employed women belong to female groups with low average income, such as unpaid family workers, which can flatten the effect through relatively small differences in income loss [13]. In addition to gender and employment status, this study confirmed that workers' income loss is affected by general characteristics. Middle-aged workers aged 45-65, who were actively engaged in economic activities, had relatively little income loss even when hospitalized, whereas workers aged 65 or older lost a greater amount of income when hospitalized; that is, workers aged 65 years or older are much more vulnerable to income loss when they fall sick [16]. Regarding marital status, male workers lost less income when they had a spouse than when they were single or widowed. This may reflect presenteeism, wherein people endure illness and keep earning income out of responsibility for their families. Contrary to the results of previous studies, less income loss occurred when workers with spouses experienced hospitalization, which may be because this study did not distinguish spouses' working status [16]. In contrast, it is notable that women showed the opposite result, especially divorced women, who lost less income when living alone than women with a spouse. This may be due to the burden of childcare after divorce or the economic burden of making a living alone. In conclusion, from a different perspective, both groups are likely to be at risk: the group whose income loss was greater due to disease faces direct financial risk, while among the groups with relatively little income loss, men are predicted to continue working with greater presenteeism, as are women, who carry childcare or livelihood burdens. Therefore, if a social safety net such as paid sick leave or a sickness allowance is established, it will not only compensate income loss but also prevent the aforementioned presenteeism (working even when sick) and have a positive long-term effect on family care. For workers without private insurance and for the poor, however, the income loss is even more severe. Based on our findings, blind spots have been identified in the social security system for workers in the Korean labor market who face a complex crisis involving both health and unemployment.
While many OECD countries have sickness benefit systems to ensure that workers fully rest during illness and recover, such benefits are not available in South Korea except for public officials. Several studies have reported the positive effects of sickness benefits or paid sick leave, which mitigate workers' income loss, allow them to rest, and consequently improve labor productivity [27,33,34]. Such benefits also allow timely treatment, which in turn contributes to the maintenance and improvement of workers' physical and mental health [35-37]. Given the current lack of a system to protect Korean workers from income loss (e.g., sickness benefits, paid sick leave), it is imperative to consider introducing and implementing such a system. Since non-standard workers and the self-employed are especially vulnerable to income loss, as they have no annual or monthly leave at their workplaces, it is critical to take a step-wise approach and set priorities among groups when introducing the system. Australia and New Zealand provide sickness benefits for the low-income group, and Germany implemented sickness benefits for the low-income group during the initial phase of introduction, followed by a step-wise expansion of coverage. Switzerland operates and manages a separate sickness benefit system for the self-employed. Based on the results of this study, an extended version of the WHO definition of the UHC may be proposed. As mentioned earlier, the purpose of UHC is to maintain and improve individuals' health and address health inequality; therefore, it should move beyond medical costs alone and include costs that arise indirectly. Despite the wide implications of the UHC concept, many countries have designed their UHC to cover only direct medical costs. Economic costs must be considered in addition to medical costs, and the demographic characteristics and vulnerability of various groups should be taken into account from a social perspective. With the onset of the COVID-19 pandemic, workers who have to worry about their livelihoods point out that resting is difficult in reality, drawing attention to the role of social protection such as sickness benefits. When infectious diseases such as COVID-19 spread, sickness benefits serve two key functions: 1) ensuring workers' right to rest, and 2) preventing social problems such as the spread of infectious diseases. For example, tuberculosis (TB), a typical infectious disease with a long treatment period, increases the risk of poor TB treatment outcomes, exacerbates poverty, and contributes to sustaining TB transmission. Thus, social protective interventions that prevent or mitigate other financial risks associated with TB, including income losses and non-medical expenditures such as transportation and food, are also important [38]. Figure 3 shows the UHC double-cube model presented by the authors, in which the blue cube shows the coverage of direct costs for existing healthcare expenditures and the orange cube shows the coverage of indirect costs for social protection. In the blue cube, the three dimensions describe who is covered, which services are covered, and how much of the cost of those services is covered, following the WHO [2]. The orange cube is suggested to consist of three smaller cubes with an extended concept. The first is the income loss arising from workers' diseases.
Specifically, it is classified by the entity that compensates for the loss of income: for example, "paid sick leave," in which the worker bears part of the lost income and the workplace compensates for the rest, and "sickness benefits," in which the state compensates for workers' income loss. The second cube covers cases in which earning capacity is found to be permanently lost, even though ongoing medical expenses and income after the onset of disease are supported; this can primarily be guaranteed through the payment of a "disability pension" within social security pensions. The last cube covers transportation costs, care costs, and other expenses not directly related to medical care. The extended concept of the UHC could take a step closer to bridging the health gaps that arise across the entire healthcare system, as a framework that can realize both universality and equity.

Limitations
This study has several limitations. First, disease severity was not considered. The KoWePS does not provide information on medical expenses or disease diagnoses; however, as the data differentiate the length of hospital stay into short- and long-term, the length of hospitalization was used as a proxy for disease severity. Second, apart from private health insurance, which was used as a control variable, the study did not consider moderating factors such as social support or the use of non-income assets that a worker may rely on while sick. Nevertheless, in Korea the greatest support during a worker's sickness does come from workplace support systems or private insurance. Finally, the study was limited to stratified analyses rather than interaction analyses between hospitalization experience and work type or gender, so more robust comparisons across worker types and gender could not be conducted. In future studies, it will be necessary to verify a clear causal relationship between income loss and the interaction of work type, gender, and hospitalization experience using a sufficient number of samples.

Conclusion
It is meaningful that this study confirmed, through empirical analysis, the indirect costs of income loss and proposed an extended concept of the UHC, while also showing that workers' access to medical care and protection against medical expenses need to be ensured. This study confirmed that if a worker is hospitalized for more than 2 weeks, income loss can persist into the following year, in line with prior studies that observed long-term effects on income [6,12,13]. Therefore, based on the present findings, it is reasonable to propose securing at least a two-week benefit period as part of developing Korea's paid sick leave or sickness benefit system; several OECD countries provide at least two weeks of sickness benefit payments [39]. Furthermore, it is urgent to establish a system that protects all workers, including the most vulnerable, given the extreme disparities in the labor market by employment arrangement and gender. Based on the present findings, introducing a paid sick leave or sickness benefit system for the entire population, including non-standard workers, the self-employed, and standard working women, will be necessary to implement the UHC.
Thus, it is critical for the Korean government to intervene continuously and undertake efforts to ensure universal health protection, to fight the resulting poverty, and to address the income and health gaps.
The stabilization of wave equations with moving boundary

In this paper, we consider the stabilization of wave equations with moving boundary. First, we show the solution behaviour of the wave equation with Neumann boundary conditions; that is, the energy of the wave equation with mixed boundary conditions may decrease, increase, or be conserved, depending on the range of the parameter. Second, we prove the well-posedness and stabilization of the wave equation with time delay and moving boundary. The main objective of this paper is to investigate the boundary feedback stabilization of wave equations.

Stabilization of wave equations
The first purpose of this paper is to study the stabilization of the following wave equation with boundary damping:
\[
\begin{cases}
u_{tt} = u_{xx} & \text{in } Q_k,\\
u_x(0,t) = 0, \quad u_x(l_k(t),t) + a\,u_t(l_k(t),t) = 0 & \text{on } (0,\infty),\\
u(x,0) = u^0, \quad u_t(x,0) = u^1 & \text{in } (0,1),
\end{cases}
\tag{1.1}
\]
where $a \in \mathbb{R}$, $(u^0,u^1)$ is any given initial datum, $a$ is the control, and $u$ is the state variable. Let $r = (1,a) \in \mathbb{R}^2$; then we see that
\[
u_x\bigl(l_k(t),t\bigr) + a\,u_t\bigl(l_k(t),t\bigr) = \frac{\partial u}{\partial r}\bigl(l_k(t),t\bigr).
\]
Obviously, the right boundary condition of system (1.1) is the directional derivative of $u$ along the direction $r$. We are concerned with the relationship between the control $a$ and the stability of (1.1). To this aim, define the following energy of system (1.1):
\[
E_1(t) = \frac12 \int_0^{l_k(t)} \bigl(u_t^2(x,t) + u_x^2(x,t)\bigr)\,dx .
\]

Definition 1.1.
(1) The energy of (1.1) decays only at a rate of $m$-th order polynomials if there exist constants $c_1, c_2 > 0$ and an $m$-th order polynomial $\varphi(t)$ such that the energy of (1.1) satisfies
\[
\frac{c_1\,E_1(0)}{\varphi(t)} \le E_1(t) \le \frac{c_2\,E_1(0)}{\varphi(t)}, \qquad t > 0.
\]
(2) The energy of (1.1) decays at a rate no less than $m$-th order polynomials if there exist a constant $C > 0$ and an $m$-th order polynomial $\varphi(t)$ such that the energy of (1.1) satisfies
\[
E_1(t) \le \frac{C\,E_1(0)}{\varphi(t)}, \qquad t > 0.
\]
(3) The energy of (1.1) decays at a rate no more than $m$-th order polynomials if there exist a constant $C > 0$ and an $m$-th order polynomial $\varphi(t)$ such that the energy of (1.1) satisfies
\[
E_1(t) \ge \frac{C\,E_1(0)}{\varphi(t)}, \qquad t > 0.
\]
(4) System (1.1) is said to be exponentially stable if there exist constants $C, \delta > 0$ such that for any given $(u^0,u^1) \in H^1_L(0,l(t)) \times L^2(0,l(t))$, the energy of (1.1) satisfies
\[
E_1(t) \le C\,E_1(0)\,e^{-\delta t}, \qquad t > 0.
\]

We will prove that the energy of system (1.1) may increase, decrease, or be conserved, depending on the range of the control $a$. This result is stated as follows.

Theorem 1.2. Fix $0 < k < 1$, and let $a_1$, $a_2$, $b_1$, $b_2$ denote the four threshold values of the control defined below. It is easy to check that $a_1 < b_1 < b_2 < a_2$.
(1) If $a < a_1$ or $a > a_2$, then the energy of system (1.1) is increasing. Furthermore, there exist solutions of (1.1) whose energy increases only at a polynomial rate.
(2) If $a = a_1$ or $a = a_2$, then the energy of system (1.1) is conserved.
(3) If $a_1 < a < a_2$, then the energy of system (1.1) is decreasing. Moreover, there exist solutions of (1.1) whose energy decreases only at a polynomial rate. For $a = b_1$ or $a = b_2$, the energy of (1.1) decays only at a rate of first order polynomials; for $b_1 < a < b_2$, the energy of (1.1) decays at a rate no less than first order polynomials; for $a_1 < a < b_1$ or $b_2 < a < a_2$, the energy of (1.1) decays at a rate no more than first order polynomials.

Stabilization theory has been widely investigated for hyperbolic equations in cylindrical domains, and there are a great number of results (see [5,8,12,13,17,20] and the references therein). In physical situations, many phenomena evolve in domains whose boundary has moving parts.
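The displayed definitions of $a_1$, $a_2$, $b_1$, $b_2$ do not survive in this copy. Assuming $E_1$ is the standard wave energy written above, a direct computation recovers candidates for $a_1$ and $a_2$ that match the trichotomy in Theorem 1.2; this is a sketch under that assumption, not the paper's verbatim derivation.

```latex
% Sketch, assuming E_1(t) = (1/2)\int_0^{l_k(t)} (u_t^2 + u_x^2)\,dx and
% l_k(t) = 1 + kt. Differentiate, use u_{tt} = u_{xx}, integrate by parts,
% and apply u_x(0,t) = 0 and u_x(l_k(t),t) = -a\,u_t(l_k(t),t):
\[
  \frac{d}{dt}E_1(t)
    = \frac{k}{2}\bigl(u_t^2 + u_x^2\bigr)\Big|_{x=l_k(t)}
      + \bigl(u_x u_t\bigr)\Big|_{x=l_k(t)}
    = \Bigl(\frac{k}{2}\bigl(1+a^2\bigr) - a\Bigr)\,
      u_t^2\bigl(l_k(t),t\bigr).
\]
% The coefficient vanishes iff k a^2 - 2a + k = 0, i.e. presumably
\[
  a_{1,2} \;=\; \frac{1 \mp \sqrt{1-k^2}}{k},
\]
% which satisfies 0 < a_1 < 1 < a_2 and makes dE_1/dt negative exactly for
% a_1 < a < a_2 -- consistent with parts (1)-(3) of Theorem 1.2. The
% thresholds b_1, b_2 (governing the polynomial rates) cannot be recovered
% from this computation.
```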
Stabilization theory has been widely investigated for hyperbolic equations in cylindrical domains, and there is a great number of results (see [5,8,12,13,17,20] and the references therein). In physical situations, many phenomena evolve in domains whose boundary has moving parts. For instance, consider a heat process in a combustion chamber attached to a piston, where part of the boundary moves with the motion of the piston (see [3]). Another example is the vibration of an extendible flexible beam whose right end is supported by a movable base and whose left end is embedded inside a bearing permitting extension and contraction of the beam (see [22]). For the wave equation with moving boundary, qualitative theory results have been obtained in the literature (see, for instance, [2,4,18,19] and the references therein). However, very few results are known on the stabilization of hyperbolic equations in non-cylindrical domains. To the best of our knowledge, [3] is the first to treat the stabilizability problem of the wave equation in a domain with moving boundary; the authors proved that the wave equation with moving boundary is stabilizable with viscous damping and compensation. In [10], the author proved that the wave equation in a finite moving domain is stable when the movement is assumed to be slower than light and periodic. Further, an optimal feedback stabilization of a string with moving boundary was treated in [9], where the author showed that if the movement is not too fast, the energy decays exponentially. Recently, in [21], the authors analyzed the stabilization of wave dynamics by a moving boundary, where the domain remains bounded and undergoes phases of expansion and contraction. The stabilization of the wave equation with moving boundary and Dirichlet-Neumann boundary conditions was considered in [1], where the energy decays exponentially when the movement is slower than light and periodic. In this paper, we study the stabilization of the wave equation (1.1) with moving boundary, where the movement satisfies $l_k(t) = 1 + kt$, $k \in (0,1)$. On the other hand, if the left boundary condition of (1.1) is replaced by the Dirichlet boundary condition, we refer to [11], where the authors studied the stabilization of the one-dimensional wave equation with a general moving boundary. Although the moving boundary we consider is a special one, we give a more explicit energy estimate for system (1.1).

2. Stabilization of wave equation with time delay. The second objective of this paper is devoted to studying the stabilization of the following wave equation with time delay, labelled (1.2), where $\mu_1, \mu_2 \in \mathbb{R}$, $(u_0, u_1, g_0)$ is any given initial datum, and the delay satisfies $\tau > 0$. The energy of (1.2) is defined analogously, with an additional delay term weighted by a positive coefficient $\xi$. In this paper we establish a relationship between stability and the sizes of the coefficients $\mu_1, \mu_2$ and the time delay $\tau$ for (1.2); the result is stated as Theorem 1.3 below. In the past decades, many authors have focused on the stabilization of the wave equation with time delay in cylindrical domains; we mention [6,15,16] and the references therein for a detailed account. In particular, the stabilization of the one-dimensional wave equation with time delay in a cylindrical domain was discussed in [23], where the wave equation is exponentially stable when $\mu_1 > \mu_2$ and the system is unstable when $\mu_1 < \mu_2$. Moreover, when $\mu_1 = \mu_2$, if $\tau \in (0,1)$ is rational, then the system is unstable; if $\tau \in (0,1)$ is irrational, the system is asymptotically stable. However, as far as we know, this paper is the first attempt to study the stabilization problem for the wave equation with time delay and moving boundary. It is more complex to treat the stabilization problem for system (1.2) with moving boundary than in the cylindrical case.
Moreover, we observe that if $\mu_2 = 0$, then system (1.2) degenerates to the Dirichlet system without time delay, and the conclusion of Theorem 1.3 (1) coincides with that in [11]. The paper is organized as follows. In Section 2, we study the well-posedness of problem (1.1). In Section 3, we give the proof of Theorem 1.2 and some examples. In Section 4, we prove that problem (1.2) is well-posed. Section 5 is devoted to the proof of Theorem 1.3.

(2) We will denote by $AC^i(\Omega)$ the set of all functions $u$ defined on $\Omega$ with the following property: if $M_i$ denotes the set of lines $P(x_1,\dots,x_{i-1},x_{i+1},\dots,x_N)$ on which $u$ fails to be absolutely continuous, then $\mu_{N-1}(M_i) = 0$. Let us denote by $[\partial u/\partial x_i]$ the classical partial derivative of $u$ with respect to $x_i$. Since $u$ is absolutely continuous on almost all lines $P(x_1,\dots,x_{i-1},x_{i+1},\dots,x_N)$, this derivative exists almost everywhere in $\Omega$. (3) Assume that $\Sigma \subset \mathbb{R}^N$ is an open set. Put $\mathcal{D}(\Sigma) = C_0^\infty(\Sigma)$ and use the symbol $\mathcal{D}'(\Sigma)$ to denote its dual. $H^1_{\mathrm{loc}}(\Sigma)$ is defined as the space of distributions $\varphi$ such that $\psi\varphi \in H^1(\Sigma)$ for all $\psi \in \mathcal{D}(\mathbb{R}^N)$.

Before proving the main theorem, we introduce three lemmas. In particular, $u$ can be continuously extended to $\overline{Q_k}$, the traces of $u$ on each line $\{(x,t) \in Q_k \mid t = t_0\}$ belong to $H^1((0,l_k(t_0)))$, and the traces of $u$ on the boundary $\partial Q_k$ of $Q_k$ are well defined. Based on the three lemmas above, we have the following well-posedness result (Theorem 2.4).

Proof. The whole proof is divided into five steps. The first step. A solution of the wave equation can be written in d'Alembert form $u(x,t) = f(t+x) + g(t-x)$ with $f, g \in H^1_{\mathrm{loc}}(\mathbb{R})$. The second step. Using the Neumann boundary condition $u_x(0,t) = 0$, we get $f'(t) - g'(t) = 0$, and so $g = f$ up to an additive constant; absorbing the constant into $f$, we write $u(x,t) = f(t+x) + f(t-x)$. The third step. Using the moving boundary condition $u_x(l_k(t),t) + a\,u_t(l_k(t),t) = 0$, we assert that
\[
(a+1)\,f'\big((1+k)t+1\big) + (a-1)\,f'\big((1-k)t-1\big) = 0.
\]
The fourth step. Based on the initial data, $u(x,0) = f(x) + f(-x) = u_0(x)$ and $u_t(x,0) = f'(x) + f'(-x) = u_1(x)$ for $x \in (0,1)$, we determine $f$ on $I_0 = (-1,1)$; this is (2.2). The fifth step. We extend $f$ from $I_0$ to $\mathbb{R}$: by (2.2) and the continuity of $f$, the extension is unique; since $f$ is unique, $u$ is unique.

In the special case $a = -1$, the third-step relation forces $f' = 0$ a.e. in $\mathbb{R}$, and consequently $f = c$ and $u = 2c$ for some $c \in \mathbb{R}$. Hence $u_0 = 2c$ and $u_1 = 0$, which shows that (1.1) has only trivial (constant) solutions at $a = -1$. It follows that $f' \circ F^n = 0$ a.e. in $\mathbb{R}$ in the corresponding cases; consequently, solutions of (1.1) are constant in the corresponding sub-domain (see Figure 1). By (2.2), $f'$ is known a.e. on $I_0$; using relation (2.3), we obtain $f'$ a.e. on each $I_n$. Furthermore, by integrating $f'$ over $I_n$, $f$ is known on every $I_n$ up to a constant $C_n$ (depending on $n$), with $C_0 = f(0) = \frac{1}{2}u_0(0)$. Thus, by the continuity of $f$, $f$ is unique a.e., and $u$ is uniquely determined in $H^1_{\mathrm{loc}}(Q_k)$ by the initial-boundary conditions. Finally, we only need to determine $C_n$ for $n \ge 1$; the argument for $C_n$, $n \le -1$, is analogous. Since $I_n := (F^n(-1), F^n(1)) = F^n(I_0)$, for any $y \in I_n$ there exists $x \in I_0$ such that $y = F^n(x)$, with $F^{-n}(y) \in I_0$. The function $f$ is known on $I_0$ by (2.2). Hence, for any $y \in (1, F(0))$, we can express $f(y)$ through $f(F^{-1}(y))$ via (2.3); arguing similarly for any $y \in (F^n(-1), F^n(0))$ and on the adjacent subinterval ending at $F^{n-1}(0)$, since $F^{-n}(y) \in I_0$, and matching the resulting expressions by the continuity of $f$, we determine each $C_n$. This completes the proof of Theorem 2.4.

Remark 1. Given $(x,t) \in Q_k$, put $\xi = t + x$ and $\eta = t - x$. An easy computation shows that $\xi \in K_1 = (0,+\infty)$ and $\eta \in K_2 = (-1,+\infty)$. Although we extend $f$ from $I_0$ to $\mathbb{R}$ using the boundary conditions, to prove the existence of solutions in $Q_k$ it suffices to determine a unique $f$ on $(-1,+\infty)$.
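The third-step relation above is a reconstruction; the following worked block (assuming only the d'Alembert form and the boundary conditions of (1.1), and consistent with Remark 1 and with the constants $\mu_a$, $\theta_k$ introduced in Section 3) shows how it arises:
```latex
% With u(x,t) = f(t+x) + f(t-x):
\[
u_x = f'(t+x) - f'(t-x), \qquad u_t = f'(t+x) + f'(t-x).
\]
% On the moving boundary x = l_k(t) = 1 + kt, where t + x = (1+k)t + 1
% and t - x = (1-k)t - 1, the condition u_x + a u_t = 0 becomes
\[
(1+a)\,f'\big((1+k)t+1\big) + (a-1)\,f'\big((1-k)t-1\big) = 0,
\]
% i.e., for a != -1,
\[
f'\big((1+k)t+1\big) = \mu_a\, f'\big((1-k)t-1\big), \qquad \mu_a = \frac{1-a}{1+a}.
\]
```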
3. The proof of Theorem 1.2. Without loss of generality, we assume that all functions are sufficiently smooth; otherwise, a standard smoothing technique can be used.

Proof of Theorem 1.2. The proof falls naturally into two parts.

Step 1. We study the relationship between the energy of (1.1) and $a$. Calculating the derivative of $E_1(\cdot)$ with respect to $t$, using the first equation and the boundary conditions in (1.1) (integration by parts, $u_x(0,t) = 0$, $u_x = -a u_t$ and $l_k'(t) = k$ at $x = l_k(t)$), we arrive at
\[
E_1'(t) = \frac{1}{2}\big(ka^2 - 2a + k\big)\,u_t^2\big(l_k(t),t\big). \tag{3.1}
\]
Let $f(a) = ka^2 - 2a + k$; then (3.1) can be written as
\[
E_1'(t) = \frac{1}{2}f(a)\,u_t^2\big(l_k(t),t\big). \tag{3.2}
\]
It is easy to check that the discriminant of $f(a)$ is $\Delta = 4 - 4k^2 > 0$; hence the two roots are $a_{1,2} = \frac{1 \mp \sqrt{1-k^2}}{k}$. When $a < a_1$ or $a > a_2$, we conclude that $E_1'(t) > 0$ and the energy of (1.1) is increasing; when $a = a_1$ or $a = a_2$, $E_1'(t) = 0$ and the energy of (1.1) is conserved; when $a_1 < a < a_2$, $E_1'(t) < 0$ and the energy of (1.1) is decreasing.

Step 2. Integrating (3.2) over $(0,T)$, we get
\[
E_1(T) = E_1(0) + \frac{1}{2}f(a)\int_0^T u_t^2\big(l_k(t),t\big)\,dt. \tag{3.3}
\]
Multiplying the first equation in (1.1) by $x u_x$, integrating over $Q_k^T$, and using Green's formula, we obtain a boundary identity on $\Gamma_R$, where $d\sigma$ is the length element on $\Gamma_R$ and $n_t, n_x$ are the components of the unit exterior normal $n$ on $\Gamma_R$ corresponding to time and space, respectively. Notice that $x = l_k(t)$ on $\Gamma_R$. Transforming the curvilinear integral on $\Gamma_R$ into a single integral in $t$, using the moving boundary condition, and rearranging, one gets (3.4). Let $g(a) = a^2 - 2ka + 1$; its discriminant is $\Delta = 4(k^2 - 1) < 0$, so $g(a) > 0$ for all $a \in \mathbb{R}$. Writing (3.4) accordingly as (3.5) and integrating over $Q_k^T$, we obtain (3.6); then (3.5) and (3.6) yield (3.7). Set
\[
h(a) = kg(a) + f(a) = k(a^2 - 2ka + 1) + ka^2 - 2a + k = 2\big(ka^2 - (k^2+1)a + k\big).
\]
The discriminant of $h(a)$ is $\Delta = 4(k^2+1)^2 - 16k^2 = 4(k^2-1)^2 > 0$; hence $h(a)$ has the two roots $b_1 = k$ and $b_2 = \frac{1}{k}$, and it is easy to check that $a_1 < b_1 < b_2 < a_2$. From (3.7) we get (3.9); rearranging, we arrive at (3.10), which together with (3.11) and $l_k(t) = 1 + kt$ yields (3.12). According to (3.12), when $a = k$ or $a = \frac{1}{k}$, system (1.1) decays at a rate of first-order polynomials. Substituting (3.3) into (3.14) and using (3.11), and noticing that $f(a) < 0$ for all $a \in (b_1,b_2) \subset (a_1,a_2)$ and $l_k(t) = 1+kt$, we finally conclude that when $k < a < \frac{1}{k}$, system (1.1) decays at a rate which is no less than first-order polynomials. For $a_1 < a < b_1$ or $b_2 < a < a_2$ we have $h(a) = kg(a) + f(a) > 0$ (3.15); applying (3.15) to (3.7) and using (3.3) and (3.11) again, we deduce that in this range system (1.1) decays at a rate which is no more than first-order polynomials.

2. Examples. Let us go further and interpret the rate at which the energy of (1.1) decays or grows with some examples. Suppose that (1.1) has a solution of the separated form (3.16). Using the moving boundary condition, we get (3.17). For $a \in \mathbb{R}$ and the shifted time $\tilde{t} = t + \frac{1}{k}$, a formula of the form (3.17) was already discussed for the well-posedness of (1.1) in Section 2. When $a = 1$, a similar argument shows that the solution is constant in the domain $V$. For $-1 < a < 1$, we construct particular solutions from (3.17). For simplicity of presentation, set
\[
\mu_a = \frac{1-a}{1+a}, \qquad \theta_k = \frac{1+k}{1-k}.
\]
We take the special function $f(z) = z^{\ln\mu_a/\ln\theta_k}$, which satisfies (3.18).

Example 1. If $a = k$, then $\frac{\ln\mu_a}{\ln\theta_k} = -1$ and $f(z) = \frac{1}{z}$. Thus, with a constant $c$, (3.16) yields an explicit solution; putting $u_0(x) = u(x,0)$ and $u_1(x) = u_t(x,0)$, $u$ is a solution to system (1.1) with the initial data $(u_0,u_1)$. Therefore the energy decays at a rate of first-order polynomials.

Example 2. If $a = a_1$, then $\frac{\ln\mu_a}{\ln\theta_k} = -\frac{1}{2}$ and $f(z) = z^{-1/2}$. With a constant $c$, (3.16) again yields a solution; given $u_0(x) = u(x,0)$ and $u_1(x) = u_t(x,0)$, $u$ is a solution to system (1.1), and its energy is conserved.
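As a check on these examples, the following computation (a reconstruction using only the definitions $\mu_a = \frac{1-a}{1+a}$ and $\theta_k = \frac{1+k}{1-k}$ above; the identification of the conservative example with $a = a_1$ is an inference consistent with Theorem 1.2(2)) verifies the two exponents:
```latex
% a = k:
\[
\mu_k = \frac{1-k}{1+k} = \theta_k^{-1} \;\Longrightarrow\; \frac{\ln\mu_k}{\ln\theta_k} = -1,
\qquad f(z) = z^{-1}.
\]
% a = a_1 = (1 - sqrt(1-k^2))/k, with s = sqrt(1-k^2) = sqrt(1-k) sqrt(1+k):
\[
1 - a_1 = \frac{\sqrt{1-k}\,\big(\sqrt{1+k}-\sqrt{1-k}\big)}{k}, \qquad
1 + a_1 = \frac{\sqrt{1+k}\,\big(\sqrt{1+k}-\sqrt{1-k}\big)}{k},
\]
\[
\mu_{a_1} = \sqrt{\frac{1-k}{1+k}} = \theta_k^{-1/2}
\;\Longrightarrow\; \frac{\ln\mu_{a_1}}{\ln\theta_k} = -\frac{1}{2}, \qquad f(z) = z^{-1/2},
\]
% matching first-order polynomial decay at a = k and conservation at a = a_1.
```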
For any fixed $k$, let
\[
g_k(a) = \frac{2\ln\mu_a}{\ln\theta_k} + 1 = \frac{2\ln\frac{1-a}{1+a}}{\ln\frac{1+k}{1-k}} + 1
\]
denote a function of $a$. It is easy to find that $g_k(\cdot)$ is strictly decreasing for $-1 < a < 1$. Moreover, (3) if $k < a < 1$, then $g_k(a) \to -1$ as $a \to k$ and $g_k(a) \to -\infty$ as $a \to 1$. According to the above argument, we conclude that the energy of (1.1) decays or grows only at a polynomial rate. With regard to $a < -1$ or $a > 1$, from $f(\theta_k z) = \mu_a f(z)$ we have $f(\theta_k^2 z) = \mu_a^2 f(z)$, and the same conclusion can be obtained similarly.

From (4.1), we obtain expressions which, substituted into the first equation of (4.2), yield a relation (a); in the same manner, the third equation of (4.2) yields a relation (b); at last, from (4.1) and the second equation of (4.2), we obtain the conditions (c$_1$) and (c$_2$). In the following, based on (c$_1$) and (c$_2$) and using (a) and (b), we extend $f$ from the interval $(-1,1)$ to $(-(1-k)\tau - 1, +\infty)$, which is its required extension interval. First, we extend $f$ toward the left using (b). To begin with, we rewrite (b) as (b$'$). In this case, $f\big((1+k)(t-\tau)+1\big)$ is known by (c$_1$) and (c$_2$), because its argument lies in $(-1,1)$ when $t \in (0,\tau)$. Additionally, $g_0$ is a given function, so all functions on the right-hand side of (b$'$) are known. Treating the value of $f$ on the left-hand side of (b$'$) as the quantity to be extended, we can directly define the value of $f$ on part of $(-(1-k)\tau - 1, -1)$. We extend $f$ in several steps: in Step 2, using (b$'$) again, we define $f$ on the next subinterval; in Step 3, likewise. Repeating the above process up to the $n$-th step, we successively define the value of $f$; in order to reach the target extension interval $(-(1-k)\tau - 1, -1)$, we impose a condition on $n$. Having extended $f$ on $(-(1-k)\tau - 1, -1)$, we start to extend $f$ toward the right using (a).

(1) $\mu_1 \ne \pm 1$. Relation (a) is equivalent to (a$_1$). We want the functions on the right-hand side of (a$_1$) to lie in intervals where $f$ is already known; then the left-hand side of (a$_1$) defines the value of $f$. In this case, we use two different time spans to extend $f$. Using (a$_1$), we can define the value of $f$ step by step. Step 3. For fixed $\tau$, there exists a unique positive integer $N$ such that $(1+k)(t-\tau) + 1 < (1-k)t - 1$ for $t < t_N$. For $t < t_N$, we divide $(0,t_N)$ into $\bigcup_{n=1}^{N}(t_{n-1},t_n)$, where $t_0 = 0$ and $t_n = \sum_{i=1}^{n} \frac{2(1+k)^{i-1}}{(1-k)^{i}}$. For every $(t_{n-1},t_n)$, $1 \le n \le N$, similarly to Step 1 or the second part of Step 2, we can define the value of $f$ on the corresponding interval. For $t > t_N$, let the time span be $\tau$, that is, $t_n = t_{n-1} + \tau$ for $n > N$. For every $(t_{n-1},t_n)$, $n > N$, using (a$_1$) again, we can define the value of $f$ on the corresponding interval. In summary, for $\mathbb{R}^+ = \bigcup_{n=1}^{+\infty}(t_{n-1},t_n)$, we can define the value of $f$ on $(1,+\infty)$ by (a$_1$). (2) $\mu_1 = 1$. From (a), we get that a reduced relation (a$_2$) holds for almost every $t > 0$. Similarly to Case 1, with the fixed time span $\tau$, we can define the value of $f$ on $(1,+\infty)$ using (a$_2$). Therefore, $g_0, u_0, u_1$ must satisfy certain compatibility conditions for $f$ to be well defined. Since the remainder of the proof is very similar to what we discussed earlier, we omit it.

5. Proof of Theorem 1.3. Proof. It is easy to check the form of the derivative of the energy; we estimate it in three cases, using Cauchy's inequality. From the above argument, we can see that the sizes of the coefficients $\mu_1, \mu_2$ and the value of $\tau$ have an impact on the stability of the system.
Comparing (5.2), (5.4), (5.6) with (5.3), (5.5), (5.7), respectively, we find that: (F1) stabilization of the system requires that $\mu_1$ belong to the range where the system is stable without time delay, and the coefficient $\mu_2$ of the time-delay term cannot be large; (F2) no matter what $\mu_1$ is, the system loses stability if the time-delay coefficient $\mu_2$ is too large. Remark 2. In estimating (5.1) above, we took the two coefficients to be both negative or both positive. If one coefficient is positive and the other negative, the situation is more complicated and may exhibit complex oscillatory behaviour in itself.
2021-03-26T01:16:23.230Z
2021-03-25T00:00:00.000
{ "year": 2021, "sha1": "5d9a034d6f29171770c857fdc8eb0471c5e33a6f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "5d9a034d6f29171770c857fdc8eb0471c5e33a6f", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
258349873
pes2o/s2orc
v3-fos-license
MANAGEMENT OF INTERNATIONAL TRADE IN THE CONTEXT OF ENSURING INNOVATIVE DEVELOPMENT: The activities of companies offering their products and services in foreign markets face several unique challenges: domestic and international regulation, global competition, and additional requirements at several levels. Therefore, developing best practices and tracking the evolution of trading processes, strategies, regulations, and technological innovations are mandatory for continuity and prosperity in the international market. The article carries out a bibliometric analysis of publications with the keywords «international trade» and «innovations», covering the system of international commodity-monetary relations under innovative processes and the prospects of the international exchange of scientific and technical knowledge and technologies. The article aims to investigate the functional link between international trade and the level of a country's innovative development and to confirm the hypothesis about the significance of this link. The following methodological tools were used: Canonical Correlation Analysis and a Multivariate Panel Data Regression Model. Forty-four European and Asian countries are investigated over the period from 2006 to 2021. The array of input variables includes a set of indicators, six of which characterize the innovative development of the studied countries, five represent international trade, and three control for and describe the socio-economic development of nations. The revealed correlation-regression dependencies generally provide a basis for confirming the hypothesis of a direct relationship between a country's level of innovative development and its positioning in the field of international trade. The obtained results proved the presence of a direct statistically significant relationship between High technology exports and both Import and Current account balance, and between the Innovation index and the External balance on goods and services. An inverse functional dependence was found between Patent applications by residents and the Current account balance. In the future, the proposed methodology should be adapted to develop a functional basis for analyzing the impact of innovations on the ecosystems of specific enterprises, and to consider the national aspects of conducting business and the state policy implications of supporting the digitization of crucial stages of production.

Introduction. The development of the modern global world has been marked by significant technological progress in the 21st century. This period saw the transition to the knowledge economy (the digitalization of society), which determines the specifics of the development of cities, regions, etc.
The informatization of society causes radical changes that shape the economies of developed countries and exert a significant influence on the sphere of international trade. Globalization processes now play an important role: international trade is a key factor in profitability and a way of doing business, and it stimulates the development of innovations. Innovative technologies accelerate humanity's economic, scientific, and technical development; however, they can also intensify competition between business entities. The globalization revolution thus creates the interpenetration and merger of the economies of different countries, and under these conditions a number of rules must be followed. Requirements and procedures in the field of international trade are constantly changing, so it is necessary to stay aware of these changes in order to avoid delays in both the production and the distribution of goods. International trade is a complex process, so it is important for exporting companies to adopt innovative best practices and take advantage of lower logistics costs to achieve greater market share and succeed in opening new markets.

Literature Review. The relevance of managing international trade in the context of ensuring innovative development is confirmed by the growing interest of the international scientific community in this issue, reflected in the positive dynamics of the number of relevant publications in the international databases Scopus and WoS (Figure 1). As these figures show, publications on innovation and international trade are more numerous in absolute terms in the Scopus database. Most publications come from scientists in China, the USA, and the United Kingdom (Figure 2). According to queries for the keywords «innovations» and «international trade» in the Scopus scientometric database, scientific publications are distributed as shown in Figure 3. In this context, it is worth noting the most interesting results published in recent years. A group of scientists led by Amaral et al. (2023) examines the importance of industrial capabilities in Spain and Portugal during the crisis caused by the spread of the COVID-19 pandemic. In particular, the authors emphasize the high demand for the production of ventilators. Countries for which the production of these devices is atypical had not only to establish their own production but also to establish interaction with international contractors to assemble complete units. The authors also propose a new theory of how countries can identify core capabilities to enhance the dynamic stages in areas critical to their social well-being. The impact of COVID-19 on the shift of emphasis in the conduct of international trade is also reflected in the works of Wu et al.
(2022). The essence of the connection between international trade and a country's level of financial development is revealed in a scientific article by Choi (2023). Using the example of companies of various sizes operating in Taiwan, the study examined how the level of a country's financial development affects the production capacity of companies operating in foreign markets. Gaps in quality and exports between more and less productive exporting companies were found to widen as a country's financial system improved. Blockchain and cryptocurrency are promising directions for modernizing the international trade process: Tandra and Suroso (2022) propose a payment system (a stablecoin) for international trade that can operate without the supervision of banks; the influence of electronic commerce systems on international trade processes is considered by Wang et al. (2021); and Siddik et al. (2021) studied the connection between blockchain technology and international trade.

Since the world today is a network of closely intertwined trade relations between various companies, many articles address the structure of and changes in the Global Value Chain (GVC). Ito et al. (2023) investigated how a country's positioning in the value chain affects the innovative activity of companies. Japan, a key participant in Asian value chains during 1995-2011, found itself on the periphery, yielding to several companies operating in China. Japanese companies involved in patent activities participated in this study. The analysis results show that an increase in the central role of Japanese sectors, that is, as key suppliers, is positively associated with increased patent applications by companies in these sectors; thus, firms benefit from downstream markets. A similar study was conducted for the USA by Zhou et al. (2023), for France by De Rassenfosse et al. (2022), for the group of countries of the European Patent Office by Ye et al. (2022), and for Germany by Chen et al. (2021).

Studying the connection between a country's innovative development and its eco-orientation and international positioning in this direction is also one of today's urgent problems. The work of Chen et al. (2022) empirically investigates the influence of the flow of innovative technologies on the eco-efficiency of a country through decomposed, diversified flow channels. The obtained results demonstrate that import-related technologies positively affect the environmental efficiency of the country's manufacturing industries. The influence of eco-innovation and financial inclusion on the sustainable development of international trade is the topic of the scientific work of Ma et al. (2022), while Meng et al. (2022) estimated the impact of trade, green innovations, and renewable energy.

New technologies are changing the supply and demand sides of international trade. The final consumer of goods increasingly affects the work of companies, forcing them to adapt to consumer needs in all categories, from design and sales markets to delivery methods.
K. Schwab (2015) identified four main effects that the fourth industrial revolution could have on business: rising customer expectations, product quality improvement, joint innovation, and new forms of organization. All these innovations will soon completely change how people live and will also affect consciousness (that is, gradually alter the very nature of man). People will gain free time not only through robotics but also through new ways of buying and delivering goods. It will be possible to order individual designs and assemblies of products and services, pay for them instantly, and have drones deliver the goods directly to the buyer's location. Many markets will operate bypassing various intermediary structures such as brokers and dealers. The need for cheap unskilled labor will gradually disappear, and people will live longer thanks to the automation of treatment and health care processes.

In the era of the globalization of society, food security is one of the research priorities. Schram and Townsend (2021) study the issues that arise at the intersection of international trade, investment, and food systems. The interaction between these constituent parts should primarily solve the problems of food systems arising under the influence of various factors. At this time, policy efforts must be directed at preparing future investment and trade systems to create a food system that contributes to the health of people and the planet.

Innovation in international trade is also manifested through the creation of software solutions that allow the automation of some processes. European scientists led by Polanec et al. (2022) presented an approach to determining the marginal values of the development of international trade, production, innovation, the use of ICT by enterprises, and so on. For this, a coverage ratio was used to measure the analytical solutions provided for determining the boundary limits. Based on the results of surveys of enterprises operating in the EU, an application was developed that illustrates the approach to determining the limits; an important practical consequence is the ability to set industry restrictions. In addition, the role of innovative technologies in international trade is considered in the works of Klevenhusen et al. (2021) and Ben Hassine and Mathieu (2021). An article by Shadikhodjaev (2021) examines the regulation of trade-as-a-service, intellectual property, and paperless trade and concludes that the principle of technology neutrality should be universally accepted, complemented by policy flexibility where appropriate. The problem of managing international trade in the context of ensuring innovative development has not been sufficiently studied. This motivates the research objective: to test the hypothesis of a direct relationship between a country's innovative development and its positioning in international trade.

Methodology and research methods. In order to test the proposed hypothesis, the research is conducted in two steps. In the first step, a canonical analysis is carried out to determine the relationship between two sets of features that characterize the object under study. Among the advantages of this method is the possibility of determining the influence of several factors on several indicators simultaneously.
Canonical analysis is one of the regression methods. The correlation coefficient r (1), the coefficient of determination R², and the regression coefficients are key indicators of regression analysis. Canonical analysis captures the relationship between a group of predictor variables and a group of criterion variables. Pairwise correlation coefficients are used to determine the linear relationship between two features x and y. When it is necessary to detect dependencies between an indicator x0 and indicators x1, ..., xn, a multiple correlation coefficient is used as a characteristic of this dependence, corresponding to the correlation coefficient R(x0, x̂0), where x̂0 = β0 + β′x is the best linear prediction of x0. The task of canonical analysis is to find normalized linear combinations of the two sets, U1 = a1x1 + ... + apxp and V1 = b1y1 + ... + bqyq (2-3), such that the canonical correlation R = cor(U1, V1) is maximal (that is, the weighting coefficients maximize the correlation).

In the second step of the study, a multivariate panel regression model is built, which allows formalizing the functional dependencies between the studied variables. Panel data are rolling spatial datasets in which each object appears multiple times (monthly, quarterly, annually, etc.) over a selected period. The use of data in this format opens up several prospects for economic research. Panel data make it possible to account for the heterogeneity of the economic entities participating in the study. In addition, using panel data in the analysis has several other advantages:
− it allows analyzing a set of economic issues that cannot be expressed through time series or spatial data alone;
− it prevents the aggregation bias that may occur in the analysis of time series and cross-sectional data, where unobserved individual characteristics of objects (data heterogeneity) are not taken into account;
− it enables the researcher to analyze a larger number of observations, which increases the degrees of freedom and reduces both the dependence between explanatory parameters and the probability of standard errors in the estimates;
− it makes it possible to avoid specification errors that arise from omitting relevant variables from the model.
Given the listed advantages of panel data, one of their weaknesses is that self-selection bias may be present; if selection happens for random reasons, this bias may not occur. The formalized form of the multivariate regression model is
y_it = α + X_it·β + v_it, (4)
where i is the serial number of the object; t is the period of investigation; α is the constant term; β is a vector of coefficients of dimension K×1; X_it is a row vector of the matrix of K explanatory variables; and v_it is the regression error, decomposed as
v_it = u_i + ε_it, (5)
where u_i are the individual effects and ε_it are the residuals of the model. When studying panel data, two main models can be constructed: models with fixed effects and models with random effects. In fixed-effects models, the individual effects are treated as non-random parameters (they uniquely influence the dependent variable), while the random-effects model retains an element of randomness in the individual effects. Which model is better suited for the selected data set is decided with one of the specification tests (Wald, Breusch-Pagan, or Hausman) (Kolenikov, 2001).
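As an illustration of the first step, here is a minimal Python sketch (not the authors' actual STATISTICA workflow; the data shapes, the synthetic data, and the variable counts are assumptions based on the description above) that extracts the first canonical root and its correlation with scikit-learn:
```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 44 * 16                          # 44 countries x 16 years (2006-2021)
X = rng.normal(size=(n, 6))          # six innovation indicators (left set)
B = rng.normal(size=(6, 7))          # hypothetical loading matrix
Y = X @ B + 0.5 * rng.normal(size=(n, 7))  # trade set (Eq. (9) uses seven)

cca = CCA(n_components=1)
U, V = cca.fit_transform(X, Y)       # canonical variates for root 0
R = np.corrcoef(U[:, 0], V[:, 0])[0, 1]    # canonical correlation R
print(f"canonical correlation R = {R:.3f}")
```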
Results. The Canonical Analysis module of STATISTICA 12 was used. The canonical correlation coefficient R is 0,98 (Table 1; source: developed by the authors), which indicates a high level of association, and the Chi-Square value of 1514,253 (p < 0,05) confirms that R is statistically significant. According to Table 1, 100% of the variance of the innovation development indicators (Left Set) and 99,125% of the international trade indicators (Right Set) is accounted for. The Total redundancy indicator means that the variation in the Left Set innovation development indicators explains more than 52,29% of the variation in the international trade indicators. The closeness of the relationship between the groups of studied indicators is also confirmed graphically (Figure 4). The p-values of the Chi-Square test show that roots 0-4 are statistically significant; the canonical root 0 shows 98,056% of the variance, and root 1 only 66,428%. Thus, only canonical root 0 (Table 2) is considered in further calculations. The first root plays a key role in this study. The canonical weights of the first root (Tables 4 and 5; source: developed by the authors) were used to build the corresponding canonical variables (8) and (9):
X = 0,06x1 + 0,05x2 − 0,05x3 + 0,79x4 − 0,11x5 + 0,21x6 (8)
Y = 18341,2y1 − 20081,3y2 + 0,003y3 − 0,005y4 + 0,5y5 − 0,0012y6 + 2670,8y7 (9)
The next step of the investigation involves the construction of a multivariate panel regression, for which STATA 12 was used. Taking into account the results of the canonical analysis, to test the hypothesis of a functional dependence between a country's innovative development and international trade, three indicators of the innovative development of the studied countries act as independent variables (Innovation index (I_S1), High technology exports (I_S4), Patent applications by residents (I_S6)), together with three control variables (G1-G3). The role of dependent variables is performed by three indicators of international trade: Import (I_T1), Current account balance, billion USD (I_T5), and External balance on goods and services, USD (I_T7). The Export indicator (I_T2) was excluded from the dependent variables because of the presence among the independent variables of High technology exports (I_S4), which is directly correlated with it. In this way, three multivariate regression equations (10)-(12) are constructed, each of the form
I_Tk = β0 + β1·I_S1 + β2·I_S4 + β3·I_S6 + β4·G1 + β5·G2 + β6·G3 + u_i + ε_it, for I_Tk ∈ {I_T1, I_T5, I_T7}. (10)-(12)
To choose the type of panel regression (with random or fixed effects), the Hausman test was used: if the p-value of this criterion is less than 0,05, a regression model with fixed effects is built; otherwise, a model with random effects. Table 6 presents the results of the Hausman test (source: developed by the authors). Thus, the study constructs two panel regression models with fixed effects and one model with random effects. Tables 7-9 present the results of the regression models (source: developed by the authors).
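A hedged Python sketch of this second step follows (the file name, column names, and data layout are assumptions for illustration; the paper's actual estimates come from STATA 12). It fits fixed- and random-effects models with the linearmodels package and computes the classical Hausman statistic by hand, since linearmodels does not ship one:
```python
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

# Hypothetical long-format panel: one row per country-year.
df = pd.read_csv("panel.csv").set_index(["country", "year"])

exog = df[["I_S1", "I_S4", "I_S6", "G1", "G2", "G3"]]
dep = df["I_T1"]                      # repeat for I_T5 and I_T7

fe = PanelOLS(dep, exog, entity_effects=True).fit()   # fixed effects
re = RandomEffects(dep, exog).fit()                   # random effects

# Hausman test: H = (b_fe - b_re)' [V_fe - V_re]^{-1} (b_fe - b_re)
b = (fe.params - re.params).values
V = (fe.cov - re.cov).values
H = float(b @ np.linalg.inv(V) @ b)
p = stats.chi2.sf(H, df=len(b))
print(f"Hausman H = {H:.2f}, p = {p:.4f}")  # p < 0.05 -> prefer fixed effects
```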
The obtained results of the regression modeling make it possible to construct the corresponding three regression equations (13)-(15). According to the obtained values of the F and χ² criteria and the corresponding p-values, all three constructed models are statistically significant. The quality of the built models is also confirmed by the coefficients of determination R², which show a good result for all models (more than 70% of the variation of the dependent variables is due to changes in the independent variables involved in the study). Thus, the simulation results are reliable and can be used to make predictions. Statistically significant relationships (p-value less than 0,05) are observed between the following variables:
− High technology exports (I_S4) and Import (I_T1): with an increase in I_S4 by one unit, I_T1 increases by 1412,439 bln USD;
− High technology exports (I_S4) and Current account balance (I_T5): with an increase in I_S4 by one unit, I_T5 increases by 474,016 bln USD;
− Patent applications by residents (I_S6) and Current account balance (I_T5): with an increase in I_S6 by one unit, I_T5 decreases by 6,979 bln USD;
− Innovation index (I_S1) and External balance on goods and services, billion USD (I_T7): with an increase in I_S1 by one unit, I_T7 increases by 152,001 bln USD.

Conclusions. Despite the difficulties and obstacles, international trade continued to work effectively during and after the COVID-19 pandemic, and the pessimistic forecasts of experts were never realized. The positive factor that restrained a rapid decline of international trade and the world economy was the rapid adoption of the new "rules of the game" by all participants in the remaining international commodity markets. A contribution to the investigation of the role of innovation in international trade was made by Silva et al. (2022), who investigated the relationship between innovative products and the export performance of small and medium-sized companies; their results are applicable to the formation of enterprises' business strategies, but the recommendations can be used only at the micro level, unlike the results of this article. A study by Huang (2022) investigated the correlation between international trade and innovations in private companies; unlike that study, the present research identifies the functional influence of the determinants of international trade. Considering the obtained conclusions, this can serve as the basis for further research.
In accordance with the article's purpose, a complex econometric model was built to confirm the hypothesis of a direct dependence between countries' innovative development and their positioning in the field of international trade. The objects of the investigation are forty-four European and Asian countries over the period from 2006 to 2021. At the first stage of the study, a Canonical Correlation Analysis was performed between a group of indicators that characterize innovative development and the level of development of international trade of the countries under study. It was determined that the variation in indicators of innovative development explains more than 52% of the variation in international trade indicators. The strongest correlations are observed between the following indicators: High technology exports and the international trade indicators Import, Export, Current account balance, and External balance on goods and services; and Patent applications by residents and the same four international trade indicators (Import, Export, Current account balance, and External balance on goods and services). In the second stage, a Multivariate Panel Data Regression Model was built. The obtained results proved the presence of a direct statistically significant relationship between High technology exports and both Import and Current account balance, and between the Innovation index and the External balance on goods and services. An inverse functional dependence was found between Patent applications by residents and the Current account balance. Thus, the identified correlation-regression dependencies generally provide a basis for confirming the hypothesis of a direct relationship between a country's innovative development and its positioning in the field of international trade.

The obtained results are limited to the macro level. In future investigations, the proposed methodology should be adapted to develop a functional basis for analyzing the impact of innovations on the ecosystems of specific enterprises, taking into account the national aspects of conducting business and the state policy of supporting the digitization of crucial stages of production. In this way, it will be possible to form conglomerations of companies according to their level of informatization within a country, which will make it possible to identify potential business clusters that will contribute to the development of international trade.

Figure 1. The dynamics of scientific publications (keywords «innovations» and «international trade») presented in the databases WoS (a) and Scopus (b). Source: developed by the authors based on the results of the Scopus and WoS databases.
Figure 2. Scientific publications in the leading countries by the keywords «innovations» and «international trade» presented in Scopus. Source: developed by the author based on the results of Scopus.
Figure 3. Distribution of scientific publications by thematic groups according to the keywords «innovations» and «international trade» presented in Scopus. Source: built by the author based on the results of Scopus.
Figure 4. Scatter diagram of canonical variables of the studied groups of indicators of innovative development of countries and the level of international trade. Source: developed by the authors.
Figure 5. Plot of eigenvalues. Source: developed by the authors.
Table 2. Statistical characteristics of selected canonical roots. Source: developed by the authors.
2023-04-27T15:13:40.093Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "c875586982debee499fa159cc1a6e281dd2331b9", "oa_license": "CCBY", "oa_url": "https://armgpublishing.com/wp-content/uploads/2023/03/A675-2023-08_Huseynova-et-al_.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "005d845452bb42ff5c7ff7136a79a12df2a1373a", "s2fieldsofstudy": [ "Economics", "Business" ], "extfieldsofstudy": [] }
234778449
pes2o/s2orc
v3-fos-license
Industrial linkage and spillover effects of the logistics service industry: an input-output analysis This study applied input-output analysis to examine the industrial linkage effects of the logistics service industry based on data collected from input-output tables provided by the World Input-Output Database. The results showed the detailed industrial linkage effects of the logistics service industry, indicating that several logistics sectors (transportation, storage, and handling) are not only interdependent but also form a service ecosystem. The study results provide new insights on the industrial linkage effects of the logistics service industry and their ripple effects throughout the economy. The results will be a valuable source of knowledge for establishing service industrial policies for both the logistics industry and a country's economy as a whole.

Introduction. The COVID-19 pandemic has brought unprecedented experience and pain to the world. For many organizations, global supply chains were disrupted, and consumption patterns changed dramatically as industry and distribution structures were altered. Online orders surged since consumers practiced the shelter-in-place policy, and curbside pick-up services flourished as customers preferred contact-free shopping (Korea Logistics News 2020). New logistics service systems had to be developed to support these new types of retail services. This is not a temporary phenomenon, but the start of a paradigm shift for organizations, global supply chains, industrial structures, and eventually the national economy (ChosunBiz 2020). The pandemic crisis amplified the importance of services as the core of organizational competitiveness, both in domestic and international markets (McKee 2008). South Korea, the location of this study, has been lauded as a model country for successfully controlling the COVID-19 crisis (CLO 2020). Korea is seizing the opportunity to strengthen the competitiveness of its industries through strengthened global supply chains enabled by advanced service technologies. Without the support of logistics service companies, the chaos caused by COVID-19 would have been much more severe; the Korean logistics service industry has contributed significantly to minimizing the damage caused by the spread of COVID-19. The logistics service industry is a strategic priority for enhancing the quality of major activities within the supply chain, such as supply, manufacturing, distribution, and consumption, as well as for improving the competitiveness of the entire system (Mentzer et al. 2001). Logistics strategies influence the financial performance of organizations through supply chain agility in a dynamic business environment (Hwang and Kim 2019). In addition, the logistics service industry has high value-added and employment inducement effects. Thus, an efficient logistics service system has a significant impact on lean management of the entire supply chain and contributes to the competitive advantage of the manufacturing and distribution industries (Min et al. 2019). From the macro perspective, the impact of logistics services and their ripple effects is enormous for industries and the national economy, especially for export-oriented economies. The Korean logistics service industry, which has grown from the country's transportation service sector, has expanded to various related service fields.
Today, the logistics service industry includes transportation, storage, distribution, assembly, packaging, and logistics information management. The logistics service network has expanded, both in scale and scope, with stretched global supply chains and transportation service systems (Kim et al. 2016). Moreover, the strategic importance of the logistics service industry has become significant with the increasing demand for products of diverse industries. To gain competitive advantage, logistics service firms need to form collaborative relationships with partners and their customers (Park and Kim 2020). The primary focus of previous studies on the logistics service industry has been on shipping or transportation services; there is a paucity of research on the macro, industry-level perspective of logistics services. Thus, this study contributes to the literature as it investigates the logistics service industry based on input-output analysis (IOA). IOA assumes that demand for intermediate inputs is a linear function of output, expressing that a production increase in the output of one sector drives a continuous increase in demand for the products of other sectors. IOA has been recognized as a useful approach to analyze and predict overall economic impact, as it has the features of a general equilibrium model that emphasizes the relationship between sales and purchases of inputs (Miller and Blair 2009).

More specifically, we analyze the linkage effects of the entire logistics service industry, which encompasses such functions as land transport, pipelines, water transport, air transport, courier services, warehousing, and handling. For the empirical study, we used Korean logistics service industry data from the National Input-Output Tables of the World Input-Output Database (WIOD) released in November 2016. IOA derives input coefficients, production inducement coefficients, backward linkage effects, and forward linkage effects of the Korean logistics service industry. The paper is organized as follows. In Sect. 2, we review the relevant literature to provide the theoretical support for the study. Section 3 presents the research methodology used in the study, detailing the procedure to derive various coefficients by IOA and the data applied. In Sect. 4, we present the results and discussion of the analysis. We conclude the paper in Sect. 5 with a summary of results, implications, limitations of the study, and future research needs.

Literature review. There have been numerous studies on industry linkage effects for various industries; some of the most relevant are reviewed here to provide the theoretical foundation for our study. Choi et al. (2008) applied IOA to identify the impact of the maritime freight transportation service on Korea's national economy. The authors investigated the industrial linkage effects of the maritime freight transportation service with 20 different sectors and extracted production-inducing effects, value-added-inducing effects, and supply-shortage effects. Lee and Yoo (2016) analyzed the economic impacts of four transportation service modes (rail, road, water, and air) in Korea using IOA and found that the production-inducing effect of investment in transportation services was the largest in the petroleum and transportation equipment sectors, while the rail and road transportation service sectors had the greatest supply-shortage effect.
There have been more studies on the logistics service industry in Korea, including those by Park et al. (2009), Kang et al. (2011), and Park (2019). Most of the data used in these studies were extracted from the Korean Input-Output Tables issued by the Bank of Korea. Research on Korea's logistics service and related industries has tended to focus on the shipping and port logistics sectors; the studies by Park (2019) and Kang et al. (2011) were the only ones that paid attention to the industrial spillover effect of the logistics service industry using the WIOD or OECD's ISIC Input-Output Table. Chiu and Lin (2012a) investigated the role and influence of the transportation service sector on the national economy of Taiwan using IOA. The results showed that the transportation service industry in Taiwan had the capability to absorb products of related industries rather than just being used as an input by other industries, indicating its strong role in supporting other industries. Road transportation services also demonstrated comparatively more strength in supporting other domestic industries than in being supported by others. Zhao et al. (2007) performed a comparative analysis of the characteristics of industry relevancy and industry spread in the transportation service industry and compared five transportation service modes between China and the USA. They concluded that China's transportation service industry has played an increasingly important role in the national economy: it is one of the industries with a high ratio of intermediate demand, and its drawing power on relevant industries is much greater than that of its US counterpart. Morrissey and O'Donoghue (2013) examined the linkages and production effects of the Irish marine service sector on the national economy. Disaggregating the Irish IO table for 2007 to include 10 additional marine service sectors, their paper represented the first effort to quantify the industrial spillover effects and employment multipliers of the marine service sector. The analysis found that Irish marine service sectors, notably the maritime transportation service sector, played an important role within the wider Irish economy. Most recent studies that analyzed Korean industrial characteristics using international input-output tables targeted the information and communication technology (ICT) industry; studies such as Kim and Lee (2020) are typical. These studies focused on Korea's ICT industry from various perspectives by analyzing industry linkage effects using the reliable WIOD and OECD data, and they conducted comparative analyses among selected countries to derive meaningful implications. Yun et al. (2017) used the OECD data to classify Korea's ICT and automotive industries into service and manufacturing sectors to identify the differences in industrial linkage effects. Other work compared and analyzed the competitive advantage, catch-up, and industrial linkage effects of the ICT industries of Korea and India, while Li et al. (2019) studied the competitive advantage and industrial impact of ICT industries in China. Min et al. (2019) also used the WIOD data to compare backward and forward linkage effects of the ICT and mechanical equipment industries among five countries: Korea, China, the United States, Germany, and Japan. Kim and Lee (2020) used the WIOD data to compare and analyze the production inducement effects of ICT services, ICT manufacturing, and the chemical and medical industries as major industries in Korea and the Netherlands.
A summary of the studies, mostly recent ones, on industrial linkage effects is shown in Table 1. While there are numerous existing studies using IOA for various industries, there has been limited research examining the economic ripple effects of specific logistics service sectors using IOA, and there is a lack of awareness of the importance of the logistics service industry in Korea's industrial structure. This study fills this gap in the literature by conducting research at the level of the overall logistics service industry within Korea's industrial structure. To this end, this study analyzes the economic ripple effects of the overall logistics service industry, which encompasses various logistics functions such as land transport, sea transport, air transport, courier, storage, and handling, and suggests implications for improving industrial competitiveness.

Research design. The industrial linkage effect was first explored in 1936 by Leontief, who integrated Walras's General Equilibrium Theory with a theoretical model based on empirical economic data for analyzing industrial associations (Leontief 1941, 1970, 1986; Miller and Blair 2009). The industrial linkage effect, also referred to as input-output analysis or the industrial spillover effect, is useful for analyzing specific economic structures and associations among industries, a topic that is outside the realm of macro analysis. IOA measures the ripple effect of changes in demand on production activities, assuming that the input structure of products is stable for a certain period of time. It is fundamental to analyze the impact on final demand, the exogenous variable, through the measurement of the interrelationships between sectors. Applying IOA is a useful way to assess backward and forward linkages, because it enables analysis of inter-industry relationships in the overall industry structure with a focus on the logistics service industry. Industrial linkage effect analysis is a useful methodology for identifying and adjusting policy directions of industrial structures, because it enables an assessment of the ripple effect of each industry's contribution to the economy, such as production, employment, and income. The industrial linkage effect measures the ripple effect of changes in an industry on the production activities of other industries in a certain time period; the association of production activities with added value and income creates the ripple effect in the entire national economy. In this study, backward and forward linkage effects of the logistics service industry are derived from the input coefficients and production inducement coefficients calculated in the input-output analysis. The analysis procedure is shown in Fig. 1.

Input coefficient. The input coefficient represents the measure of raw materials and intermediate goods used to produce a unit of output in each sector (Cartwright et al. 1981; Miernyk 1965; Richardson 1972). The total output depends on the size of the final demand, and the input coefficient mediates between the size of the final demand and the level of total output. The input coefficient is expressed as a_ij = X_ij / X_j, where X_ij is the intermediate demand for the product of industry i used as an input by industry j, and X_j is the total output of industry j.
Production inducement coefficient. The input coefficient is the parameter used to compute the production inducement coefficient. When the number of industrial segments is large, however, it is difficult to measure the infinite chain of direct and indirect production ripple effects generated by one unit of output using input coefficients alone. The production inducement coefficient is therefore computed using an inverse matrix: it is expressed by (I − A)⁻¹, also called the Leontief inverse, where A represents the input coefficient matrix and I the unit matrix with ones on the main diagonal and zeros elsewhere.

Industry linkage effects. There are two approaches to analyzing the degree of industry interdependence using the production inducement coefficient. One is to investigate the industries that demand intermediate goods, and the other is to analyze the industries that supply intermediate goods; the former yields the backward linkage effect and the latter the forward linkage effect. Several studies have measured backward and forward linkage effects, including Chenery and Watanabe (1953), Rasmussen (1957), and Jones (1976), and the methodology suggested by Rasmussen (1957) has been used most widely. Table 2 presents the formulas for calculating industry linkage effects.

Derivation of production inducement coefficients. In order to analyze the forward and backward linkage effects of the logistics service industry, the time series data in WIOD's National IO Tables were used to derive the input coefficient matrix A, the unit matrix I, the (I − A) matrix, and the production inducement coefficient matrix (I − A)⁻¹. Five matrices were prepared for each year from 2000 to 2014. The input coefficient was calculated by dividing each industry's intermediate input requirement for the production process by the total input of that industry. The production inducement coefficient matrix (I − A)⁻¹ is the inverse of the (I − A) matrix, which is calculated by subtracting the technical (input) coefficient matrix from the unit matrix. By using the input coefficients to compute production inducement coefficients, we can set the level of change in final demand independently and estimate the corresponding level of change in production. The production inducement coefficient represents the cumulative multiplier that conveys all direct and indirect ripple effects on the production of each sector, assuming that final demand increases by one unit. The element (i, j) of the production inducement matrix indicates the total output increase in industry i due to the increase in the final demand of industry j, and the sum of each column of the production inducement matrix indicates the total output change in all industries due to a unit increase in the final demand of industry j. The production inducement coefficients of industries that use domestic raw materials as manufacturing inputs are expected to be higher than those of the service sector.
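To make the pipeline concrete, here is a toy numpy sketch of the derivation just described (the 3-sector flow matrix and output vector are made-up illustrative numbers, not WIOD data); it computes input coefficients, the Leontief inverse, and Rasmussen-style normalized backward and forward linkages:
```python
import numpy as np

Z = np.array([[20., 30., 10.],      # inter-industry flows X_ij
              [15., 10., 25.],
              [10., 20.,  5.]])
X = np.array([100., 120., 90.])     # total output by industry

A = Z / X                           # input coefficients a_ij = X_ij / X_j
L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse (I - A)^-1

col = L.sum(axis=0)                 # column sums: raw backward linkage
row = L.sum(axis=1)                 # row sums: raw forward linkage
backward = col / col.mean()         # normalized by the overall average (=1)
forward = row / row.mean()
print("backward:", backward.round(3), "forward:", forward.round(3))
```
Values above 1 then mark sectors with above-average linkage, which is exactly the reading applied to H49-H53 in the next section.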
The data used for our analysis covered the Korean logistics service industry for the 2000-2014 period, based on the National IO Tables of the World Input-Output Database (WIOD) (released in November 2016). The selected industry classifications are from H49 to H53 (see Table 3). Analysis of the linkage effect The backward linkage effect indicates the extent to which an industry's production of output requires intermediate inputs from other industries, and it is computed by dividing the sum of each column in the production inducement coefficient matrix $(I-A)^{-1}$ by the overall industry average. Regarding the five sectors of the logistics service industry from 2000 to 2014, backward linkage effects of H49 (Land Transport and Transport via Pipelines) ranged from 0.969 to 0.993 with an average of 0.98; H50 (Water Transport) ranged from 0.964 to 1.046 with an average of 1.00; H51 (Air Transport) ranged from 0.992 to 1.067 with an average of 1.03; H52 (Warehousing and Support Activities for Transportation) ranged from 1.055 to 1.125 with an average of 1.09; and H53 (Postal and Courier Activities) ranged from 1.038 to 1.142 with an average of 1.08. In general, assuming that the overall industrial mean of the backward linkage effect is 1, an industry is considered to have a high effect if its mean is greater than 1 and a low effect if its mean is less than 1. Among the five sectors of the Korean logistics service industry, H49 (Land Transport and Transport via Pipelines) showed the lowest mean effect at 0.98 and H52 (Warehousing and Support Activities for Transportation) showed the highest at 1.09. Overall, the backward linkage effect of the entire logistics service industry was between 0.98 and 1.09, which is close to the average of 1. Consequently, increasing the output of the logistics service industry by one unit is expected to cause a slightly higher-than-average ripple effect on the production of the upstream industries that supply intermediate goods. The warehousing and support activities for transportation (H52) and postal and courier activities (H53) sectors showed relatively higher backward linkage effects than the other transportation service sectors (H49, H50, and H51). The forward linkage effect refers to the change in demand for the logistics service industry's output when the production of each industry increases by one unit, and it is computed by dividing the sum of each row in the production inducement coefficient matrix $(I-A)^{-1}$ by the overall industry average. Regarding the five sectors of the logistics service industry over the study period, the forward linkage effect of H49 (Land Transport and Transport via Pipelines) ranged from 1.31 to 1.49 with an average of 1.38. The effect of H50 (Water Transport) ranged from 0.54 to 0.58 with an average of 0.56; H51 (Air Transport) ranged from 0.60 to 0.77 with an average of 0.68; H52 (Warehousing and Support Activities for Transportation) ranged from 1.26 to 1.42 with an average of 1.33; and H53 (Postal and Courier Activities) ranged from 0.66 to 0.76 with an average of 0.70. As with the backward linkage effect, the overall industrial mean is assumed to be 1; an industry whose forward linkage effect is greater than 1 can be considered to have a high effect. Among the five sectors of the logistics service industry, H50 (Water Transport) showed the lowest mean at 0.56, while H49 (Land Transport and Transport via Pipelines) showed the highest at 1.38.
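The normalization described above (column or row sums of the Leontief inverse divided by the overall industry average) corresponds to Rasmussen-style linkage indices; a minimal sketch with a hypothetical Leontief inverse L:

```python
import numpy as np

# Hypothetical Leontief inverse for a 3-industry economy.
L = np.array([[1.15, 0.28, 0.17],
              [0.28, 1.14, 0.32],
              [0.13, 0.18, 1.11]])
n = L.shape[0]

overall_avg = L.sum() / n                 # average of the column (or row) sums
backward = L.sum(axis=0) / overall_avg    # column sums / overall average
forward = L.sum(axis=1) / overall_avg     # row sums / overall average

# Values above 1 indicate above-average backward/forward linkage effects.
print(backward, forward)
```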
Overall, the forward linkage effect of the entire logistics service industry ranged from 0.56 to 1.38, which indicated noticeable differences among sectors of the logistics service industry. In detail, water transport (H50), air transport (H51), and postal and courier activities (H53) had low forward linkage effects, while land transport and transport via pipelines (H49) and warehousing and support activities for transportation (H52) had relatively high forward linkage effects. Table 4 and Fig. 2 present the summary of each sector's backward and forward linkage effects. This study calculated the average backward and forward linkage effects of each sector of the Korean logistics service industry over the 2000-2014 period. On the premise that the industry mean is 1, the backward and forward linkage effects of H49 (Land Transport and Transport via Pipelines) were 0.98 (− 0.02) and 1.38 (+ 0.38), H50 (Water Transport) had 1.00 (0.00) and 0.56 (− 0.44), H51 (Air Transport) showed 1.03 (+ 0.03) and 0.68 (− 0.32), H52 (Warehousing and Support Activities for Transportation) had 1.09 (+ 0.09) and 1.33 (+ 0.33), and H53 (Postal and Courier Activities) showed 1.08 (+ 0.08) and 0.70 (− 0.30), respectively. The results revealed that H50, H51, and H53 had high backward linkage effects, whereas H49 and H52 had high forward linkage effects. Based on the results, we can state that the majority of domestic demand for transport service belongs to land transport and transport via pipelines (H49), and that warehousing and support activities for transportation (H52) were performed in conjunction with this effect. Korea's land transport service industry is a representative form of transportation service that connects supply chain entities which sequentially carry out value-adding activities followed by sales contracts for goods (manufacturing-manufacturing/manufacturing-logistics). Storage and logistics facilities such as warehouses, logistics centers, distribution centers, and cargo terminals serve as nodes of land transport services. The high backward linkage effect of the storage and logistics service industry is possibly attributable to its strategic nature, including logistics outsourcing and third-party logistics (3PL) arrangements with the manufacturing and distribution industries. In contrast, domestic water and air transportation services were not well developed in Korea, because water transport and air transport have not been efficient given the narrowness of the nation's land area. The domestic demand for water and air transport services had been insignificant. Korea's coastal and air transport have primarily served trade and transshipment activities due to the country's unique geographic and environmental characteristics. In other words, the demand for water transport and air transport services tended to be limited to import and export businesses. In addition, fleets of ships and aircraft, the primary means of transportation that are essential to produce water and air transport services, require enormous amounts of investment. The postal and courier activities industry is based on the door-to-door service concept, meaning that it is basically an industry that needs to build a tight transportation service network on a nationwide scale. Accordingly, it is an industry that does not have high demand-generating effects relative to the total investment required for the logistics service infrastructure.
The trends over time in backward and forward linkage effects by sector in the logistics service industry are shown in Fig. 3. Linkage effects by industry sectors The backward linkage effect indicates the logistics service industry's demand for intermediate inputs from other industries when its production changes by one unit, while the forward linkage effect indicates the degree to which the products of the logistics service industry are used as intermediate goods by other industries; both vary from sector to sector within the logistics service industry. This study analyzed the production inducement coefficients of the five sectors of the Korean logistics service industry to identify the sectors with the highest backward and forward linkage effects. These sectors were divided into two groups: industries that were impacted the most during the entire study period (2000-2014) and industries that were impacted the most for only a certain period. This study selected the top 10% of industries with the highest forward/backward linkage effects for each logistics sector according to the analysis results. The selected industries were divided into two groups: one contains the industries that remained in the top 10% over the entire study period, and the other contains industries with the highest forward/backward linkage effects for only a certain period of time. Industries with high forward/backward linkage effects over the entire study period can be considered the significant forward/backward industries. On the other hand, among the industries with high linkage effects for only a certain period of time, an industry whose linkage effects have been increasing recently can be regarded as a relatively more important forward/backward industry. In this way, changes in the industries that affect the upstream and downstream sides of the logistics industry can be estimated. According to the results, transport had a high production inducement effect on several manufacturing sectors over the entire period. More specifically, each transport service sector was closely related to the manufacturing sector that produces the equipment/machines used by the corresponding mode of transportation service. As a result, land transport and transport via pipelines (H49) exerted a significant production inducement effect on petroleum products manufacturing (C19) and automobile manufacturing (C29), while water transport (H50) and air transport (H51) exerted a strong production inducement effect on petroleum products manufacturing (C19) and other transport equipment manufacturing, including ships and aircraft (C30). Moreover, the three transportation sectors had a common production inducement effect on warehousing and support activities for transportation (H52) within the logistics sector. In addition, water transport (H50), air transport (H51), and warehousing and support activities for transportation (H52), all related to international logistics, had a common production inducement effect on the financial industry (K64). This is because overseas payments, remittances, currency exchange, receipt of bills of exchange, issuance of letters of credit (L/C), and purchase of marine cargo insurance are all handled through financial services; accordingly, these service sectors showed large production inducement effects on the financial service activities sector (K64).
Because the focus of warehousing and support activities for transportation (H52) is on selecting strategic locations to construct and operate logistics service facilities, it had a production inducement effect on real estate activities (L68) and the electricity, gas, steam and air conditioning supply sector (D35). In addition, accommodation and food service activities (I) showed a production inducement effect due to the needs of logistics service facilities for transportation equipment and personnel. The transportation service industry also induced production in the wholesale and retail industries (G46, G47) and several transportation service sectors (H49, H50, H51). It also showed a production inducement effect on the manufacture of computers, electronics, and optical products sector (C26), which produces equipment with high added value. Industries that have production inducement effects due to the backward linkage effect of the logistics service industry, in which products from other industries are used as intermediate goods in its production activities, are organized in Table 5. While logistics services, such as transport and storage and handling, are final products of land transport and transport via pipelines (H49), air transport (H51), warehousing and support activities for transportation (H52), and postal and courier activities (H53), they are commonly used as intermediate inputs by distributors such as wholesalers and/or retailers (G45, G46, G47). However, among the transportation service sectors, only water transportation (H50) had a relatively low forward linkage effect, because water transportation mainly served import/export activities for the manufacturing industry, rather than wholesale and retail trade. Accordingly, water transport services are used for the mass transportation of bulk products, such as those of forestry and logging (A02), mining and quarrying (B), and manufacture of wood and of products of wood and cork, except furniture (C16). Also, in general, the shipbuilding industry is an upstream industry, while the port industry is a downstream industry, of water transport (H50). The analysis also showed that water transport (H50) had a high forward linkage effect on warehousing and support activities for transportation (H52) over the entire study period. On the other hand, land transport and transport via pipelines (H49) continued to be widely used by mining and quarrying (B) and manufacture of other non-metallic mineral products (C23). Since 2006, it also started to be widely used by manufacture of paper and paper products (C17) and sewerage, waste collection, treatment, and disposal activities (E37-E39). Warehousing and support activities for transportation (H52) had been used by all transport service industries (H49, H50, H51, H53), implying that the transport, storage, and handling services of logistics are not independent but instead form a structure in which services are linked together. Since 2008, H53 had been widely used by financial sectors (K64, K65, K66). Due to advances in IT technology, offline face-to-face financial services have diminished considerably while internet and mobile financial services have increased. Accordingly, the use of courier services in financial services, such as card issuance and document delivery, has seen a dramatic upward trend. Table 6 shows industries that had production inducement effects due to the forward linkage effects of the logistics service industry, which provided its products as intermediate goods.
Conclusions This study conducted an input-output analysis on Korea's logistics service industry using the WIOD data for the 2000-2014 period. Five logistics service industry sectors (H49, H50, H51, H52, H53) were analyzed based on the WIOD industry classification criteria. This study identified the forward and backward industries of the logistics industry and calculated the production inducement coefficients of the logistics service industry on other industries to check the existence and degree of the forward and backward linkage effects. Through the input-output analysis, this study classified the upstream and downstream industries that are impacted by the production inducement effect of the logistics service industry. Therefore, this study makes a significant contribution to the existing body of knowledge about the linkage and spillover effects of the logistics service industry on other industries. Our findings also disclose that the transport, storage, and handling fields of logistics service are not independent of each other, but instead constitute a services ecosystem. In addition, our study found that in the transportation service sector, transport equipment and petroleum manufacturing are upstream industries while the wholesale and retail industries are downstream industries. The results of this study also showed that the logistics service industry plays a role in connecting supply, manufacturing, and distribution activities in the supply chain. Specifically, ocean transportation service was shown to be the most common mode of water transportation service in Korea, supporting the import and export activities of the manufacturing industry. This study also confirmed that the shipbuilding industry is an upstream industry while the port industry is a downstream industry. Moreover, due to the recent development of IT technology, the financial service industry showed extensive use of courier services. The study results showed evidence of high forward linkage effects of land transport and transport via pipelines (H49) and warehousing and support activities for transportation (H52), which may help explain Korea's effective quarantine response to COVID-19. In Korea, most domestic logistics service needs are handled by land transport and transport via pipelines (H49), and warehousing and support activities for transportation (H52) are intimately linked to supporting the land transport service. Besides, their services are widely used as intermediate inputs in the wholesale and retail service businesses (G45, G46, G47). Logistics infrastructure related to industries H49 and H52, which the results showed to have high forward linkage effects, is difficult to establish in the short term; it also cannot easily be systematized in a timely manner as needed. During the COVID-19 pandemic, the effective quarantine in Korea and the stability of national life, enabled by various distribution and logistics services, can be seen as reflecting the continued growth of the logistics industry. The recent surge in a variety of delivery services in Korea can be considered evidence of this. In this context, it can be expected that industries H49 and H52, which this study's analysis of 15 years (2000-2014) of data showed to have developed steadily to date, are contributing to the national response to the COVID-19 pandemic.
Without the expansion of land transport service networks and storage and handling service facilities in Korea, the recent chaos caused by COVID-19 would have been much greater in scale and scope. Without properly equipped logistics services, there would have been many disruptive problems with the purchase of daily necessities as well as the distribution of essential medical supplies such as personal protective equipment (PPE), testing supplies, and medicine. Online purchases have skyrocketed in Korea due to COVID-19. Additionally, supported by Korea's land transport and transport via pipelines (H49) and warehousing and support activities for transportation (H52), distribution companies have created various delivery services such as one-day delivery, overnight delivery, and early-morning delivery. It should be noted that Korean logistics service companies have been contributing effectively to delivering the necessary products/services to consumers who rely on online purchases, and they have played an important role in managing the spread of COVID-19. The Korean logistics services have been operating at their maximum capacity in the current national emergency. The results of this study, on linkage and ripple effects among industries, offer important new insights that can contribute to establishing efficient and effective policies for pandemic management through the logistics service industry. This study was conducted on the logistics service industry, which belongs to the service sector, in which industrial linkage effect analysis has not been as active as in the manufacturing sector. It also looks into both the backward and forward industrial linkage structures of the logistics service industry, which has played a supporting role in connecting members within the manufacturing-oriented supply chain. In addition, unlike prior research that focused on one specific sector of the logistics service industry, this study investigated the entire logistics service industry, including all logistics service functions. In terms of corporate management, the service industry could also provide a foundation for shaping supply chain management into a service entity's management strategy, as in manufacturing. In addition, managers of logistics service companies will be able to use these findings to make strategic decisions on the relationships among service supply chain members. In other words, depending on the strategic orientation of the relationship, this study could provide indicators for strategic decisions such as outsourcing, strategic alliances, and vertical integration. On the other hand, from a national policy perspective, this study provides a foundation for understanding the backward and forward linkage effects of the logistics service industry. The backward and forward linkage effects can play an important role in determining investment priorities when selecting sectors for national policy support with limited resources. If the logistics service industry has a greater backward linkage effect than other industries, investment in it can be more beneficial to the overall economy's productive activities than investment to foster other industries. It is thus possible to establish efficient and effective industrial policies for the industries that impose strong ripple effects on the logistics service industry and for the industrial areas whose outputs are highly utilized by the logistics service industry.
This study has several limitations, which provide opportunities for future research. First, while the study used the highly reliable WIOD data, we could not exclude the passenger sector included in the transportation service industry. Thus, it is necessary to secure subdivided industrial classifications to obtain data more suited to the logistics service industry. The industrial structure and the associated forward and backward linkages differ significantly from country to country. Second, our study focused on the logistics service industry of South Korea. Thus, future studies should investigate the logistics service industries of different countries to establish the concept of logistics service supply chains (LSSC). Future research should also investigate the association between GDP level and the production inducement effect of the logistics service industry and compare the degree of association across countries.
2021-05-20T05:17:49.549Z
2021-05-17T00:00:00.000
{ "year": 2021, "sha1": "ae26f36570161693da3b651529f2ca9c72d036fc", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s11628-021-00440-1.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "ae26f36570161693da3b651529f2ca9c72d036fc", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [] }
219261092
pes2o/s2orc
v3-fos-license
Experimental demonstration of a quantum generative adversarial network for continuous distributions The potential advantage of machine learning on quantum computers is a topic of intense discussion in the literature. Theoretical, numerical, and experimental explorations will most likely be required to understand its power. Different algorithms have been proposed to exploit the probabilistic nature of variational quantum circuits for generative modelling. In this paper, we employ a hybrid architecture for quantum generative adversarial networks (QGANs) and study their robustness in the presence of noise. We devise a simple way of adding different types of noise to the quantum generator circuit, numerically simulate the noisy hybrid QGANs (HQGANs) to learn continuous probability distributions, and show that the performance of HQGANs remains largely unaffected. We also investigate the effect of different parameters on the training time to reduce the computational scaling of the algorithm and simplify its deployment on a quantum computer. We then perform the training on Rigetti's Aspen-4-2Q-A quantum processing unit and present the results from the training. Our results pave the way for experimental exploration of different quantum machine learning algorithms on noisy intermediate-scale quantum devices. I. INTRODUCTION Quantum computers are expected to provide an advantage over classical machines in certain sampling tasks [1,2] because of their underlying quantum correlations, which could be helpful in modelling hard probability distributions. This has spurred much interest in investigating the possibility of achieving quantum advantage in quantum machine learning [3], leading to many different quantum algorithms being proposed in the last few years. However, the lack of perfect control in currently available quantum devices [4] limits the implementation of these algorithms to proof-of-principle experiments. The hybrid quantum-classical (HQC) approach [5-7] provides a way around this by using quantum resources in tandem with classical computers to improve the overall efficiency of the algorithm. In the last few years, the HQC approach has been frequently used to develop quantum algorithms for noisy intermediate-scale quantum (NISQ) devices. Most of these algorithms use parameterized quantum circuits as physical ansatzes or statistical models, which are optimized by minimizing a cost function. Some examples include variational autoencoders (VAEs) [8-10], variational quantum eigensolvers (VQEs) [6,11], the quantum approximate optimization algorithm (QAOA) [12], and quantum generative adversarial networks (QGANs) [13-18], among others [19-21]. In the last couple of years, variational quantum circuits have been employed for generative modelling, particularly GANs. These studies include proof-of-principle simulations and experimental demonstrations. One of these demonstrations focused on quantum state estimation [18], while another learnt distributions by loading them into quantum states [16]. While these results are very impressive, there is still a need for extensive investigation of the performance of QGANs in the presence of noise and of the potential advantages of using them over their classical counterparts. Various studies have explored the robustness to noise of different HQC algorithms, such as VQE [22,23] and QAOA [24], showing that these algorithms are to some extent resilient to noise.
In this work, we investigate the effect of noise on variational quantum circuits for generative modelling. We use the HQGAN model recently proposed by our research group [13] for learning classical probability distributions and investigate its robustness with respect to noise. More specifically, we run simulations with realistic parameters to understand the effect of gate noise and other hardware imperfections on the model before implementing it on a quantum computer. The rest of the paper is organised as follows: In Section II, we describe the theory of hybrid quantum GANs (HQGANs), the numerical simulation setup, and the results. Section III describes the implementation of HQGANs on Rigetti's Aspen-4-2Q-A quantum processing unit and presents the results obtained. Finally, a discussion of the significance of the experiment and the conclusion are presented in Section IV. II. THEORY OF HYBRID QUANTUM GENERATIVE ADVERSARIAL NETWORKS A prototypical GAN [25] consists of two networks, a generator $F_G(z; \theta_g)$ and a discriminator $F_D(x; \theta_d)$, playing an adversarial game, which can be summarized as follows: $$\min_{\theta_g}\max_{\theta_d} V(F_D, F_G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log F_D(x; \theta_d)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - F_D(F_G(z; \theta_g); \theta_d)\right)\right],$$ where $\theta_g$ and $\theta_d$ are the parameters of the generator and discriminator respectively, $p_z(z)$ is a fixed prior distribution for the generator to sample from and translate into samples that are indistinguishable from the real distribution $p_{\mathrm{data}}(x)$, $x$ is the data sampled from the real distribution $p_{\mathrm{data}}(x)$, and $z$ is the noise sampled from the prior distribution $p_z(z)$. These models have become a very powerful tool in the machine learning community for a variety of tasks [26], including image and video generation [27,28] and materials discovery [29-32]. In this work, we employ an HQGAN that learns a classical target data distribution using quantum resources. We continue by describing the details of the different quantum and classical networks employed in this work. A. HQGAN architecture The HQGAN conserves the two-component architecture of a regular GAN. While the discriminator used here is classical in nature, the generator uses operations on quantum states to perform its function. The generator is a two-qubit quantum circuit consisting of an encoding element and a variational element. The encoding element is built from two layers of single-qubit rotation gates and uses a tensorial mapping strategy to introduce non-linearities [33,34]. The variational element consists of parametrized rotations and entangling (CNOT) gates. The angles in the rotations correspond to the parameters $\theta_g$, which are optimized during training. A circuit diagram of the generator is shown in Figure 1. The data is generated by measuring the first qubit of the generator circuit in the $\sigma_z$ basis. The discriminator is a classical feed-forward neural network with four layers: an input layer, two hidden layers with 50 units each, and an output layer. The two fully connected hidden layers (1 to 50, and 50 to 50) have an exponential linear unit (ELU) activation function, and the final fully connected layer (50 to 1) has a sigmoid activation function. B. Simulation setup The HQGAN training was carried out by implementing the variational circuits using PyQuil [35]. The functions for expectation value and gradient calculation were added using the autograd functionality of the PyTorch library [36], which enables gradient-based optimization.
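To make the architecture concrete, below is a minimal, self-contained Python sketch of a two-qubit generator of this kind, simulated as a statevector with NumPy. The exact gate layout is an illustrative assumption (the paper's actual circuit is shown in its Figure 1); only the overall structure follows the description above: encoding rotations of the latent input, three variational angles θ_g, a CNOT entangler, and a σ_z expectation on the first qubit.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
Z0 = np.kron(np.diag([1.0, -1.0]), I2)  # sigma_z on the first qubit

def generator_expectation(z, theta):
    """<sigma_z> on qubit 0 for latent sample z and parameters theta
    (illustrative layout: encoding RY(z) on both qubits, variational
    RY rotations, a CNOT entangler, and a final RY on qubit 0)."""
    state = np.zeros(4)
    state[0] = 1.0                                        # start in |00>
    state = np.kron(ry(z), ry(z)) @ state                 # encoding layer
    state = np.kron(ry(theta[0]), ry(theta[1])) @ state   # variational rotations
    state = CNOT @ state                                  # entangler
    state = np.kron(ry(theta[2]), I2) @ state             # final rotation
    return float(state @ Z0 @ state)

# Target parameters quoted in the text; prior z ~ U(-1, 1)
theta_target = [0.35, 2.10, 5.06]
samples = [generator_expectation(z, theta_target)
           for z in np.random.uniform(-1, 1, 100)]
print(np.mean(samples), np.std(samples))
```

The discriminator described above maps directly onto a small PyTorch module:

```python
import torch.nn as nn

# 1 -> 50 -> 50 -> 1, ELU activations on the hidden layers, sigmoid output
discriminator = nn.Sequential(
    nn.Linear(1, 50), nn.ELU(),
    nn.Linear(50, 50), nn.ELU(),
    nn.Linear(50, 1), nn.Sigmoid(),
)
```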
We use the Adam optimizer [37], and one-sided label smoothing [38] was used for both the simulations and the experiment, a typical strategy used in classical GANs to improve convergence. During the simulation, the expectation value calculations were done by taking 1000 circuit evaluations, and the metrics tracked during training (Kullback-Leibler (KL) divergence, discriminator and generator losses, norm of the gradients, mean and standard deviation) were calculated using 100 data points sampled from the distributions, unless stated otherwise. The target data source for training was generated by the quantum generator with the parameters fixed to $\theta_g = [0.35, 2.10, 5.06]$. The target distribution generated by the generator is shown in Figure 2. We chose the initial parameters of the quantum generator, $\theta_g$, to be $[0.31, 1.89, 4.56]$ for all the simulations. To evaluate the performance of HQGANs under realistic execution conditions, we studied our model in the presence of noise sources that model the errors in quantum hardware. The noise was introduced in the simulations by adding noisy gates to the variational quantum circuit (the generator component of the GAN). Noisy identity gates were added after every standard gate in the generator. The number of such noisy gates was decided based on the gate time, as $n = t_{\mathrm{gate}}/t_{I\text{-}\mathrm{gate}}$. A sample implementation of a noisy gate in the circuit is shown in Figure 3. We used different directive statements (pragmas) available in the Forest platform to modify our noiseless circuit. The pragmas inform the QVM that a gate is to be replaced with an imperfect realization using a Kraus map in the noisy simulation. The noise models we use in our simulations include: 1) amplitude damping; 2) dephasing; 3) decoherence (a combination of amplitude damping and dephasing); and 4) a combination of amplitude damping, dephasing, and readout noise. The Kraus operators for amplitude damping are $$K_1 = \begin{pmatrix} 1 & 0 \\ 0 & \sqrt{1-p} \end{pmatrix}, \qquad K_2 = \begin{pmatrix} 0 & \sqrt{p} \\ 0 & 0 \end{pmatrix},$$ and those for dephasing are $$K_1 = \sqrt{1-p}\, I, \qquad K_2 = \sqrt{p}\, \sigma_z,$$ where $p$ is the probability of the qubit decaying/dephasing over the time interval of interest. The operator $K_2$ of the amplitude damping noise model controls how the state decays from $|1\rangle$ to $|0\rangle$, while the operator $K_1$ describes the evolution of the state in the absence of a quantum jump. The evolution of the density matrix under the amplitude damping noise model can be expressed as $$\rho \rightarrow K_1 \rho K_1^{\dagger} + K_2 \rho K_2^{\dagger},$$ while the combined effect of the operators of the dephasing noise model is the reduction of the transverse component of the density matrix, which can be expressed as $$\rho \rightarrow (1-p)\,\rho + p\,\sigma_z \rho \sigma_z,$$ so that the off-diagonal elements are scaled by a factor $(1-2p)$. The decoherence noise model used in the simulations is a combination of the damping and dephasing noise models, and the Kraus operators for the combined noise model are obtained by combinatorially multiplying the operators of the two noise models. The readout noise is modeled through the assignment probability matrix, which has two independent parameters, $p(0|0)$ and $p(1|1)$, each representing the conditional probability $p(x'|x)$ that $x'$ is read out when $x$ is transmitted through a noisy channel. The parameters for the noise models used in the simulations are listed in Table I: amplitude damping time $T_1$ = 15 µs; dephasing time $T_2$ = 18 µs; one-qubit gate time $t_1$ = 50 ns; two-qubit gate time $t_2$ = 400 ns; readout assignment probabilities $p(0|0) = p(1|1)$ = 0.91. A detailed description of the noise models used here can be found in Refs. [39,40]. As with classical computation, wall-time on a quantum device is a limited resource.
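The effect of these Kraus maps is easy to check numerically. Below is a minimal NumPy sketch (independent of the Forest pragmas used in the paper) that applies the amplitude damping channel defined above to a qubit prepared in |1⟩; the population of |1⟩ decays by a factor (1 − p) per application.

```python
import numpy as np

def amplitude_damping_kraus(p):
    """Kraus operators K1, K2 for amplitude damping with probability p."""
    K1 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - p)]])
    K2 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])
    return [K1, K2]

def apply_channel(rho, kraus_ops):
    """rho -> sum_k K_k rho K_k^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

rho = np.array([[0.0, 0.0], [0.0, 1.0]])   # qubit in |1><1|
p = 0.09                                    # damping probability used in the text
for step in range(1, 4):
    rho = apply_channel(rho, amplitude_damping_kraus(p))
    print(f"after {step} noisy gate(s): P(|1>) = {rho[1, 1]:.4f}")
```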
We carried out simulations to estimate and reduce the time required to run the experiment on a quantum computer. We investigated the effect of two parameters: the number of data points sampled from the distributions for training, and the number of circuit evaluations used for expectation value calculation. We discuss the results of the simulations in detail in the next subsection. C. Simulation results The training was carried out for 4500 epochs and is evaluated by tracking metrics such as the KL divergence of the two distributions, the discriminator and generator losses, the norm of the gradients, and the mean and standard deviation of the distributions. We visualize the data in violin plots to compare the distributions generated from the simulations. The plots can be interpreted as follows: the white dot in the middle represents the median; the thick line, the interquartile range; the thin line, the rest of the distribution excluding outliers; and the surrounding area, the probability density estimated using kernel density functions. Noiseless simulations We simulated an ideal generator circuit with 100 input points sampled from the uniform distribution U(−1, 1) and 1000 circuit evaluations for every measurement value. The results from the simulations are plotted in Figure 4 and labeled B0 and B1. It can be seen from the plots that the distribution generated by the quantum generator at the end of 4500 epochs of training is an approximation of the target distribution, with the median, interquartile range, and overall shape resembling the corresponding metrics of the target distribution (labeled A), also shown in Figure 4. This is in agreement with the results of the numerical simulations carried out in our earlier work [13]. Noisy simulations The noisy simulations were carried out with the same parameters as the noiseless simulation above. The distributions from the different simulations and the training metrics are plotted in Figure 4. Different letters represent different noise models. The results show that, in the presence of noise, all the simulations, with the exception of the one under purely dephasing noise, approximately converged to the target distribution. The distributions for the noisy simulations show visible tails that are products of the noise. First, we ran simulations to test the effect of the damping and dephasing noise models on the training of the HQGAN. We investigated the effect using different damping/dephasing probabilities, and show the results from training at probability p = 0.09, which is a very high value compared to the error probability expected in the experiments. Under the influence of purely amplitude damping noise with a damping probability p = 0.09 (label C), we observe that a visible part of the distribution centers around an expectation value of 1.0, as a consequence of the noise driving the population of states to $|0\rangle$. In general, noise drives the average of the population towards an expectation value of zero, as observed from the distributions prior to training. This result is consistent with the system being driven towards a mixed state. However, when the training is successful, the generated distribution recovers the shape and moments of the target one despite the effect of noise, showing how the training of variational circuits is still possible under moderate noise conditions.
In the case of purely dephasing noise with a dephasing probability p = 0.09, the initial and final distributions are considerably distorted compared to the noiseless case, becoming a symmetric distribution (label D) centered at zero, as seen in Figure 4. After training, the final distribution gains some features that resemble the target distribution, such as the modes around 0.5 and 0.0, but it still has a third, spurious mode around −0.6. Since the training converged even with a large damping/dephasing probability, we next ran simulations with different combinations of the noise models. However, we now use the parameters in Table I to compute the probabilities according to the expressions $p_{\mathrm{damping}} = 1 - e^{-t/T_1}$ and $p_{\mathrm{dephasing}} = \frac{1}{2}\left(1 - e^{-2t\left(1/T_2 - 1/(2T_1)\right)}\right)$. The distributions generated by the combinations of different noise models (labels E and F) do not show a significant deviation from the ideal simulation, as the probability values are significantly smaller than the value of 0.09 employed in the previous simulations. It is also worth pointing out that our numerical experiments indicate that the presence of moderate noise facilitates convergence by reducing the number of epochs required. Improvements in the optimization of other algorithms involving parameterized quantum circuits have been reported previously as well [20,21,41]. Input samples and circuit evaluations Next, we ran simulations to investigate the ability of the algorithm to converge as a function of the number of input samples used per epoch of the HQGAN training. Using fewer samples reduces the amount of computational resources required for training. We used four different numbers of input samples (25, 50, 75, and 100) and the combination of amplitude damping, dephasing, and readout noise models for the simulations, with the parameters derived from the values in Table I. We plot the distributions generated from the training in Figure 5a. We observe that the number of input samples used for every epoch of training has very little effect on the training, and the distributions converge to a good approximation of the target distribution in all the simulations after 2000 training epochs. After observing that reducing the number of samples does not seem to decrease the quality of the training, we studied the impact of varying the number of measurement shots used to estimate expectation values. The number of input samples was fixed to 25 for all of these simulations, and we chose 100, 250, 500, and 1000 as the different numbers of circuit evaluations for expectation value calculation. The simulations were carried out with the combination of the amplitude damping, dephasing, and readout noise models, with the same parameters as in the previous simulations. The generated distributions are plotted in Figure 5b, and it is evident that the training is unaffected by the number of circuit runs. This is in agreement with the results from previous works [42,43], where it was shown that for various hybrid quantum-classical optimization algorithms the estimation of expectation values can be done using a very small number of measurements. The number of samples can be considered a hyperparameter of the algorithm that could be tuned or adjusted during the calculation. Based on the simulations, we deduced that we could reduce the run-time of the experiment with minimal effect on accuracy by using smaller values for the number of input samples and circuit evaluations.
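For reference, these expressions can be evaluated directly from the Table I parameters; a small sketch (times converted to seconds):

```python
import numpy as np

T1, T2 = 15e-6, 18e-6                       # damping and dephasing times (s)
gate_times = {"one-qubit": 50e-9, "two-qubit": 400e-9}

for name, t in gate_times.items():
    p_damping = 1.0 - np.exp(-t / T1)
    p_dephasing = 0.5 * (1.0 - np.exp(-2.0 * t * (1.0 / T2 - 1.0 / (2.0 * T1))))
    print(f"{name} gate: p_damping = {p_damping:.2e}, p_dephasing = {p_dephasing:.2e}")
```

Both probabilities come out between roughly 10^-3 and 3 × 10^-2 per gate, below the p = 0.09 used in the stress-test simulations, consistent with the small deviations observed for labels E and F.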
III. IMPLEMENTATION ON QUANTUM HARDWARE In this Section, we present details on the implementation and execution of an HQGAN on the Aspen-4-2Q-A superconducting quantum processor from Rigetti Computing. We use the same target state and the same generator and discriminator architectures as in the simulations. We used our findings from the noisy simulations regarding the training hyperparameters to optimize the run-time on the QPU. We set the number of samples drawn from the distribution for an epoch of training to 25, and the number of circuit runs per evaluation of the expectation value to 250. Based on the simulations and gate times, we estimated that the full training would require more than a day on the QPU. We divided the full experiment into two-hour slots and completed the training by running approximately 30 such slots. We finished 2993 epochs of training, which was sufficient for the HQGAN to learn the target distribution. Figure 6 illustrates the evolution of the distribution generated by the generator at different epochs during the run on the quantum computer and shows how the final distribution from the quantum generator achieves a good overlap with the target distribution. To evaluate the training, we tracked the KL divergence and the loss functions of the generator and discriminator. The top two plots in Figure 7 show the dynamics of these metrics, indicating how the KL divergence decreases to nearly 0 during the training while the cost functions converge to approximately $-\ln(0.5) \approx 0.7$. We also plot the means, standard deviations, and gradient norms in Figure 7, which illustrates that these quantities converge for both distributions. We also plot the distributions at the first and last epochs of training obtained from our experiment in Figure 4 (label G). It can be seen from the plots that the results from the simulation match the distribution obtained in the experiment. This result confirms the ability of the procedure to succeed under moderate levels of noise and adds to the growing practical evidence of the resilience to moderate noise of algorithms that rely on optimizing a parameterized quantum circuit on a NISQ device. IV. CONCLUSION In this work, we have demonstrated the training of an HQGAN [13] on a quantum computer. We evaluated the performance of our proposed HQGAN with respect to different noise models. We ran simulations to reduce the computational scaling of the experiment before performing the training on a quantum device. Our numerical demonstrations using both simulated and physical quantum devices show that HQGAN training can be carried out in the presence of noise. We also found empirical evidence that we can perform the training with a reduced number of input samples per epoch and calculate expectation values for optimization with fewer circuit evaluations. We used the Aspen-4-2Q-A 2-qubit chip from Rigetti Computing to perform the HQGAN training, obtaining results similar to those obtained from classical simulation. Our numerical exploration illustrates how NISQ devices can be used for generative learning and contributes to the growing field of parameterized quantum circuits as machine learning models. Proof-of-principle demonstrations, such as those shown here, constitute a first step towards using quantum resources to enhance existing classical machine learning pipelines.
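For completeness, one simple way to estimate the KL divergence between sampled distributions, as tracked during training, is via histograms on a common support. The estimator, bin count, and smoothing constant below are assumptions for illustration, since the paper does not specify its exact estimator.

```python
import numpy as np

def kl_divergence(p_samples, q_samples, bins=20, eps=1e-9):
    """Histogram-based estimate of KL(P || Q) from two sample sets."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p_hist, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q_hist, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    p = p_hist / p_hist.sum() + eps   # normalize and smooth to avoid log(0)
    q = q_hist / q_hist.sum() + eps
    return float(np.sum(p * np.log(p / q)))

# Example: KL between two batches of 100 samples
rng = np.random.default_rng(0)
print(kl_divergence(rng.uniform(-1, 1, 100), rng.uniform(-1, 1, 100)))
```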
One such direction is investigating the advantage of using the current protocol for practical tasks, such as image, speech and text generation. We will keep exploring different strategies to enhance the applicability of the current protocol to different scientific applications of broader interest.
2020-06-04T01:00:54.521Z
2020-06-02T00:00:00.000
{ "year": 2020, "sha1": "a25d35fdd42010f554051e72d19d3d4374203151", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2006.01976", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a25d35fdd42010f554051e72d19d3d4374203151", "s2fieldsofstudy": [ "Computer Science", "Physics" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
261073638
pes2o/s2orc
v3-fos-license
Shifting Patterns of Influenza Circulation during the COVID-19 Pandemic, Senegal Historically low levels of seasonal influenza circulation were reported during the first years of the COVID-19 pandemic and were mainly attributed to the implementation of nonpharmaceutical interventions (NPIs). In tropical regions, influenza's seasonality differs largely, and data on this topic are scarce. We analyzed data from Senegal's sentinel syndromic surveillance network before and after the start of the COVID-19 pandemic to assess changes in influenza circulation. We found that influenza shows year-round circulation in Senegal and has 2 distinct epidemic peaks: during January–March and during the rainy season in August–October. During 2021–2022, the expected January–March influenza peak completely disappeared, corresponding to periods of active SARS-CoV-2 circulation. We noted an unexpected influenza epidemic peak during May–July 2022. The observed reciprocal circulation of SARS-CoV-2 and influenza suggests that factors such as viral interference might be at play and should be further investigated in tropical settings. We analyzed data from Senegal's syndromic sentinel surveillance network (réseau de surveillance sentinelle syndromique du Sénégal), known as the 4S Network (15). The 4S Network is concurrently run by the National Ministry of Health and the Institut Pasteur de Dakar, which supervises the sites' activities, provides equipment, and manages sample transport, virological testing, and data management and analysis. The 4S Network functions as any syndromic surveillance system by monitoring and testing persons who have certain syndromes of public health interest, in this case, signs and symptoms suggestive of viral respiratory diseases, as previously described (16). The 4S Network comprises 25 sentinel sites: 22 community sites in primary or secondary healthcare facilities that are in charge of influenza-like illness (ILI) surveillance and 3 hospitals located in the region of Dakar that are in charge of severe acute respiratory illness (SARI) surveillance (Figure 1). Sentinel sites are located throughout the country in each of its 14 regions, enabling geographic coverage and providing a fairly accurate representation of Senegal's population. Sites were selected according to their location, number of patients served, willingness to participate, and availability of minimal equipment, such as running water and a refrigerator (16). The 4S Network offers a unique source of epidemiologic data on ILI and SARI in Senegal.
During the COVID-19 pandemic, the network also rapidly integrated SARS-CoV-2 testing into its routine surveillance activities. We extracted data from the 4S Network to analyze the local dynamics of influenza and SARS-CoV-2 and the interactions between the 2 viruses in a remote setting. Study Population and Case Definition We focused on ILI and SARI surveillance by using definitions from 2014 World Health Organization criteria (17). Those criteria define ILI cases as an acute respiratory infection accompanied by a measured temperature of ≥38°C and cough that had an onset within the previous 10 days, and define SARI cases as an acute respiratory infection with a history of fever or a measured temperature of ≥38°C and cough that had an onset within the previous 10 days and resulted in hospital admission. We included all age groups in the study and had no specific exclusion criteria apart from a patient's refusal to participate. All patients undergoing virological testing and included in the surveillance program gave informed oral consent. All data were fully anonymized in advance. Study Period and Data Collection To assess baseline influenza seasonality patterns, we extracted influenza test results from January 1, 2013–March 1, 2020. To describe interactions between SARS-CoV-2 and influenza, we extracted those test results from March 1, 2020–July 31, 2022. For SARI cases, any patient who fit the case description and was admitted at a sentinel site was subjected to nasal and oropharyngeal swab sampling. For ILI surveillance, ≥5 samples per site were randomly collected for surveillance every week. SARI samples are transferred every day, and ILI samples weekly, to the national reference center for influenza and other respiratory viruses at the Institut Pasteur de Dakar. SARS-CoV-2 surveillance was rapidly integrated into the 4S Network. At the beginning of June 2020, every sample from SARI or ILI cases was subjected to monoplex SARS-CoV-2 RT-PCR testing by using the LightMix CoV E-gene and LightMix Modular Wuhan CoV RdRP-gene kits (TIB MOLBIOL, https://www.tib-molbiol.de). Although a new case definition including other symptoms, such as anosmia or digestive symptoms, for suspected COVID-19 cases was initially added to the surveillance system, the ILI and SARI case definitions remained unchanged during that period. Senegal abandoned the new suspected COVID-19 case definition at the end of 2021, following the World Health Organization's international recommendations for COVID-19 surveillance (20). Thus, we only included patients who fit the case description for ILI or SARI in this study. Statistical Analysis We used R version 4.0.3 (The R Foundation for Statistical Computing, https://www.r-project.org) and the supplementary R package Moving Epidemic Method version 2.17 (https://github.com/lozalojo/mem) to process data and create epidemiologic curves. We generated average epidemic curves on the basis of the percentages of SARI or ILI cases testing positive for influenza during each season. Then, we aligned the seasonal curves to generate an average curve and set thresholds to define preepidemic, epidemic, and postepidemic periods. We defined the thresholds by calculating the upper limit of the 95% CI around the 30 highest weekly values. Our model also estimated sensitivity by correctly defining the epidemic period and specificity by correctly defining the nonepidemic period, and we calculated 95% CIs for the average season's start date and duration (21,22).
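As a rough illustration of the threshold rule described above: the mem package implements the full Moving Epidemic Method, and this minimal Python sketch only reproduces the stated rule (the upper limit of the 95% CI around the 30 highest weekly values); its details are assumptions, not the paper's exact computation.

```python
import numpy as np
from scipy import stats

def epidemic_threshold(weekly_values, k=30, conf=0.95):
    """Upper limit of the one-sided confidence interval around the
    mean of the k highest weekly values (illustrative MEM-style rule)."""
    top = np.sort(np.asarray(weekly_values, dtype=float))[-k:]
    mean = top.mean()
    sem = stats.sem(top)
    return mean + stats.t.ppf(conf, df=len(top) - 1) * sem

# Example with synthetic weekly test-positivity percentages
rng = np.random.default_rng(1)
weekly = np.clip(rng.gamma(2.0, 8.0, size=52 * 7), 0, 100)
print(f"epidemic threshold: {epidemic_threshold(weekly):.1f}%")
```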
During the pandemic period, January 1, 2020–July 31, 2022, the 4S Network detected 19,030 ILI cases at community sites. Of those, 2,593 (14%) were randomly tested for influenza, of which 1,409 (54.3%) were also tested for SARS-CoV-2. Among tested samples, 622 (24%) were influenza-positive and 195 (14%) were SARS-CoV-2-positive. During the same period, 1,352 SARI cases were hospitalized at sentinel sites and tested for influenza, and 68 (5%) tested influenza-positive; 1,129 had combined SARS-CoV-2 and influenza testing, and 211 (19%) were SARS-CoV-2-positive (Table). Every specimen tested for SARS-CoV-2 was systematically tested for influenza, but the 2 pathogens were co-detected in only 1 patient. We found that, before the pandemic, Senegal had continuous circulation of influenza throughout the year, with 2 distinct seasonal peaks. The first peak typically occurred at the beginning of the year, during epidemiologic week 5 (range week 1–13); it typically ended around mid-April and had an average duration of 14 (95% CI 12–17) weeks and an average test-positive intensity peak of 34% (95% CI 10%–57%) of samples (Figure 2). The second peak typically occurred during the second half of the rainy season, around August during epidemiologic week 31 (range week 27–36). That peak usually lasted until the end of November and had an average duration of 18 (95% CI 13–25) weeks and an average test-positive intensity peak of 61% (95% CI 47%–78%) of samples (Figure 2). Changes Observed in Seasonal Influenza during the COVID-19 Pandemic We observed that SARS-CoV-2 essentially transformed the biannual profile of influenza's seasonal epidemic peaks in Senegal into a monophasic epidemic. During 2020, influenza circulation in Senegal seemed practically unperturbed. At the start of the year, influenza B (Victoria) virus peaked during January–March, after which a rainy season peak of influenza A(H3N2) and influenza B (Victoria) began during epidemiologic week 37, peaked at 73% of positive tests, and lasted for 11.5 weeks. SARS-CoV-2 started circulating in Senegal at the beginning of March 2020; the first case in Senegal was detected on March 2. However, systematic testing for SARS-CoV-2 was not added to the 4S Network until the beginning of June, which explains the low levels of SARS-CoV-2 detection during March–May 2020 (Figure 3). However, influenza surveillance continued during that period and revealed unusually low levels of influenza (Figures 4, 5). During 2021, the expected beginning-of-the-year influenza peak was completely absent. That period was marked by high levels of the SARS-CoV-2 Alpha variant, after which an unmodified rainy season peak of 2009 pandemic influenza A(H1N1) started during epidemiologic week 37, peaked at 80% test-positivity, and lasted 10 weeks (Figures 4, 5). The beginning of 2022 also was marked by the absence of the expected January–March influenza peak. That period also showed high levels of circulating SARS-CoV-2, but the Omicron variant dominated. Finally, an unexpected epidemic peak of influenza A(H3N2) was observed completely outside the usual period, starting in May during epidemiologic week 17, when influenza activity is usually the lowest in Senegal, and ending in July, during epidemiologic week 29, with a maximum peak of 71% test positivity (Figures 4, 5). Of note, influenza B (Yamagata) has practically disappeared in Senegal since June 2020; the last 2 cases were detected in January 2021.
Discussion Before the COVID-19 pandemic, the dynamics of influenza in Senegal mostly followed the various patterns seen in tropical regions, showing year-round low-level circulation and increased activity during the rainy seasons (1,2). Senegal also had a typical smaller influenza peak at the start of the year (Figure 5, panel A). Influenza's seasonal patterns and variability across different climate zones are still only partially understood (23). Among other factors, dry and cold weather conditions appear to promote influenza circulation in temperate regions (23-25), which is supported by in vitro and in vivo models (24). However, weather conditions do not account for observations made in tropical areas, where circulation often peaks around the months with the highest temperature and humidity levels (25-27). Many other seasonally dependent factors influence influenza's circulation: fluctuations in host competence and immune response; changes in population behavior, such as school attendance; and the amount of time spent indoors (23). In Senegal, the rainy season is a period when most of the population is frequently forced to stay at home because of violent rainfall that disrupts normal traffic and human mobility patterns. The increase in indoor human contact and the return to school of a predominantly young population during the same season certainly contribute to the observed rainy season peak in Senegal and possibly in other countries (26). Increased indoor contact does not account for the peak seen at the start of the year, which falls in the middle of Senegal's dry season. However, school schedules and international travel might be implicated in that peak. Children returning to school increase influenza circulation. In addition, many persons travel to Europe, which usually experiences its annual influenza season at that time. Travel between Senegal and northern Europe peaks at the end of the year, when persons from Senegal return from visiting their families in Europe during the winter holidays and tourists from Europe, who favor the dry season, travel to Senegal to visit. The role of international travel in the January–March influenza peak is also suggested by the absence of influenza at the beginning of 2016, which corresponded to the period of the Ebola epidemic in West Africa that resulted in travel restrictions (Figure 3). Among the NPIs used during the COVID-19 pandemic, travel restrictions might have had a role in reshaping the biannual seasonality of influenza in Senegal into a more monophasic epidemic. However, Senegal did not have a biannual influenza epidemic profile until after the establishment of the pandemic H1N1 2009 strain in the territory in 2010 (27). That observation suggests that climate, host immunity, and behavior might not be the only factors contributing to the seasonality of influenza circulation and that the emergence of new competitive viral strains can also have a prolonged effect on periodic influenza circulation patterns. Changes Observed during COVID-19 Pandemic During 2020-2021, countries in the Southern Hemisphere that have temperate climates, such as Australia and South Africa, reported close to zero influenza circulation, and influenza remained mostly absent until 2021 (28). In the Northern Hemisphere, the influenza seasonal peak of the 2020-21 winter was also absent (29,30). Those periods showed high levels of SARS-CoV-2 circulation during the second pandemic wave of the Alpha variant and subsequent reinforcement of NPIs (31).
In Senegal, at the end of March 2020, face masks became mandatory in public places, public gatherings were forbidden, international flights were halted, and a curfew was put in place (32). Those measures were gradually alleviated at the end of July 2020, when curfew hours were shortened and international flights resumed, but Senegal maintained a high level of border control. A noticeable reduction in population mobility was recorded during March 2020-March 2021 (33).

The arrival of SARS-CoV-2 in Senegal had noticeable effects on local influenza circulation. Unlike reports from temperate regions, only the expected January-March influenza peak was affected in Senegal; the main rainy season peaks stayed unperturbed in their timing and intensity (Figure 5, panel B). That finding could be partially explained by concurrent reinforcement or alleviation of NPIs. However, influenza activity in Senegal did not seem well correlated with local NPI reinforcement. Senegal noticeably alleviated its contact restriction measures around March 2021 (34), as illustrated by the noticeable drop in its estimated COVID-19 Stringency Index (35) and the concomitant rise in the population's mobility, as estimated by Google's COVID-19 Community Mobility Reports (33). That timeline does not account for influenza's recorded activity during the study period. The abnormally low levels of influenza in the early months of 2021 and 2022 might instead be explained by the link between the expected start-of-year peak and the winter peak usually seen in the Northern Hemisphere. That start-of-year peak would be more dependent on international travel, as described, which might explain the unbalanced effect of the COVID-19 pandemic on influenza circulation in Senegal.

Deciphering the underlying causes of those shifts is challenging because the pandemic affected every level of the human ecosystem. The role of social distancing and other NPIs is undeniable because they necessarily reduce the number of potentially contaminating social encounters. However, as those measures were gradually alleviated, influenza and SARS-CoV-2 continued to circulate alternately. The observed reciprocal nature of influenza and SARS-CoV-2 circulation, which is easier to visualize in Senegal's tropical setting, calls into question the prevailing role of NPIs and travel restrictions and invites a search for other contributors. Negative viral interference, or viral competition (that is, the transient inhibitory effect that a virus can have on secondary infection by other viruses at the host level, essentially through sustained interferon pathway activation), is an old concept that has been studied and confirmed by in vitro and animal models (36,37) and has been supported by epidemiologic observations and statistical modeling (36,38,39). Although the concept is still controversial, some argue that rhinoviruses might have participated in the dissipation of the first wave of the 2009 pandemic influenza A(H1N1), for instance (40). Viral interference between SARS-CoV-2 and influenza has also been studied experimentally (41,42) and is supported by epidemiologic data (43,44). The implication of negative viral interference in influenza circulation is further supported by the very low level of co-detection noticed at the patient level: only 1 case of co-detection out of 2,538 tests performed during our study period. Cases of SARS-CoV-2 and influenza co-infection have been reported in the literature but seem to be rare (<1%) (43).
The surveillance network used in this study has certain advantages, such as wide geographic coverage and use of both community and hospital settings. However, the 4S Network exclusively provides information on symptomatic patients because of its focus on syndromic surveillance; thus, the network omits some local influenza and SARS-CoV-2 epidemiologic features. Also, the locations of sentinel sites might have underrepresented populations from remote areas, especially in the northeastern and southeastern parts of Senegal, the most sparsely populated areas of the country. Because the network provides close-to-real-time information, we were able to integrate recent data and cover more post-COVID-19 influenza seasons. Thus, we could offer a broader view of the effects of SARS-CoV-2 on influenza circulation in Senegal, which has public health implications that seem to be ongoing (5).

During March-June 2020, which corresponds to the first SARS-CoV-2 pandemic wave in Senegal, the activity of the surveillance system was drastically decreased. At that time, COVID-19 tests were not available, and local healthcare providers from sentinel sites were asked by the ministry of health to train colleagues in neighboring districts to perform nasopharyngeal sampling and conduct local case investigations. Nevertheless, routine influenza surveillance was not completely abandoned during that period, and approximately one third of the usual number of samples were sent for influenza testing. Therefore, the absence of influenza notifications during the first SARS-CoV-2 wave was not only because of a lack of testing but also because of low levels of concurrent influenza circulation, consistent with what was seen later.

Data regarding influenza and SARS-CoV-2 circulation in tropical regions are scarce. In addition, our data are limited to a small geographic area and timeframe, just 2 years of co-circulation. Distinguishing crucial and durable changes in influenza's circulation patterns requires a broader scope. Therefore, data from other tropical countries and over longer periods of time are needed to clarify the effects of the COVID-19 pandemic on influenza circulation patterns in tropical regions. Many questions remain about how influenza's seasonality will be affected in the long term. Influenza seasonality is probably intimately linked to SARS-CoV-2 and its potential for becoming a seasonal virus. In addition, SARS-CoV-2 could interfere with influenza circulation through broad population behavioral responses and host-level immunologic and virologic determinants.

In conclusion, although NPIs and travel restrictions most certainly were predominant factors in the disruption of influenza circulation in 2020 and early 2021, those factors now seem insufficient to account for the more recent observations made in Senegal and other countries. Thus, the role of viral interference in reshaping influenza seasonality should be considered and included in future virologic and epidemiologic studies.

Acknowledgments

We thank Arnaud Fontanet for his careful revision of the present paper and his constructive remarks. We thank the rest of the staff of the Epidemiology, Clinical Research and Data Science Department and the Department of Virology of the Institut Pasteur de Dakar who contributed to sample and lab logistics. We also thank every healthcare provider from the sentinel sites who participated in the sampling and care of the patients included in this study.
The surveillance system from which the data were extracted has received financial support from the US Department of Health and Human Services. The funding body had no role in the design of the study, the analysis or interpretation of data, or the writing of the manuscript. However, part of its funding was used to transport samples from study sites to the Institut Pasteur de Dakar.

About the Author

Dr. Lampros is an infectious disease specialist at the Hôpital Européen Georges-Pompidou in Paris, France. He is concurrently pursuing a master's degree in epidemiology and public health through the Institut Pasteur Network, during which he worked at the Institut Pasteur de Dakar. His research interests include COVID-19 co-infections, notably fungal superinfections, and broader pathogen interactions.
Assessment of risk factors associated with multi-drug resistant tuberculosis (MDR-TB) in Gulu regional referral hospital

Background: Multi-drug resistant tuberculosis (MDR-TB) is increasingly recognized as an emerging infectious disease of public health concern. Globally, 206,030 people were diagnosed with MDR-TB in 2019, representing a 10% increase from the 186,883 people diagnosed in 2018. In Uganda, the prevalence of MDR-TB is 4.4% among new TB cases and 17.7% among previously treated TB cases.

Aim: To determine the risk factors associated with MDR-TB among tuberculosis patients in Gulu regional referral hospital.

Material and Methods: A cross-sectional analytical study using both quantitative and qualitative methods of data collection and analysis was used. Data were collected from 384 TB patients using a data extraction form, and 6 key informant interviews were conducted. Analysis using the Pearson chi-square test was run.

Results: HIV-positive patients were 2.6 times more likely to be infected with MDR-TB than HIV-negative patients [AOR=2.6: 95% CI 1.34-5.85: P=0.006]. Previously treated TB patients were 2.8 times more likely to be infected with MDR-TB than newly diagnosed TB patients [AOR=2.8: 95% CI 1.33-5.85: P=0.006]. Defaulting TB patients were 3.1 times more likely to be infected with MDR-TB than non-defaulting TB patients [AOR=3.1: 95% CI 1.34-7.36: P=0.009].

Conclusion: There is a high prevalence of drug resistance among patients attending TB treatment at the facility.

Background of the Study

Multi-drug resistant tuberculosis (MDR-TB) is emerging as a major challenge facing tuberculosis control programs worldwide, particularly in Asia and Africa. It is a challenge not only from a public health point of view but also in the context of the global economy, especially in the absence of treatment for MDR-TB at the national program level in developing countries. Thus, MDR-TB has become a major public health problem and an obstacle to global TB control.1 MDR-TB is defined as disease caused by mycobacterial strains that are resistant to the two most effective and important anti-TB drugs: isoniazid and rifampicin.1,2 These two drugs are considered first-line drugs and are recommended for the treatment of all individuals with drug-susceptible TB disease. According to the literature, MDR-TB is mainly due to partial or incomplete treatments, previous history of TB treatment, treatment interruption, smoking [p = 0.005, 0.025, and 0.005, respectively]3 and HIV/TB co-infection [p < 0.001].4 Globally, 206,030 people were diagnosed with MDR-TB in 2019, a 10% increase from 186,883 in 2018.1 Overall, the 27 high-burden countries, among which is Uganda, account for 85% of all MDR-TB cases, and in Uganda the prevalence of MDR-TB is 4.4% among new cases and 17.7% among previously treated TB cases.1 MDR-TB cases are more difficult and costly to treat: in 2017, MDR-TB contributed an estimated 14% of TB deaths globally,1 and MDR-TB accounts for a disproportionately large share of the financial burden for national tuberculosis control programmes.
In addition to the great global threat that the disease poses, it can also lead to the deadlier extensively drug-resistant TB (XDR-TB), which is associated with high mortality. XDR-TB is caused by mycobacteria that meet the same requirements as MDR-TB but are also resistant to any fluoroquinolone and to at least 1 of 3 injectable anti-TB drugs: capreomycin, amikacin, or kanamycin.5,6 Following similar studies in Uganda,7,8 prevention of an increase in the incidence of MDR-TB is therefore crucial for the success of any national tuberculosis control programme (NTP). However, there is limited literature about the risk factors for MDR-TB in resource-constrained settings like Uganda. Thus, ascertaining the risk factors associated with MDR-TB among TB patients at Gulu regional referral hospital will be pertinent in designing interventions to reduce the incidence of MDR-TB.

Area of Study

Gulu regional referral hospital (GRRH), commonly known as Gulu Hospital, is located in Gulu, Northern Uganda. Gulu is the largest metropolitan area in Uganda's Northern Region. The hospital serves a wide catchment area that includes the following districts: Amuru, Gulu, Kitgum, Lamwo and Pader. It is affiliated with Gulu University, where it serves as a teaching hospital for the faculty of medicine.9 GRRH is about 343 km (213 miles) by road north of Kampala, Uganda's capital and largest city. GRRH is a public hospital funded by the Uganda Ministry of Health (MoH), and general care in the hospital is free. The hospital is one of the 14 regional referral hospitals in Uganda, with a capacity of 350 beds. Its Standard Unit of Output (SUO) is 674,146, the fourth highest among the 14 regional referral hospitals in Uganda.9,10

Research Questions

The following research questions were considered:
1. What is the prevalence of MDR-TB among TB patients treated at Gulu regional referral hospital between January 2015 and December 2021?
2. What are the individual factors associated with MDR-TB among TB patients at Gulu regional referral hospital?
3. What are the health facility-related factors associated with MDR-TB among TB patients at Gulu regional referral hospital?

Conceptual Framework for the Study

The conceptual framework (Figure 1) is composed of dependent, intervening and independent variables. Independent variables include individual and health facility-related factors.

Research Design

The research design was an analytical cross-sectional study combining both qualitative and quantitative methods.

Study Population

The study population comprised TB patients who received treatment at Gulu regional referral hospital in the period between January 2015 and December 2021, as well as health workers at the TB unit.

Study Unit

The unit of the study involved TB patients who received treatment at Gulu regional referral hospital in the period between January 2015 and December 2021. The health workers at the TB unit were also involved in the study.
Eligibility Criteria

The study included TB patients who received treatment at Gulu regional referral hospital between January 2015 and December 2021 and excluded TB patients who received treatment at Gulu regional referral hospital outside that period.

Determination of Sample Size

Sample size was determined using the Yamane formula for proportions:

n = N / (1 + N(e)^2)

where n = sample size, N = population size, and e = level of precision at a 95% level of confidence = 0.05. The cumulative number of TB patients ever treated at Gulu Regional Referral Hospital as of December 2021 was 9,492. Therefore, the sample size was n = 9,492 / (1 + 9,492 × 0.05^2) ≈ 384.

Data collection tool and method

Questionnaire survey: A questionnaire survey is a method of data collection containing a series of questions and providing spaces as well as options to be attempted by the respondents themselves. The questionnaire surveys used involved close-ended and open-ended questions, as well as guiding questions pertaining to the research variables and objectives. An interview guide was used alongside this for the key informants.

Pilot testing of the Instrument

A pilot study was carried out at the Gulu regional referral hospital TB clinic between 14 and 16 December 2021. A total of 41 participants were sampled for the pre-test, which, according to Otero,11 should comprise more than 10% of the sample size for the actual study. During the pilot testing of the instrument, we assessed the clarity of the instruments and their ease of use. Information obtained during the pilot testing was used to revise the study instruments.

Data Entry, Analysis and Presentation

After data collection, the raw data were systematically organized to facilitate analysis. Completed questionnaires were cross-examined for completeness and consistency. Descriptive statistics were used in data analysis. Data obtained from open-ended items in the questionnaires were categorized according to themes relevant to the study and presented in narrative form using descriptions. Data analysis employed STATA 14 statistical software, in which descriptive statistics were generated. In this study, quantitative data from the questionnaires were analysed using frequency counts and frequency tables derived from the responses to the research questions. The Pearson chi-square test was used to determine the relationship between factors associated with MDR-TB among TB patients at Gulu regional referral hospital.

Ethical Considerations

All the required ethical approvals were sought and granted as appropriate by the Gulu regional referral hospital research and ethics committee. To ensure confidentiality, the respondents had the option to either indicate or not indicate their names on the questionnaires (voluntary participation). Informed consent was sought from each respondent.

Demographic characteristics of respondents

From Table 2, more than two thirds, 67.4% (259/384), of the study participants were males; 41% (119/384) were aged 45 years and above; almost a third, 31.0% (119/384), had no formal education; and more than two thirds, 70.3% (270/384), were married.

Health facility factors associated with MDR-TB

The health facility factors were investigated using routine MDR-TB screening, treatment supervision through DOTS, and routine TB treatment monitoring. Table 5 shows the results from the analysis.
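As a quick check of the arithmetic above, a short Python sketch of Yamane's formula; the function name is ours, for illustration only:

```python
import math

def yamane_sample_size(population: int, precision: float = 0.05) -> int:
    """Yamane's formula for proportions: n = N / (1 + N * e^2), rounded up."""
    return math.ceil(population / (1 + population * precision ** 2))

print(yamane_sample_size(9492))  # -> 384, the sample size used in the study
```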
From Table 5, routine MDR-TB screening (p<0.001) and implementation of DOTS (p<0.001) are statistically associated with MDR-TB. However, routine treatment monitoring is not statistically associated with MDR-TB (p=0.427).

"…it is now by policy that all TB patients are screened for MDR-TB; this is because the cases have been increasing for the past five years, especially in our region. The clinicians have been very vigilant on MDR-TB screening…" [TB unit in-charge].

"…Implementation of DOTS improves treatment outcomes since TB patients are supervised while taking their drugs on a daily basis; however, our communities are poor and patients cannot afford to come daily to the clinic for treatment, and this has affected the prisoners very much. We just rely on the prison wardens to ensure the patient takes the drugs…" [TB-ward nurse in-charge].

"…the MDR-TB cases are admitted in the TB ward and closely monitored, but after the infectious phase they are discharged and continue taking drugs from their homes…" [TB ward nurse in-charge].

As a way of controlling for confounding, the study subjected factors that were significant at the bivariate analysis level to multivariate analysis by conducting a multivariate logistic regression analysis. The results, depicting the adjusted odds ratio (AOR) for each of the factors alongside the respective p values at a 5% level of significance, are presented in Table 6.

From Table 6, HIV-positive patients were 2.6 times more likely to be infected with MDR-TB than HIV-negative patients, and this was statistically significant [AOR=2.6: 95% CI 1.34-5.85: P=0.006]. Previously treated TB patients were 2.8 times more likely to be infected with MDR-TB than newly diagnosed TB patients, and this was statistically significant [AOR=2.8: 95% CI 1.33-5.85: P=0.006]. Patients who defaulted on TB treatment were 3.1 times more likely to be infected with MDR-TB than non-defaulting patients, and this was statistically significant [AOR=3.1: 95% CI 1.34-7.36: P=0.009]. Patients who had contact with a known MDR-TB case were 75.2 times more likely to be infected with MDR-TB than those who did not, and this was strongly statistically significant [AOR=75.2: 95% CI 21.34-264.65: P<0.001]. Patients who were not routinely screened for MDR-TB were 26.4 times more likely to be infected with MDR-TB than those who were routinely screened, and this was strongly statistically significant [AOR=26.4: 95% CI 7.74-95.85: P<0.001]. Patients who were on DOTS had 0.2 times the odds of being infected with MDR-TB compared with those who were not on DOTS, and this was strongly statistically significant [AOR=0.2: 95% CI 0.05-0.32: P<0.001]. However, factors such as gender, place of residence, employment status, TB treatment interruption, knowledge of MDR-TB, and experiencing TB drug side effects were not associated with MDR-TB.

Discussion

Prevalence of MDR-TB

The study observed that 22.9% (88/384) of the TB patients had MDR-TB. This finding is higher than the 2017 WHO anti-TB drug resistance surveillance report, which estimated that 4.1% of new and 19% of previously treated TB cases worldwide have multidrug-resistant tuberculosis.
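A minimal Python sketch of the kind of multivariate logistic regression that yields such adjusted odds ratios is shown below; this is an assumed implementation, and the file name and column names are hypothetical placeholders, not the study's actual dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data file with one row per patient and binary (0/1) columns.
df = pd.read_csv("mdr_tb_patients.csv")

predictors = ["hiv_positive", "previously_treated", "defaulted_treatment",
              "contact_with_mdr_case", "not_routinely_screened", "on_dots"]
X = sm.add_constant(df[predictors])

# Fit the logistic model; the outcome is MDR-TB status (0/1).
model = sm.Logit(df["mdr_tb"], X).fit()

# Exponentiated coefficients are the adjusted odds ratios with 95% CIs.
aor = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.DataFrame({"AOR": aor, "CI low": ci[0], "CI high": ci[1],
                    "p": model.pvalues}))
```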
However, the study findings agree with a study12 which indicated that the prevalence of MDR-TB ranges between 3.3% and 46.3%. Another study13 conducted in Ethiopia reported that 33% of TB patients had MDR-TB. The findings disagree with a study14 conducted in Mali that indicated a higher (62.62%) prevalence of MDR-TB; that study was conducted among previously treated TB patients only, whereas our study involved both new and previously treated TB patients.

The observed high prevalence of MDR-TB in the study could be due to failure to follow TB infection control measures, since the majority (90.2%) of the MDR-TB cases were contacts of known MDR-TB cases. In addition, the fact that this study was conducted among TB patients attending Gulu RRH increases the risk of their contracting MDR-TB from active cases through cross-infection resulting from poor TB infection control measures within the hospital setting. The high prevalence of MDR-TB among TB patients at Gulu RRH could also be due to failure to implement the Directly Observed Treatment Short Course (DOTS) strategy, since its absence poses a potential risk factor for acquisition of MDR-TB infection.15 It is well acknowledged that the DOTS strategy is the best weapon to dismantle the spread of MDR-TB;6 therefore, since DOTS was not being implemented across all TB patients, this could have contributed to the high (63.9%) reported number of TB patients defaulting treatment and subsequently led to the development of MDR-TB.

Individual factors associated with MDR-TB

HIV status

HIV-positive status is statistically significantly associated with MDR-TB; the study findings indicated that 26.9% of HIV-positive patients had MDR-TB. This agrees with a Ugandan study,16 which found that the prevalence of MDR-TB was 32.4% among HIV/TB co-infected patients. Another study14 reported that 40%-70% of HIV patients in Ethiopia are co-infected with MDR-TB. Several other studies, including a systematic review in Europe and Ethiopia, have reported an association between HIV and MDR-TB.4,13,17 This finding could be explained by the fact that a high prevalence of TB/HIV co-infection might lead the bacteria to resist the drugs. HIV-infected patients have rapid disease progression, and in settings where MDR-TB is prevalent, either in the general population or in a local population such as a hospital, this may subsequently lead to the rapid development of a pool of drug-resistant TB patients. Additionally, HIV-positive people are more likely to be exposed to MDR-TB patients, due either to increased hospitalizations in settings with poor infection control or to association with peers who may have MDR-TB, including in hospital settings.19 Furthermore, people with HIV infection progress from tuberculosis infection to active disease faster than immunocompetent people, and drug malabsorption in HIV-infected patients, especially of rifampicin and ethambutol, can lead to drug resistance and has been shown to lead to treatment failure.20

Previous TB treatment

Previous TB treatment is statistically significantly associated with MDR-TB, and the study findings indicated that a third (33%) of previously treated TB patients had MDR-TB. This is in line with an Ethiopian study,21 which showed that 46.3% of previously treated TB cases had MDR-TB. Another study in Mali reported that 66.3% of previously treated TB patients had MDR-TB.22
The study finding is further supported by a study18 which reported that patients with a previous history of treatment for TB had a 21 times higher risk of developing MDR-TB than patients without such a history. The probable reason for developing MDR-TB could be repeated and inappropriate ways of taking the medication, which could result in the bacteria mutating and hence developing resistance against the drugs. In order to address this problem, effective implementation of the DOTS strategy and an increase in the number of institutions equipped with drug resistance tests for early detection of primary resistance are mandatory.

Defaulted TB treatment

Defaulting on TB treatment is associated with MDR-TB; in this study, almost half (47.8%) of patients with a history of defaulting on TB treatment had MDR-TB. This agrees with a study.23 Defaulting on TB treatment results in treatment failure due to non-compliance, and this increases the chances of treatment failure and hence of developing MDR-TB. This is supported by a study24 which reported that a history of previous treatment failure was associated with an increased risk of developing MDR-TB. The possible explanation for this strong association of previous treatment failure in DR-TB groups might be inadequate compliance by patients, lack of treatment supervision, poorer access to healthcare facilities, and absence of infection control measures in clinics and hospitals.

Contact with an MDR-TB case

Contact with an MDR-TB case is associated with MDR-TB infection; the study findings indicated that 90.2% of the patients who had contact with an MDR-TB case were infected with MDR-TB. This agrees with a study25 which reported that patients who had close contact with MDR-TB cases were 3.1 times more likely to become infected with MDR-TB. The reason for this finding is that TB is transmitted via close contact with an infected individual who is actively spreading the bacteria through coughing. Once inhaled, the infection is established with or without a visible primary lung lesion; lymphatic and hematogenous spread usually follows within 3 weeks of infection.26

Health facility factors associated with MDR-TB

Implementation of DOTS

Supervision of treatment by health workers through the Directly Observed Treatment Short Course (DOTS) is associated with MDR-TB; the study findings indicated that the odds of contracting MDR-TB were lower in TB patients on DOTS. In the study, the key informants reported that the hospital failed to implement DOTS, especially for TB patients who were prisoners, and this could have accounted for more than half (63.6%) of the patients defaulting on TB treatment and subsequently developing MDR-TB. This agrees with a study27 that reported that lack of direct treatment observation by health workers was significantly associated with MDR-TB development. Other studies12,15 also reported that lack of compliance with the DOTS program was a potential risk factor for acquisition of MDR-TB infection. It is well acknowledged that the DOTS strategy is the best weapon to dismantle the spread of MDR-TB.6 The probable reason is that when health workers directly supervise TB patients taking the drugs, there is little chance of defaulting and interrupting treatment, hence a reduced likelihood of contracting MDR-TB.
Routine MDR-TB screening

Non-routine MDR-TB screening is associated with MDR-TB. This is because, in most resource-poor countries, new-case MDR-TB patients are identified only after first-line therapy fails, by which time these patients could have further disseminated the disease. This agrees with studies15,28 which reported that non-routine MDR-TB screening is associated with an increased prevalence of MDR-TB.

Conclusion

The prevalence of drug resistance among patients attending TB treatment at Gulu regional referral hospital is 22.9%, which is quite high compared with previous national reports. The study findings showed that the key drivers are mainly being HIV positive, previous TB treatment, defaulting on TB treatment, contact with an MDR-TB case, non-routine MDR-TB screening, and failure to implement DOTS. Therefore, there is a need for Gulu Regional Referral Hospital and the surrounding districts to respond promptly to the increasing MDR-TB cases by designing interventions aimed at addressing the key drivers of the disease.

Figure 1: Conceptual framework diagram for the study.
Table 1: Study variables and measurements.
Table 2: Demographic characteristics of TB patients.
Table 3: Prevalence of MDR-TB among TB patients at Gulu Regional Referral Hospital. From Table 3, 22.9% (88/384) of the TB patients had MDR-TB: 69 (17.9%) males and 19 (5.0%) females.
Table 4: Pearson chi-square results for the individual factors associated with MDR-TB (source: field data 2022; * statistically significant factor; df = degrees of freedom).
Table 5: Pearson chi-square results for the health facility factors associated with MDR-TB.
Tiza–Titre increase and enhanced immunity through an adjuvanted, recombinant herpes zoster subunit vaccine in patients with liver cirrhosis and post-liver transplantation: a study protocol for a prospective cohort study

Introduction: Shingrix, an effective adjuvanted, recombinant herpes zoster vaccine (RZV), has been available since 2018. Immunocompromised patients are known to be predisposed to vaccine failure. In-vitro testing of immunological surrogates of vaccine protection could be instrumental for monitoring vaccination success. So far, no test procedure is available for vaccine responses to RZV that could be used on a routine basis.

Methods and analysis: This is a single-centre, three-arm, parallel, longitudinal cohort study aspiring to recruit a total of 308 patients (103 with liver cirrhosis Child A/B, 103 after liver transplantation (both ≥50 years), and 102 immunocompetent patients (60-70 years)). Blood samples will be taken at seven data collection points to determine varicella zoster virus (VZV)- and glycoprotein E (gE)-specific IgG and T cell responses. The primary study outcome is to measure and compare responses after vaccination with RZV depending on the type and degree of immunosuppression, using gE-specific antibody detection assays. As a secondary outcome, first, the gE-specific CD4+ T cell responses of the three cohorts will be compared and, second, the gE-VZV antibody levels will be compared with the severity of possible vaccination reactions. The tertiary outcome is a potential association between VZV immune responses and clinical protection against shingles.

Ethics and dissemination: Ethical approval was issued on 07/11/2022 by the Ethics Committee Essen, Germany (number 22-10805-BO). Findings will be published in peer-reviewed open-access journals and presented at local, national and international conferences.

Trial registration number: German Clinical Trials Registry (number DRKS00030683).

STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ This prospective study includes an immunocompetent control group, allowing a comparison and assessment of potential immunological correlates of vaccine protection.
⇒ Successful patient recruitment is ensured by combining the appointments for titre determination with the regular appointments of patients with liver cirrhosis and post-liver transplantation patients in the outpatient liver clinic of University Hospital Essen, Germany.
⇒ A study nurse will perform home visits for patients who become immobile during the study period. This will ensure completeness of the data collection and reduce the dropout rate.
⇒ A placebo control group is not included.

INTRODUCTION

Patients after solid organ transplantation are at an increased risk of infectious diseases due to drug-induced immunosuppression.[3-5] The cause for the immune dysfunction is multifactorial: damage to the reticuloendothelial system affecting the organ's immune surveillance function4,5 and an increase in serum levels of proinflammatory cytokines due to systemic inflammation.4 Thus, cirrhosis-associated immune dysfunction adversely affects the immune system, altering both innate and acquired immunity.6 This imminent threat also applies to herpes zoster (HZ, also termed shingles), which results from the reactivation of varicella zoster virus (VZV) replication in sensory ganglion cells,7 causing a painful, dermatome-related skin inflammation with vesicles. Especially under immunosuppression, severe, possibly life-threatening complications can occur.7
The incidence rate of HZ is significantly higher in immunocompromised individuals, such as liver transplant recipients (22.7 cases/1,000 person-years (PY)),8 than in the general population (6.7 cases/1,000 PY).9 Twenty per cent of patients with HZ after liver transplantation (LTx) suffer from disseminated shingles involving several dermatomes, visceral organs or the central nervous system, postherpetic neuralgia or cranial nerve damage.8

Since December 2018, the German Standing Committee on Vaccination (Ständige Impfkommission: STIKO) has recommended the adjuvanted HZ subunit inactivated vaccine (Shingrix) for people over 60 years of age (termed standard vaccination: vaccinations recommended for the general adult population) and for patients over 50 years of age with chronic underlying diseases associated with an increased risk of shingles, such as diabetes mellitus, or immunosuppressed patients (termed indicated vaccination).10,11 The vaccination itself is standard treatment.12 Recommendations are also being made at the international level to vaccinate immunosuppressed people at an early stage: the US Centers for Disease Control and Prevention recommends vaccination from the age of 19 years,13 the European Centre for Disease Prevention and Control from the age of 18 years.14 The approved vaccination schedule recommends two vaccinations within the space of 2-6 months. Zoster vaccination is a preventive measure to protect against the clinical consequences of VZV reactivation.2

The adjuvanted HZ subunit vaccine (recombinant zoster vaccine: RZV) consists of the recombinant VZV glycoprotein E (gE) and the AS01B adjuvant. gE is part of the VZV virion and is abundantly displayed on VZV-infected cells;15 the VZV gE antigen elicits neutralising antibodies and triggers vigorous CD4+ T cell responses.16 In immunocompetent individuals, the RZV has been shown to elicit robust IgG and CD4+ T cell responses associated with strong and long-lasting immune protection.17,18 Immunogenicity of RZV was also documented in several groups of immunocompromised patients, including adults with an HIV infection,19 after renal transplantation20,21 and adult patients after autologous stem cell transplantation.22,23 Immunogenicity and potential correlates of RZV vaccine-induced immune protection are not known in patients with an increased risk of shingles due to an impaired immune function of the liver or post-LTx.

Reliable serological correlates of protection exist for some vaccinations (eg, hepatitis B),[24-26] but this is not the case for many others (eg, pertussis).24 In the pivotal studies of RZV (ZOE-50/70),17,18 a proprietary anti-gE enzyme-linked immunosorbent assay (ELISA) using a recombinant VZV gE antigen was used to determine anti-gE antibody levels. However, this ELISA was a laboratory-developed test provided by the vaccine supplier and is not generally available. Conventional ELISAs using lysates or antigen preparations of VZV-infected cells do not determine antibodies that recognise gE, the antigen administered in the RZV. In order to measure the humoral immune response boosted by vaccination with RZV, a selective detection method is needed that specifically measures anti-gE antibodies. For this purpose, we developed and validated two different gE-specific antibody detection methods at the Institute of Virology, University Hospital Freiburg: a gE-luciferase precipitation in-house assay analogous to Cohen et al27 and a gE ELISA, which is a modification of the assay described in Cunningham et al.28
Both assays will be used in the study.[30][31] The immunogenicity results of the ZOE-50/70 studies showed a 24.6-fold increase28 in median gE-specific CD4+ T cell frequency 4 weeks after the second vaccination compared with baseline, and sustained gE-specific CD4+ T cell immunity was demonstrated in the interim analysis by Boutry et al.32 Immune responses were determined using VZV peptide pools to characterise virus-specific immunodominant epitopes. An independent study confirmed that vaccination with the RZV enhances the CD4+ T cell response against gE.33

Our study aims to quantify the emergence of VZV-specific CD4+ T cells in immunocompromised patients compared with a healthy control group after vaccination with the RZV. To the best of our knowledge, direct comparative studies on humoral and cellular immune responses after vaccination with the RZV in patients with varying degrees of immunosuppression, especially in those with liver cirrhosis or after LTx, are not available. Thus, no serological parameters for the responsiveness of the immune system depending on the type and severity of immunosuppression have been investigated in these patient collectives so far. Reliable gE-specific antibody or T cell assays could provide valuable insights into the humoral and cellular immune response after vaccination and identify suitable correlates of protection which could be applied in the context of routine care of immunocompromised patients. Such data could provide the rational basis for future studies to investigate whether and when further vaccine doses might be necessary.

METHODS

Design

This single-centre, three-arm, parallel cohort study with a 1:1:1 allocation ratio is planned to take place between January 2023 and September 2028. Recruitment of primary care physicians and initial patients from the outpatient liver clinic began in January 2023. The study will include 103 patients with a secondary non-drug immunodeficiency due to chronic liver failure in the context of liver cirrhosis (stage Child A or B), 103 patients with a secondary drug-related immunodeficiency due to severe drug-related immunosuppression after LTx (both groups aged over 50 years), and 102 immunocompetent individuals who will receive the vaccination as a standard vaccination (age 60-70 years) (n=308) (see Figure 1). The study design precludes randomisation.

The vaccination is part of the daily practice routine in Germany and is standard treatment. The STIKO vaccination recommendation has been covered by statutory health insurance since March 2019.12 Since this is the daily routine of a European Medicines Agency-approved vaccine and a clinically indicated vaccination, the subjects will not receive an intervention in the strict sense. Thus, the study does not fulfil the criteria of an interventional study and is governed by the German Medical Professional Code.34,35
Study setting and characteristics of participants

This single-centre study is being conducted at the University Hospital in Essen, Germany. Each patient with chronic liver failure or with a liver transplant who meets the inclusion and exclusion criteria is currently being contacted (by phone, letter, or in person) by the medical assistant, study nurse or doctors of the outpatient clinic of the Department of Gastroenterology and Hepatology. Patients in the control group are being recruited via the Institute of General Practice, Medical Faculty, University of Duisburg-Essen, Essen, or contacted by their general practitioner (GP) and their medical assistants, all of whom are part of the educational and research practice network of the above-mentioned institute. All persons enrolled in the study must provide full written informed consent and are required to complete a baseline screening questionnaire to assess their eligibility.

Inclusion and exclusion criteria

Inclusion criteria are:

Groups 1 and 2 (liver group)
1. Age ≥50 years.
2. Patients with liver cirrhosis Child A or B or who have undergone LTx.

Immunocompetent control group
1. Patients without a chronic disease mentioned in the exclusion criteria and without drug-induced immunosuppression.
2. Age 60-70 years.

All groups
Consent given by the patient or legal representative for vaccination and blood draw.

Intervention description

gE-specific antibody detection methods

The gE-specific luciferase assay uses a luciferase immunoprecipitation system (LIPS), which has been described as particularly sensitive compared with other methods (fluorescent-antibody-to-membrane-antigen, glycoprotein ELISA, viral capsid antigen ELISA).27 In this assay, the gE ectodomain fused to Renilla luciferase is transiently expressed in HeLa cells, the lysate of which is incubated with patient serum. Immune complexes formed are precipitated using protein G Sepharose. The amount of specific anti-gE antibodies is determined by luciferase activity after washing the precipitated material. Due to the extremely sensitive luciferase detection, this procedure detects very low anti-gE antibody amounts (data not shown). This is of critical importance to include all eligible study participants and identify non-eligible individuals who need to receive varicella vaccination instead, depending on their immune status.

As the LIPS assay is not ideal for high-throughput testing and exact quantification, a gE-specific ELISA based on the method described28 was developed. For this purpose, recombinant gE was obtained from Virion/Serion, Wuerzburg, Germany. gE antibody concentrations are calculated on the basis of a reference standard curve using Varitect (Biotest Pharma, Dreieich, Germany). The cut-off for the analysis was set at 88 arbitrary units (AU; greyzone 88-155 AU).
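To make the decision rule concrete, a small Python sketch applying the stated ELISA thresholds; the function name and labels are ours, and the routing of negative and greyzone results to LIPS follows the workflow described in the next section:

```python
def classify_ge_elisa(au: float) -> str:
    """Classify a gE ELISA result in arbitrary units (AU); cut-off 88 AU,
    greyzone 88-155 AU. Negative and greyzone samples are routed to the
    more sensitive LIPS assay for confirmation."""
    if au < 88:
        return "negative (confirm with LIPS)"
    if au <= 155:
        return "greyzone (confirm with LIPS)"
    return "positive"

for value in (40.0, 120.0, 300.0):  # illustrative AU readings
    print(value, "->", classify_ge_elisa(value))
```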
Humoral immunity

All three cohorts will receive their first vaccine dose with the RZV (t0) from their co-treating GP after confirmation of a positive VZV IgG serostatus. Patients who have undergone LTx will receive the first vaccination with the RZV within 6 months after transplantation (t0), in accordance with the STIKO recommendations. Pre-vaccination and post-vaccination samples will be tested using the in-house gE ELISA and, for comparison, with a standard VZV IgG ELISA (Virion/Serion, Wuerzburg, Germany). Pre-vaccination and post-vaccination samples with greyzone or negative results will be additionally tested with the most sensitive LIPS assay to identify truly negative sera. This is instrumental for the approach, since this group of patients will be considered for receiving the live-attenuated varicella vaccine and must be excluded from the study, while individuals identified as seropositive in the LIPS assay will be included in the study. The first titre control will be performed in probands of all three study arms prior to the second vaccination (t1). Four to 6 weeks after the second vaccination, another titre control will take place (t2). Further titre checks are scheduled 6 (t3), 12 (t4), 36 (t5) and 60 (t6) months after the second vaccination, with a window of 3 weeks if necessary. For details, see Table 1 and Figure 2.

T cell immunity

VZV-specific CD4+ T cell responses will be analysed by flow cytometry using major histocompatibility complex II tetramers or VZV peptide pools. Participants will be human leucocyte antigen (HLA)-typed for class II (DRB1/DQB1/DPB1) alleles. For details, see Table 1. All blood samples will be centrifuged and frozen until analysis. Upon arrival, peripheral blood mononuclear cells (PBMCs) will be isolated by Biocoll separation using Leucosep tubes. After centrifugation and washing of the cells, PBMCs will be resuspended in fetal calf serum, aliquoted in 10% dimethyl sulfoxide and frozen for 3-4 weeks until analysis. The analyses of CD4+ responses will be performed as previously described[36-38] by using tetramers recognising defined HLA class II alleles based on HLA typing of the study participants. Additionally, PBMCs will be stimulated with VZV gE as well as control peptide mixes (JPT) to determine CD4+ T cell responses before and after vaccination (see Table 1 and Figure 2).

Other laboratory and clinical parameters

To determine the current Child-Pugh score and the Model for End-Stage Liver Disease score, the current laboratory values (liver enzymes including glutamate oxaloacetate transaminase, glutamate pyruvate transaminase, gamma-glutamyl transferase and alkaline phosphatase; coagulation (international normalised ratio); kidney retention parameters including creatinine and eGFR; as well as albumin and bilirubin in serum), sonographic signs of ascites and the hepatic encephalopathy grades39 will be recorded during the routine visits of the liver cirrhosis cohort to the outpatient clinic, which are scheduled at the titre measurement appointments t0-t6. For details, see Table 1.

Questionnaire

Furthermore, all patients included in the study will receive a non-standardised questionnaire at each of the seven data collection points (t0-t6) asking about typical symptoms of the disease and possible vaccination reactions, such as local reactions at the injection site (eg, redness, swelling, pain) or systemic reactions (eg, general weakness, fever, headache, swelling of lymph nodes, chills).17
We will categorise the severity of the vaccination reactions into mild-to-moderate and serious adverse events, as in the pivotal studies.17 For details, see the Adverse event reporting and harms section. In case of symptoms suggestive of HZ, a smear of the efflorescence will be sent to the Institute of Virology of the University Hospital Freiburg, Germany.

Adverse event reporting and harms

The risk of venous blood sampling is minimal. Pain or bruising may occur during or after the blood draw. In very rare cases, blood collection may result in inflammation of the puncture site (thrombophlebitis) or nerve injury. There is a risk of vaccine reactions with transient symptoms such as local reactions (pain at the injection site, redness and swelling) and systemic reactions (fever, fatigue, myalgia and headache) noted in the first 7 days after vaccination and reported by 84.4% of those vaccinated.17 Of those vaccinated, 1.1% had serious adverse events (hypotension with syncope, mononeuritis, neurosensory deafness and musculoskeletal chest pain) within the first 30 days.17 Suspected cases of vaccination complications will be reported to the Paul-Ehrlich-Institute, the national institution responsible for vaccine pharmacovigilance. Allergic reactions to components of the vaccine may occur in the first 15-30 min; therefore, patients will be monitored for 15-30 min after vaccination. The German Consulting Laboratory for HSV and VZV, Medical Center-University Hospital Freiburg, will carry out the titre measurements in compliance with the applicable safety guidelines.

The study investigator will rate the severity of each adverse event and report all serious and non-serious adverse events in the electronic case report form. The study investigator will also rate the underlying association between serious adverse events and the study intervention. The following termination criteria were defined: severe allergic reaction to the RZV; occurrence of the above-mentioned serious events after vaccination;17 severe (in terms of localisation or extent) HZ within 1 month after the first or second vaccination; and occurrence of hepatic complications in the sense of acute-on-chronic liver failure.

Sample size calculation

Data from the vaccine approval studies ZOE-50 and ZOE-7017,18 showed mean VZV antibody titre levels in immunocompetent individuals of 52,376.6 mIU/mL 1 month after the second vaccination, with an SD of 35,996.96 mIU/mL. In their study, Vink et al40 observed that the VZV antibody titre level in renal transplant recipients 1 month after the second vaccination was on average at least 60% lower than in immunocompetent individuals. We consider a reduced titre increase of more than 30% in either group 1 or group 2, that is, a difference of more than 15,713.0 mIU/mL, as clinically relevant. The expected variance is based on the distribution in the approval studies.17,18,40 An estimated loss to follow-up of 25% is included in our calculation.
Our case number calculation for our primary research aim therefore resulted in a required total study population of 308 subjects (n=102-103 per group) to detect a statistically significant difference of 30% (ie, 15,713.0 mIU/mL) or more in a group comparison by an analysis of variance (ANOVA) with a power of 80% and an alpha of 5%. The sample size calculation was performed with G*Power V.3.1 and reviewed by a statistician from the Institute for Medical Informatics, Biometry and Epidemiology, Essen, Germany.

Plans to promote participant retention and complete follow-up

The participants will benefit from the study, as they will receive detailed written and verbal information about vaccination against HZ. For patients with liver cirrhosis and those after LTx, the appointments for the titre check (from t2) will be combined with the regular appointments in the liver outpatient clinic of the University Hospital Essen, Germany. If patients are not able to visit the outpatient clinic in person, they will be asked to visit their GP to provide a blood sample for the titre check. For this purpose, we will contact the respective GPs by telephone and ask them to draw the blood sample. If this is not possible, a study nurse will visit the patient at home to draw the blood sample. For the patients in the control group, the research practices (n=15-25 practices) of the Institute of General Practice, Essen, and the Institute itself will carry out the titre controls. For this purpose, each practice will receive pre-labelled tubes and a stamped return envelope. The medical assistants of the practices will receive compensation of €60 per enrolled patient for the recruitment, vaccination and blood collection. The participants of the immunocompetent control group will also receive an expense allowance totalling €60. Participants who wish to withdraw from the study or who do not receive the second dose of the vaccine will be excluded from the study and asked to complete an end-of-study visit. If they do not object, the data collected to date will remain in the study database and will be included in the data analysis.

Data management

The principal investigator (Institute of General Practice, University Hospital Essen, Germany) will use an electronic case report form to record all study data. The lead physician will retain, in accordance with applicable regulations, all source documents, defined as any original document or item that provides evidence of the existence or accuracy of data or facts collected during the study. To ensure the dual control principle, all data from the questionnaires will be entered twice by two different individuals. A software tool will be used by a third person to check the two datasets (double entry) for consistency. If the data differ, the third person will determine the correct data from the questionnaires. If answers in the questionnaires do not allow a clear assignment, a consensual discussion will decide which entry will be made. All participant data will be kept in locked file cabinets to which only the principal investigator has access. Pseudonymisation of all collected data will be ensured through coding and will be traceable. All files containing names or other personal identifiers, such as consent forms, will be kept separate from the pseudonymised data. All blood samples for T cell analysis and HLA typing will be stored in the biobank of the Institute for Virology, University Hospital Essen, for later analysis.
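A hedged Python reconstruction of this power analysis is shown below. The authors used G*Power; the assumed pattern of group means (one group shifted by the clinically relevant difference) is our own illustration, chosen because it reproduces the reported group sizes, and is not stated in the protocol:

```python
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

sd = 35996.96                                        # SD from ZOE-50/70
means = np.array([52376.6, 52376.6, 52376.6 - 15713.0])  # assumed pattern
effect_f = np.std(means) / sd                        # Cohen's f, ~0.21

# Solve for the total sample size of a 3-group one-way ANOVA.
n_total = FTestAnovaPower().solve_power(effect_size=effect_f, k_groups=3,
                                        alpha=0.05, power=0.80)

# Inflate per-group size for the anticipated 25% loss to follow-up.
n_per_group = int(np.ceil(n_total / 3 / 0.75))
print(f"f={effect_f:.3f}, total n (before loss)={np.ceil(n_total):.0f}, "
      f"per group (after inflation)={n_per_group}")  # ~102-103 per group
```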
Statistical methods and data analysis plan

Statistical methods for primary and secondary outcomes

For our first research aim, the mean values of the vaccination titres at time point t2 (4-6 weeks after the second vaccination) will be compared (F-test for ANOVA with subsequent post hoc t-tests). A special focus is placed on this time point, as this is when the VZV antibody titre level was highest in the pivotal trials, so it reflects the vaccination response. Consequently, it will be analysed whether the vaccination titres at t2 differ significantly (p<0.05 for ANOVA and p<0.017 for post hoc t-tests, to adjust for multiple testing with the Bonferroni correction) among the three groups. In the future, when a titre value (cut-off value) reflecting sufficient immunity according to the assay is defined, it will be investigated whether these cut-off values were reached in all three groups at t2 (p<0.05); this will be analysed with a logistic regression or χ2 test. The mean values of the vaccination titre at the other time points t1, t3 and t4 (t5 and t6 serve as long-term controls) will be described descriptively. Subsequently, the mean titre values of the three patient groups at t3 and t4 (with t5 and t6 serving as long-term controls) will be compared in order to assess the extent of an assumed drop in titre over the further course of the study within the three groups.

The mean values of the T cell frequency results at t2 (4-6 weeks after the second vaccination) will be compared (ANOVA with subsequent post hoc t-tests). The mean T cell frequency values of the three groups at the other time points will be described descriptively, as will possible associations of titre levels with vaccination breakthroughs (occurrence of HZ) and the severity of vaccination reactions.

Since no explicit assumption on the distributions of the vaccination titres and T cell frequencies can be made in advance, we will examine these before the statistical analyses and, in case of a non-normal distribution, we will perform an appropriate transformation of titres and T cell frequency values for all non-descriptive analyses, for example, with the natural logarithm.

Possible confounders

Immune senescence can be a possible confounder. Comparability is much more difficult when comparing young patients with drug-induced immunosuppression with older patients without immunosuppression but with age-related immune senescence. Immune senescence is understood as an age-related decline in immunological competence and the associated progressive decrease in immune response at the humoral and cellular levels.41,42 Eliminating confounding due to age by matching participants in the three groups during the recruitment phase is likely not feasible, since this would restrict the liver groups to an age of 60 years and above. Therefore, we plan to control for confounding due to age statistically in our analyses and perform regression analyses as sensitivity analyses for the primary and secondary outcomes.
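A minimal sketch of how this planned primary analysis could be run in Python; this is an assumed implementation on placeholder data, not the study's analysis code:

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)
titres = {"cirrhosis": rng.lognormal(10.0, 1.0, 100),   # placeholder data
          "post_LTx":  rng.lognormal(10.0, 1.0, 100),
          "control":   rng.lognormal(10.5, 1.0, 100)}

# Natural-log transform to address the skewed titre distribution (see text).
log_titres = {k: np.log(v) for k, v in titres.items()}

f_stat, p_anova = stats.f_oneway(*log_titres.values())
print(f"ANOVA at t2: F={f_stat:.2f}, p={p_anova:.4f} (significant if < 0.05)")

if p_anova < 0.05:
    # Pairwise post hoc t-tests with the Bonferroni-adjusted threshold.
    for a, b in combinations(log_titres, 2):
        _, p = stats.ttest_ind(log_titres[a], log_titres[b])
        print(f"{a} vs {b}: p={p:.4f} (Bonferroni threshold 0.017)")
```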
Outcomes

Main outcome measures

Primary outcome measure: Comparison of the mean VZV antibody titre levels of patients with liver cirrhosis and those after LTx older than 50 years with the mean VZV antibody titre levels of immunocompetent individuals aged between 60 and 70 years from the control group.

Secondary outcome measures: 1. Comparison of the mean gE-specific CD4+ T cell frequencies in patients with liver cirrhosis and after LTx older than 50 years with those of immunocompetent patients aged 60-70 years from the control group.

Table 1 Research timeline for each participant
Physiologically‐Based Pharmacokinetic Modeling for Predicting Drug Interactions of a Combination of Olanzapine and Samidorphan A combination of the antipsychotic olanzapine and the opioid receptor antagonist samidorphan (OLZ/SAM) is intended to provide the antipsychotic efficacy of olanzapine while mitigating olanzapine‐associated weight gain. As cytochrome P450 (CYP) 1A2 and CYP3A4 are the major enzymes involved in metabolism of olanzapine and samidorphan, respectively, physiologically‐based pharmacokinetic (PBPK) modeling was applied to predict any drug‐drug interaction (DDI) potential between olanzapine and samidorphan or between OLZ/SAM and CYP3A4/CYP1A2 inhibitors/inducers. A PBPK model for OLZ/SAM was developed and validated by comparing model‐simulated data with observed clinical study data. Based on model‐based simulations, no DDI between olanzapine and samidorphan is expected when administered as OLZ/SAM. CYP3A4 inhibition is predicted to have a weak effect on samidorphan exposure and negligible effect on olanzapine exposure. CYP3A4 induction is predicted to reduce both samidorphan and olanzapine exposure. CYP1A2 inhibition or induction is predicted to increase or decrease, respectively, olanzapine exposure only. Medical and psychiatric comorbidities are common in patients with schizophrenia. 1,2 Management of these comorbid conditions may necessitate the use of additional pharmacologic therapies, exposing patients to a risk of drug-drug interactions (DDIs) between their antipsychotic treatment and concomitant medications. 3 Furthermore, the use of tobacco products also has the potential to alter plasma drug levels and affect the efficacy or safety of psychiatric medications. 4 Current guidelines for the treatment of schizophrenia endorse the use of antipsychotic medication, 5 and selection of an antipsychotic is generally based on its side effect profile. 6 The atypical antipsychotic olanzapine 7 is considered one of the most effective antipsychotics approved for the treatment of schizophrenia. 8 However, use of olanzapine has been limited by significant weight gain and metabolic effects associated with its use. 9 A combination of olanzapine and samidorphan (OLZ/SAM) is in development to provide the antipsychotic efficacy of olanzapine, while mitigating olanzapine-associated weight gain. Samidorphan is a new molecular entity that, in vitro, binds with high affinity to human μ-opioid, κ-opioid, and δ-opioid receptors, and acts as an antagonist at μ-opioid receptors and partial agonist at κ-opioid and δ-opioid receptors. 10,11 In vivo, it has been established that samidorphan functions as an opioid receptor antagonist. 12 In studies enrolling healthy adult subjects 13 and adult patients with schizophrenia, 14 the presence of samidorphan limited olanzapine-induced weight gain in those receiving OLZ/SAM vs. olanzapine alone. Olanzapine is mainly eliminated via hepatic metabolism, with 7% of the administered dose being excreted renally as unchanged olanzapine. 15,16 The primary metabolic pathways for olanzapine are direct glucuronidation via uridine 5'-diphospho-glucuronosyltransferase 1A4 and cytochrome P450 (CYP)-mediated oxidation, mainly by CYP1A2 with minor contributions from CYP2C8, CYP3A4, and CYP2D6. 15,16 Samidorphan is eliminated primarily via CYP3A4-mediated hepatic metabolism and renal excretion. 
17,18 Pharmacokinetic data from clinical studies indicated a lack of DDI between olanzapine and samidorphan when the two drugs are administered in combination, 13,18,19 consistent with their distinct metabolic pathways. The effects of CYP1A2 and CYP3A4 inhibition and induction on the pharmacokinetics of olanzapine and samidorphan have been evaluated in clinical studies (Table S1). Coadministration of olanzapine with fluvoxamine, a strong inhibitor of CYP1A2, increased olanzapine maximum plasma concentration (Cmax) by 84% and area under the plasma drug concentration-time curve from time 0 to 24 hours (AUC0-24) by 119%. 20 Conversely, clearance of olanzapine increased with agents that induce CYP1A2, including tobacco smoke and carbamazepine. 21-23 Samidorphan Cmax and AUC from time 0 to infinity (AUC∞) were increased by 12% and 50%, respectively, in the presence of strong CYP3A4 inhibition 24 and decreased by 44% and 73%, respectively, in the presence of a strong CYP3A4 inducer. 25

Given the effects of CYP1A2 and CYP3A4 modulations on the pharmacokinetics of olanzapine and samidorphan observed in the clinical studies, additional evaluation of the potential for a DDI is warranted to inform on concomitant medication use during OLZ/SAM administration. A physiologically-based pharmacokinetic (PBPK) modeling approach that incorporates a drug's physiochemical properties, human physiological variables, and population variability estimates provides a comprehensive and powerful tool for evaluation of the effects of intrinsic (e.g., age and organ dysfunction) and extrinsic (e.g., DDIs) factors on drug exposures. 26 Where clinical trial data are not available to fully address DDIs, PBPK modeling may allow prospective prediction of DDI potential, 26,27 and it is now considered an acceptable time-saving and resource-saving alternative to clinical studies. 27,28 Therefore, PBPK modeling was used to further examine the DDI potential of OLZ/SAM, using a stepwise, workflow approach. 28

The objectives of the current analysis were: (i) to develop PBPK models for olanzapine and samidorphan using in vitro data and in vivo clinical pharmacokinetics data; (ii) to simulate, using those models, the pharmacokinetic profiles of olanzapine and samidorphan after single-dose or multiple-dose administration of each drug alone or in combination as OLZ/SAM; (iii) to verify, using observed data from clinical studies, the model-predicted effect of CYP1A2 and CYP3A4 inhibitors/inducers on the respective pharmacokinetic profiles of olanzapine and samidorphan; and (iv) to use the verified PBPK models to predict the effect of coadministration of OLZ/SAM with CYP3A4 and CYP1A2 modulators (including smoking) on the exposures of olanzapine and samidorphan.

Model development

Separate PBPK models were constructed for olanzapine and samidorphan in the Simcyp version 16 Simulator (Certara, Princeton, NJ) and refined by leveraging available in vitro data and in vivo clinical data. Published physicochemical properties, plasma protein binding, in vitro disposition and metabolism profiles of olanzapine obtained from literature searches, and permeability data from an in vitro study in the human Caco-2 cell system were used to build a PBPK model for olanzapine (Table 1). 15,16,21,29 Physicochemical parameters and parameters from in vitro absorption, distribution, metabolism, and excretion studies, and from a clinical mass balance study with samidorphan, were used to build a PBPK model for samidorphan (Table 1).
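The Simcyp models described here cannot be reproduced outside the Simulator, but a generic one-compartment model with first-order absorption conveys what a simulated concentration-time profile looks like; every parameter value below is an illustrative placeholder, not a published olanzapine or samidorphan parameter.

```python
# Illustrative one-compartment profile with first-order absorption; parameter
# values are placeholders, not the published olanzapine/samidorphan values.
import numpy as np

def conc(t, dose_mg, F, ka, ke, V_L):
    """C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)), in mg/L."""
    return F * dose_mg * ka / (V_L * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0, 48, 97)                                   # hours
c = conc(t, dose_mg=10, F=0.6, ka=0.5, ke=0.02, V_L=1000)     # assumed values
print(f"Cmax ≈ {1000 * c.max():.1f} ng/mL at t ≈ {t[c.argmax()]:.1f} h")
```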
A minimal PBPK model, which includes a single adjusting compartment that combines all tissues except the intestine, liver, and portal vein (Figure S1a), was used for olanzapine, and a full PBPK model, formed by inclusion of additional tissues such as adipose, brain, bone, heart, lung, muscle, and skin (Figure S1b), was used for samidorphan, as the selected models described the disposition of each drug with reasonable accuracy when compared with observed clinical data. Application of the full PBPK model for samidorphan led to improved recovery of the observed half-life.

MODEL VALIDATION

The PBPK models were used to simulate concentration-time (C-T) profiles of olanzapine and samidorphan in a virtual North European Caucasian population based on default Simcyp parameter values. 30 Proportions of poor metabolizer phenotypes for relevant CYP enzymes and CYP1A2 abundance values for nonsmokers (52 pmol/mg microsomal protein) and smokers (94 pmol/mg) were obtained from published sources. 30,31 The PBPK models for olanzapine and samidorphan were first developed and verified using observed data from clinical studies in which olanzapine or samidorphan was administered alone. 13,20 The models were then combined to represent administration of olanzapine and samidorphan in combination as OLZ/SAM in virtual trial simulations, to ensure the same virtual individual was administered olanzapine and samidorphan together as in clinical studies with OLZ/SAM. 13,18,19,25 Specifically, the PBPK models were verified by comparing simulated C-T profiles and pharmacokinetic parameters, including Cmax and AUC, for olanzapine and/or samidorphan with observed data obtained from clinical studies. 13,18-20,24,25 Virtual trials for each comparison were generated to match the population demographics (i.e., age and sex) and treatment characteristics of the study providing the observed data, as described in Table 2.

MODEL APPLICATION

The verified OLZ/SAM PBPK model was applied to predict changes in exposure of olanzapine and samidorphan following coadministration of therapeutic doses of orally administered OLZ/SAM with CYP3A4 and CYP1A2 modulators. For each simulation, 10 virtual trials of 24 healthy subjects (50% women; 19-49 years of age) were generated. Change in exposure of olanzapine and samidorphan was predicted after administration of OLZ/SAM 10/10 (10 mg olanzapine and 10 mg samidorphan) or 20/10 (20 mg olanzapine and 10 mg samidorphan).

Model validation

Simulated plasma concentrations of olanzapine following multiple oral doses of olanzapine (10 mg) administered alone and in combination with samidorphan (5 mg; Figure 1a) were superimposed, as were simulated plasma concentrations of samidorphan following multiple oral doses of samidorphan (5 mg) alone and in combination with olanzapine (10 mg; Figure 1b), indicating there is no pharmacokinetic interaction between olanzapine and samidorphan when the two drugs were administered in combination. Simulated plasma concentrations and pharmacokinetic parameters were also in good agreement with observed data 13 (Figure 1; Table S2). Simulated plasma C-T profiles of olanzapine and samidorphan following a single oral dose of OLZ/SAM in healthy subjects were consistent with observed data 18 (Figure S2), as were simulated C-T profiles of olanzapine and samidorphan following multiple once-daily oral doses of OLZ/SAM 10/10 in patients with schizophrenia 19 (Figure 2).
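Before the inhibitor/inducer results below, note that the direction and rough size of these enzyme-mediated DDIs can be anticipated with the standard static relationship between the fraction of clearance through an enzyme (fm) and the victim AUC ratio. The fm values in this sketch are assumptions back-calculated from the clinical ratios cited in this paper, not inputs taken from the authors' Simcyp models.

```python
# Static DDI model: victim AUC ratio when one enzyme's activity is scaled.
# fm values below are back-calculated assumptions, not the authors' inputs.
def auc_ratio(fm: float, activity: float) -> float:
    """activity = residual enzyme activity (0 = full inhibition, >1 = induction)."""
    return 1.0 / (fm * activity + (1.0 - fm))

fm_sam_3a4 = 0.35   # implied by the ~1.5x AUC seen under strong CYP3A4 inhibition
fm_olz_3a4 = 0.08   # consistent with the <10% CYP3A4 contribution cited later
print(auc_ratio(fm_sam_3a4, 0.0))   # ~1.54x samidorphan AUC, strong inhibitor
print(auc_ratio(fm_olz_3a4, 0.0))   # ~1.09x olanzapine AUC (negligible)
print(auc_ratio(fm_sam_3a4, 10.0))  # ~0.24x under strong induction (10x activity)
```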
Simulated C-T profiles of samidorphan in the absence and presence of the strong CYP3A4 inhibitor itraconazole are consistent with the observed data (Figure 3). The model-predicted 58% increase in samidorphan AUC in the presence of itraconazole agrees well with the observed 50% increase after sublingual administration of samidorphan 24 (Table 3). Simulated C-T profiles of olanzapine and samidorphan following a single oral dose of OLZ/SAM in the absence and presence of the strong CYP3A4 inducer rifampin are consistent with observed data 25 (Figure 4). Using a maximum fold induction (Indmax) value of 29.9 for rifampin, 32 the model predicted 43% and 74% reductions in olanzapine and samidorphan AUC, respectively, in the presence of rifampin, which agreed well with the observed 48% and 73% reductions, respectively (Table 3). The model-predicted 92% increase in olanzapine AUC in the presence of the strong CYP1A2 inhibitor fluvoxamine was consistent with the observed increase of 119% (Table 3). 20

MODEL APPLICATION

The validated PBPK model for OLZ/SAM was applied to predict the change in olanzapine and samidorphan exposure following coadministration of OLZ/SAM with itraconazole (a strong CYP3A4 inhibitor), fluconazole (a moderate CYP3A4 inhibitor), and efavirenz (a moderate CYP3A4 inducer). The model predicted a 60% and 39% increase in samidorphan AUC in the presence of the strong CYP3A4 inhibitor itraconazole (200 mg/day) and the moderate CYP3A4 inhibitor fluconazole (200 mg/day), respectively, with minimal change in olanzapine AUC (Table 3). The model predicted a 14% reduction in olanzapine AUC and a 41% reduction in samidorphan AUC in the presence of the moderate CYP3A4 inducer efavirenz (600 mg/day) (Table 3). The effects of CYP1A2 inhibition and of smoking, which is associated with induction of CYP1A2, were assessed for olanzapine only, as samidorphan is not metabolized by CYP1A2. The model predicted an increase in olanzapine exposure when OLZ/SAM 10/10 or 20/10 was administered in the presence of the strong CYP1A2 inhibitor fluvoxamine (100 mg/day; Table 3). The predicted effect of fluvoxamine coadministration on olanzapine exposure was greater in smokers than in nonsmokers and was independent of olanzapine dose (Table 3). A reduction in olanzapine exposure after OLZ/SAM administration was predicted in smokers, assuming a CYP1A2 abundance of 94 pmol/mg protein. An even greater reduction in olanzapine exposure was predicted in heavy smokers, assuming a CYP1A2 abundance of 156 pmol/mg protein (Table 3).

DISCUSSION

Separate PBPK models for olanzapine and samidorphan were developed and verified using observed data from clinical studies in which olanzapine or samidorphan was administered alone. 13,20 First-order absorption models were able to capture the absorption profiles of both olanzapine and samidorphan adequately. Furthermore, given that the change in Cmax was relatively small (12%) in the clinical study involving samidorphan and the strong CYP3A4 inhibitor itraconazole, 24 first-pass metabolism was not considered a major contributor to DDI liability here. The two models were combined to represent administration of olanzapine and samidorphan in combination as OLZ/SAM in virtual trial simulations to ensure the same virtual individual, with the exact same system/physiological parameters, was administered olanzapine and samidorphan together at the same time in the same biosystem as in clinical studies with OLZ/SAM.
13,18,19,25 Model-simulated C-T profiles well described the observed data in multiple clinical studies for model validation. Simulated exposures were within 1.25-fold of observed data for both olanzapine and samidorphan. PBPK modeling predicted no interaction between olanzapine and samidorphan when administered in combination, which is consistent with the distinct metabolic pathways of olanzapine and samidorphan and observed data from clinical studies. 13,18,19

The weak effect of CYP3A4 inhibition on samidorphan pharmacokinetics was accurately predicted by the PBPK model. The predicted 58% increase in samidorphan AUC in the presence of itraconazole (a strong CYP3A4 inhibitor) aligned well with the 50% increase observed in a clinical DDI study with buprenorphine/samidorphan (BUP/SAM) and itraconazole. 24 Although the observed data were based on sublingual administration of samidorphan as a component of BUP/SAM, because the bioavailability of samidorphan is similar with sublingual and oral administration, 33 the effect of itraconazole on samidorphan pharmacokinetics after oral administration is expected to be the same as that after sublingual administration. The negligible effect of itraconazole on olanzapine exposure predicted by the PBPK model was expected, as the contribution of CYP3A4 to the overall clearance of olanzapine is <10%, 34,35 and supported by the fact that no significant CYP3A4-mediated DDIs with olanzapine have been reported. 21 Coadministration with a moderate CYP3A4 inhibitor (fluconazole) is predicted to result in a 39% increase in samidorphan AUC and a negligible change in olanzapine exposure.

The reduction in both olanzapine and samidorphan exposure when coadministered with the strong CYP3A4 inducer rifampin was well predicted by PBPK modeling. Although the ratios (presence/absence of rifampin) of Cmax and AUC values predicted using the default Simcyp Indmax value of 16 for rifampin (olanzapine: 0.88 and 0.72; samidorphan: 0.59 and 0.41, respectively) were generally consistent with observed values (olanzapine: 0.89 and 0.52; samidorphan: 0.56 and 0.27, respectively), published DDI modeling studies have indicated that models perform well with a more potent rifampin induction potential, using maximum fold values of 37.1 and 38.0, respectively. 36,37 An intermediate Indmax value of 29.9, based on mRNA data from an in vitro study using human hepatocytes, 32 yielded ratios of Cmax and AUC values (olanzapine: 0.78 and 0.57; samidorphan: 0.42 and 0.26, respectively) more consistent with observed values compared with the default Indmax. The value of 29.9 has been applied in simulations involving other drugs with CYP3A4 contributions similar to that of samidorphan (including zolpidem and alprazolam). Predicted changes in exposure of both drugs as a consequence of coadministration of rifampin led to good recovery of the observed data in each case (data not shown). Coadministration with a moderate CYP3A4 inducer is predicted to result in a 14% reduction in olanzapine AUC and a 41% reduction in samidorphan AUC.

Inhibition and induction of CYP1A2 are predicted to affect exposure of olanzapine only in individuals receiving OLZ/SAM, as samidorphan is not metabolized by CYP1A2. The OLZ/SAM PBPK model predicted increases of 60% and 102% in olanzapine AUC in nonsmokers and smokers, respectively, when OLZ/SAM was coadministered with a strong CYP1A2 inhibitor.
The predicted increase in olanzapine AUC in the presence of strong CYP1A2 inhibition is consistent with that reported in a previous DDI study, where coadministration of fluvoxamine (≤100 mg/day) with olanzapine (2.5-7.5 mg/day) once daily for 8 days in 10 male smokers resulted in a 119% increase in olanzapine AUC. 20 A smaller 30% to 55% increase in AUC was observed in a second study, in which male smokers (N = 10) were administered olanzapine 10 mg in the absence and presence of fluvoxamine 50 or 100 mg/day. 38

When taking OLZ/SAM, smoking is predicted to decrease olanzapine AUC by 23%, assuming a hepatic CYP1A2 abundance of 94 pmol/mg protein in smokers vs. 52 pmol/mg in nonsmokers. 31 Assuming an increase in CYP1A2 abundance to 156 pmol/mg protein in heavy smokers, a 42% decrease in olanzapine AUC is predicted in heavy smokers compared with nonsmokers. Again, this predicted effect is consistent with results obtained for olanzapine administered alone. Clearance of olanzapine was increased 55% in smokers vs. nonsmokers in a population pharmacokinetic analysis (N = 523). 39 In a pharmacokinetics study (N = 49), AUC for olanzapine was 15% lower in smokers than in nonsmokers, whereas clearance was determined to be 37% to 48% lower in nonsmokers than in smokers. 21 The predicted increase in clearance with smoking is expected, given that smoking is associated with a significantly reduced olanzapine plasma concentration-to-dose ratio in patients with schizophrenia, 40 and case reports suggest that smoking cessation can result in clinically significant changes in symptoms and tolerability requiring dosage reductions of 30% to 40% in patients stabilized on olanzapine. 41

A PBPK modeling approach can be used to provide information regarding drug pharmacokinetics and predicted effects of intrinsic and extrinsic factors on absorption, distribution, metabolism, and excretion. 26 Model predictions may be valuable for examining potential differences in pharmacokinetics between patient populations and effects of organ impairment, selecting appropriate dosing regimens for clinical trials, and understanding the potential for DDIs where clinical data are sparse. 26 Possible effects of DDIs can be difficult to predict, particularly for drugs that are susceptible to both induction and inhibition. For OLZ/SAM, each of the two component drugs is known to be subject to DDIs via one or more enzyme pathways when administered alone, 20-22,25 but clinical data assessing DDI potential of OLZ/SAM are currently limited to a single study. 25 In this analysis, PBPK modeling was used to elucidate DDI potential for olanzapine and samidorphan when administered as OLZ/SAM in lieu of clinical studies.

CONCLUSIONS

The validated OLZ/SAM PBPK model serves as a valuable tool for elucidating the potential for DDIs with coadministration of OLZ/SAM and CYP3A4 or CYP1A2 modulators in lieu of clinical studies. PBPK modeling indicated no pharmacokinetic interaction between olanzapine and samidorphan when administered in combination as OLZ/SAM. Strong inhibition of CYP1A2 is predicted to increase exposure of olanzapine, and induction of CYP1A2 (associated with smoking) is predicted to reduce exposure of olanzapine. Coadministration with moderate and potent CYP3A4 inhibitors is predicted to have a weak effect on samidorphan exposure and a negligible effect on olanzapine exposure. Moderate to strong CYP3A4 inducers are predicted to reduce samidorphan exposure and, to a lesser extent, olanzapine exposure.

Supporting Information
Supplementary information accompanies this paper on the CPT: Pharmacometrics & Systems Pharmacology website (www.psp-journal.com).

Supplemental Information: Tables S1-S2; Figures S1-S2.

Funding: Alkermes, Inc. is a pharmaceutical company developing OLZ/SAM, a combination product of olanzapine and samidorphan for the treatment of schizophrenia and bipolar I disorder, and has funded this study.

Conflicts of Interest: L.S. and L.vM. are employees of Alkermes, Inc. K.R.Y. is an employee of Certara UK Limited, Simcyp Division.
Elastic Bloom Filter: Deletable and Expandable Filter Using Elastic Fingerprints

The Bloom filter, answering whether an item is in a set, has achieved great success in various fields, including networking, databases, and bioinformatics. However, the Bloom filter has two main shortcomings: no support of item deletion and no support of expansion. Existing solutions either support deletion at the cost of using additional memory, or support expansion at the cost of increasing the false positive rate and decreasing the query speed. Unlike existing solutions, we propose the Elastic Bloom filter (EBF) to address the two shortcomings simultaneously. Importantly, when EBF expands, the false positives decrease. Our key technique is Elastic Fingerprints, which dynamically absorb and release bits during compression and expansion. To support deletion, EBF can first delete the corresponding fingerprint and then update the corresponding bit in the Bloom filter. To support expansion, Elastic Fingerprints release bits and insert them into the Bloom filter. Our experimental results show that the Elastic Bloom filter significantly outperforms existing works.

INTRODUCTION

The Bloom filter [7], a highly compact probabilistic representation of a set, is used to answer whether a particular item is in the set. A standard Bloom filter is a bit array along with k hash functions. The hash functions are used to map each item into k positions/bits in the array, which we call the k mapped bits. An element is inserted by setting its k mapped bits to 1, while querying the presence of an element is done by checking whether all the k mapped bits are set to 1. The main advantages of Bloom filters are: (i) a small memory footprint, (ii) fast and constant-time queries and updates, and (iii) no false negatives with a small and tunable false positive rate. Due to these advantages, the Bloom filter and its variants have been widely used in a great many fields, such as real-time systems [24], computer architectures [21], neural networks [17], IP lookups [10], [18], [23], web caching [13], Internet measurement [11], packet classification [38], regular expression matching [9], multicast [32], queue management [8], routing [31], [35], P2P networks [20], [30], data center networks [39], cloud computing [26], and more [16], [28], [37].

However, the Bloom filter suffers from two main drawbacks: it is neither deletable nor expandable. More specifically, 1) once an item has been inserted into the standard Bloom filter, the item cannot be directly deleted; 2) once a set has been inserted into the Bloom filter, it is impossible to construct a larger Bloom filter representing the same set without extra information. In many applications, the Bloom filter is the best choice. However, these applications inevitably suffer from the aforementioned drawbacks, as discussed in the following examples:

• Black lists. The Bloom filter is used to store a black list to prevent threats such as DDoS attacks [27] and amplification attacks [33]. However, when an IP address is queried, even if the IP address is legal, the result might be a false positive and the IP address is regarded as malicious. To solve the problem, a white list can be set up to store those friendly addresses.
In practice, the elements of both lists might change, so the lists should be deletable and adjustable.

• MAC address lookup. Each switch has a MAC address table. The table has a large number of entries and each entry can be considered as a key-value pair. The key is the destination MAC address and the value is the outgoing port. One Bloom filter is built for all MAC addresses with the same outgoing port [39]. The size of the MAC address table changes dynamically and can grow substantially.

• Multicast routing. Multicast routing is the routing of IP multicast datagrams. The Bloom filter is used to compress a multicast forwarding table for each outgoing interface in the switch [29]. The Bloom filter in each outgoing interface is used to determine whether to forward an incoming packet or not. Members of an interface join and leave the forwarding tables, so the size of a forwarding table changes dynamically, is unknown in advance, and can be quite large.

• Longest prefix matching (LPM). LPM is part of the rule of Internet Protocol (IP) routing: it finds the entry sharing the largest number of leading bits with a given address. For LPM, the Bloom filter is used [10] to determine the length of the matching prefix. The candidates of the LPM, the entries containing addresses, change dynamically, and the number of candidates can be enormous.

The above examples require the Bloom filter to support item deletion and expansion. In summary, when Bloom filters are used for representing dynamic sets, deletion and expansion are often indispensable. Some existing works focus on addressing one of the above two shortcomings of Bloom filters, but none of them can address both shortcomings at the same time without sacrificing query efficiency. To support item deletion, the counting Bloom filter (CBF) [13] stores counters instead of bits in the Bloom filter. A common approach is to maintain a counting Bloom filter in slow memory to support the deletion of members and a Bloom filter in fast memory to support fast queries. But the second shortcoming, expansion, cannot be addressed by using a CBF. To support expansion, Scalable Bloom filters [36] and Dynamic Bloom filters [14], [34] repeatedly append a new empty Bloom filter at the end of the old structure. The optimized Dynamic Bloom filter [15] replaces each Bloom filter with a counting Bloom filter to support item deletion. The overhead of these solutions is that the query time and the false positive rate keep increasing with the number of new Bloom filters. In contrast, the design goal of this paper is to address the two shortcomings of the Bloom filter simultaneously with neither additional query overhead nor additional accuracy loss.

In this paper, we propose a novel data structure, namely the Elastic Bloom filter (EBF), that overcomes the above two shortcomings at the same time. EBF is an extension of the standard Bloom filter that supports both item deletion and expansion. Contrary to the Dynamic Bloom filters, whose expansion increases the false positive rate, the expansion of the EBF can significantly reduce its false positive rate. The key technique of the EBF is called Elastic Fingerprints. EBF consists of a standard Bloom filter and an elastic fingerprint array. To expand the Bloom filter, we first cut one bit from each fingerprint, and appropriately combine the Bloom filter and the cut bits into a larger Bloom filter.
To compress the Bloom filter, some bits of the Bloom filter have to be dropped, and we append these dropped bits to the elastic fingerprints. In other words, EBF dynamically moves bits between the fingerprint array and the Bloom filter during expansion and compression. For deletion, we first delete fingerprints from the fingerprint array, and then determine which bits in the Bloom filter should be cleared. Further, we propose two optimization methods, namely lazy update and bucket tree, in Section 3.3. We have open-sourced all code at GitHub [2].

Standard Bloom filter

The Bloom filter is a highly compact probabilistic representation of a set, and it answers whether a queried item is in the set. Let U = {e_1, ..., e_n} be the universal set of items, and S ⊂ U a subset of U. A Bloom filter of S is used to test whether a given item e ∈ U belongs to S. In practice, the set S is the result of successive insertions and deletions of items. In that context, an item e is in S if it has been inserted in S and has not been removed from it. A Bloom filter is a bit array A of size m along with k hash functions (h_i)_{i=1,...,k}. The hash functions map the items of U to bits of A: ∀i ∈ [1, k], ∀e ∈ U, h_i(e) ∈ [0, m). An empty Bloom filter has all its bits set to 0.

Insertion: To insert an item e into the filter, we compute all its hashes with the hash functions and set the mapped bits to 1: ∀i ∈ [1, k], A[h_i(e)] ← 1.

Query: To check whether an item e has been inserted into S, we check whether all its mapped bits in A are 1: e is reported present iff A[h_1(e)] = ... = A[h_k(e)] = 1. The queries do not cause false negatives, but some false positives may occur. For a Bloom filter of size m with k hash functions and in which n different items have been inserted, the false positive rate (FPR) [7] is

FPR = (1 − (1 − 1/m)^{kn})^k ≈ (1 − e^{−kn/m})^k.

Note that the standard Bloom filter does not support item deletion and its size never changes.

Related Work

The Bloom filter [7], [22] is widely used because of its three main advantages: 1) it is memory efficient; 2) it is fast to query and update; and 3) it has no false negatives. Many variants of the Bloom filter have been designed to improve its performance. In this paper, we only focus on the variants concerning expansion and deletion, and analyze them from the perspective of the three main advantages above. In order to support item deletion, the Counting Bloom filter (CBF) [13] replaces each bit with a counter. When inserting an item, its mapped counters are increased. Similarly, its mapped counters are decreased when removing an item. This additional feature comes at an important memory overhead. A. Pagh, R. Pagh & S. S. Rao [25] and B. Fan et al. [12] also implemented item deletion at a lower cost. Unfortunately, those approaches do not allow expansion: the number of inserted items must be known in advance. In order to support dynamic sets, Scalable Bloom filters [36] and Dynamic Bloom filters (DBF) [14], [34] provide an adaptable size to the Bloom filter. Once a Bloom filter is considered to be full, i.e., its estimated false positive rate is higher than a given threshold, a new Bloom filter is appended. While DBF adds similar Bloom filters, Scalable Bloom filters append filters with larger sizes to have better control over the false positive rate. Unfortunately, this results in higher response time for queries, as items must be queried successively in several Bloom filters. Also, DBF and Scalable Bloom filters suffer from the same shortcoming as the standard Bloom filter: they do not support item deletion.
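As a point of reference before EBF itself, here is a minimal Python sketch of the standard Bloom filter just described; deriving the k indices by double hashing is a common implementation convenience assumed here, not part of the definition above.

```python
# Minimal standard Bloom filter as defined above; the k indices are derived
# with a double-hashing scheme for convenience (an implementation choice).
import hashlib
import math

class BloomFilter:
    def __init__(self, m: int, k: int):
        self.m, self.k = m, k
        self.bits = bytearray(m)  # one byte per bit, for clarity

    def _indices(self, item):
        d = hashlib.blake2b(repr(item).encode(), digest_size=16).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def insert(self, item):
        for idx in self._indices(item):
            self.bits[idx] = 1

    def query(self, item) -> bool:  # no false negatives, rare false positives
        return all(self.bits[idx] for idx in self._indices(item))

def fpr(m: int, k: int, n: int) -> float:
    """(1 - e^{-kn/m})^k, the classic false-positive estimate."""
    return (1.0 - math.exp(-k * n / m)) ** k

bf = BloomFilter(m=1 << 16, k=4)
bf.insert("10.0.0.1")
print(bf.query("10.0.0.1"), bf.query("10.0.0.2"))  # True, (almost surely) False
print(f"estimated FPR after 5000 inserts: {fpr(1 << 16, 4, 5000):.4f}")
```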
The optimized Dynamic Bloom filter [15] and its variant Par-BF [19] replace each Bloom filter of DBF with a CBF to support item deletion. Therefore, they can support both item deletion and expansion. However, they still suffer from linearly increasing query time and false positives, which is hardly acceptable for query-oriented scenarios, including the four scenarios introduced in Section 1.

Structure

The Elastic Bloom filter consists of two parts: a standard Bloom filter and a cooperative bucket array. The Bloom filter is stored in the fast memory to provide fast queries, while the cooperative bucket array, used for expansion, is stored in the slow memory. The standard Bloom filter is an array of m bits, and the cooperative bucket array contains m buckets. Every bit in the Bloom filter is associated with the bucket with the same index in the bucket array. We use k independent hash functions. Each hash function hashes an item into a w-bit hash number. We use uniform random hash functions: the output numbers are uniformly distributed in the range [0, 2^w). The hash number is separated into two distinct parts by dividing it by m: the quotient and the remainder. The index of an item in the Bloom filter is the remainder, while the quotient, named the Elastic Fingerprint, is stored in the cooperative bucket associated with the index. Each bucket can store D Elastic Fingerprints.

Notations: Let A denote the bit array and B denote the cooperative bucket array. They both have size m. The k hash functions (h_i)_{i∈[1,k]} hash item e into pairs (Fp, Index)_i, where Fp is the fingerprint in the range [0, 2^w/m) and Index is the index in the range [0, m).

Operations

We now introduce the operations of the EBF: item insertion, item query, item deletion, the expansion of the EBF, and the compression.

Insertion: To insert an item e into the EBF, we first compute the hash numbers using the k hash functions, obtaining k pairs (Fp, Index)_i. We set all the mapped bits of A to 1 and insert the fingerprints into the associated buckets in B: ∀i ∈ [1, k], A[Index_i] ← 1 and B[Index_i] ← B[Index_i] ∪ {Fp_i}. The updates of A and B can be done independently. Fig. 1 shows the insertion of an item e into the EBF. If the insertion operation inserts an item that has already been inserted, we should detect this in order to avoid duplicate fingerprints. Before we set all the mapped bits of A to 1, we check whether any mapped bit is 0. If so, we directly insert the fingerprints into the buckets, because the item cannot have been inserted before. Otherwise, we additionally check whether the same fingerprint appears in every associated bucket. If so, we regard the item as a duplicate and do not insert its fingerprints. Otherwise, we just insert the fingerprints into the buckets.

Query: To query whether an item e is in the EBF, we compute all the hash indexes h_i(e).Index and return true if all the mapped bits in A are set to 1, false otherwise. There is no need to access the slow memory, so we can achieve a high query speed.

Deletion: To remove an item, we compute its hash indexes and fingerprints, and then delete its fingerprints from the mapped buckets. If a bucket becomes empty due to the deletion, its corresponding bit in the Bloom filter is set to 0. Note that only the deletion of a previously inserted item is allowed.
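To make the quotient/remainder mechanics concrete, here is a compact sketch of the operations above together with the expansion step described in the next subsection. The names are illustrative; the duplicate check, the bucket capacity D, lazy update and bucket trees are all omitted, and this is not the authors' open-sourced C++ implementation.

```python
# Sketch of EBF insert/query/delete plus the expansion step described in the
# following "Expansion" paragraph; simplifications noted in the lead-in apply.
import hashlib

W = 32  # hash width in bits

def _hash(item, seed):
    h = hashlib.blake2b(repr((seed, item)).encode(), digest_size=4)
    return int.from_bytes(h.digest(), "big")  # uniform in [0, 2^W)

class ElasticBloomFilter:
    def __init__(self, m, k=4):
        self.m, self.k = m, k
        self.bits = [0] * m                     # fast-memory Bloom filter A
        self.buckets = [[] for _ in range(m)]   # slow-memory fingerprints B

    def _pairs(self, item):
        for seed in range(self.k):
            h = _hash(item, seed)
            yield h // self.m, h % self.m       # (fingerprint, index)

    def insert(self, item):
        for fp, idx in self._pairs(item):
            self.bits[idx] = 1
            self.buckets[idx].append(fp)

    def query(self, item):                      # fast path: bit array only
        return all(self.bits[idx] for _, idx in self._pairs(item))

    def delete(self, item):                     # only previously inserted items
        for fp, idx in self._pairs(item):
            self.buckets[idx].remove(fp)
            if not self.buckets[idx]:
                self.bits[idx] = 0

    def expand(self):                           # double m; low fp bit picks side
        m = self.m
        new_buckets = [[] for _ in range(2 * m)]
        for idx in range(m):
            for fp in self.buckets[idx]:
                new_idx = idx + m if (fp & 1) else idx
                new_buckets[new_idx].append(fp >> 1)
        self.buckets, self.m = new_buckets, 2 * m
        self.bits = [1 if b else 0 for b in new_buckets]
```

Because the hash number equals Fp·m + Index, splitting on the lowest fingerprint bit during expand() reproduces exactly the remainder modulo 2m, so queries keep working on the enlarged filter without rehashing any item.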
Expansion: The Elastic Bloom filter expands automatically (Case 1) when inserting a fingerprint into a full bucket, or (Case 2) when the ratio of 1 bits in the Bloom filter, namely the Set Bit Rate (SBR), is above a threshold Ω; in other words, when the False Positive Rate (FPR) is above a threshold Δ = Ω^k. To expand our EBF, we double the allocated memory space of both the Bloom filter in the fast memory and the cooperative bucket array in the slow memory. The new memory is appended to the existing EBF. All the new bits are initialized to 0 and all the new buckets are empty. We then spread the fingerprints of the old EBF across the two parts of the new EBF. To do so, we scan the m buckets one by one to reallocate all the fingerprints. For every pair (Fp, Index), the hash number is Fp · m + Index. Now that the size is 2m after expansion, the new fingerprint should be the quotient by 2m, and the index should be the new remainder. In other words, the lowest bit of Fp determines its new position: if it is 0, the item remains in the same bucket B[Index]; otherwise, it is moved to bucket B[Index + m]. In both cases, the value of the fingerprint is halved: Fp ← ⌊Fp/2⌋. After reallocating all fingerprints, we update the Bloom filter: for a given index i ∈ [0, 2m), if B[i] is empty, we set A[i] to 0; otherwise, we set A[i] to 1. Fig. 2 shows the expansion.

Compression: The compression is the inverse process of expansion. Suppose that the EBF has m buckets, where m is an even number. For each bucket B[i], i = 0, 1, ..., m/2 − 1, we merge it with bucket B[i + m/2]. When merging the two buckets, we append one bit to each fingerprint and store the new fingerprint in the merged bucket. For the fingerprints from bucket B[i], we append a 0 bit (multiply the fingerprint by 2). For the fingerprints from bucket B[i + m/2], we append a 1 bit (multiply the fingerprint by 2 and add one). The EBF compresses automatically when the estimated Set Bit Rate (SBR) is below a threshold Ω/4 and the compression would not cause bucket overflow.

Lazy update: In network applications, the expanding operation should be done as fast as possible to minimize the loss of packets. The lazy update is an optimization that makes the expanding operation much faster. When expanding, the first thing we do is copy the old Bloom filter into the newly allocated memory. In this way, queries can be served right away. Then, instead of spreading the fingerprints across the old and new memory, we simply copy the old bucket array into the new one. In addition, we attach a sign bit to every bucket. The sign bit, which is initialized to 0, indicates whether the bucket and its corresponding bit in the Bloom filter have been updated. If a bucket has been updated (i.e., all its fingerprints reallocated and the corresponding bit in the Bloom filter renewed), the sign bit is set to 1. We update the buckets through two mechanisms in parallel: (i) a scanning function that scans each bucket and updates it, and (ii) an update-on-access function that updates buckets whenever they are accessed to add or remove an item. When all the buckets have been updated, the structure can support expansion once again.

Bucket trees: To prevent the buckets from overflowing, we reorganize the structure of the cooperative bucket array. The new structure is a bucket tree: a two-level k-ary tree whose nodes are buckets. Every bucket can store up to S fingerprints. There are m buckets in level 1.
The i-th bucket in level 1 is associated with the ⌊i/k⌋-th bucket in level 2. When inserting a fingerprint, we first attempt to insert it into the bucket in level 1. If the bucket in level 1 is full, we insert it into the associated bucket in level 2 and add log₂(k) bits at the beginning of the fingerprint to indicate which level-1 bucket the fingerprint came from. If the bucket in level 2 still overflows, we suspend the insert operation and expand our structure instantly. When expanding the structure, we try to pull the fingerprints in level 2 back to level 1. After expansion, we try to insert the item again.

Extra Operations

The main purpose of the Elastic Bloom filter is to provide deletion and expansion to the Bloom filter while still providing both fast answers and a low false positive rate. Additionally, EBF provides other operations without extra overhead: (i) union, (ii) intersection, (iii) cardinality, and (iv) accurate query.

Union of Elastic Bloom filters: Let two Elastic Bloom filters ebf_1 and ebf_2 have the same initial size and hash functions. The union of ebf_1 and ebf_2 is calculated by first expanding both filters to the maximum of their sizes. Then, their buckets (B_i)_{i∈[1,m]} are merged by keeping all the fingerprints. A bit of the standard Bloom filter A is set to 1 when its corresponding bucket is not empty.

Intersection of Elastic Bloom filters: The intersection between two Elastic Bloom filters is calculated similarly to the union; the only difference is that we take the intersection of the fingerprints while merging two buckets. Note that a fingerprint may appear several times in the buckets.

Cardinality: We can know exactly how many items have been inserted by counting the fingerprints. The insertion of every item adds k fingerprints in total, so the total number of inserted items equals the total number of stored fingerprints divided by k.

Accurate Query: The Elastic Bloom filter can provide a more accurate estimation at the expense of slower speed. The accurate query first performs the standard query: given an item e, we verify all the bits A[h_i(e).Index]. If they are all set to 1, we additionally check the presence of the fingerprint h_i(e).Fp in every corresponding bucket B[h_i(e).Index].

MATHEMATICAL ANALYSIS

In this section, we provide a mathematical analysis of the Elastic Bloom filter (EBF). First, in Section 4.1, we focus on the part of EBF in the fast memory, i.e., a standard Bloom filter. Then, in Section 4.2, we analyze the part of EBF in the slow memory, i.e., a bucket array. Finally, we summarize how to configure the parameters of EBF in Section 4.3.

The Performance of EBF in the Fast Memory

We show the properties of the fast-memory part of EBF in this section, including accuracy, space complexity, and time complexity. The data structure of EBF in the fast memory is exactly a standard Bloom filter and shares its properties. For an EBF of size m with k hash functions in which n different items have been inserted, the false positive rate is (1 − (1 − 1/m)^{kn})^k ≈ (1 − e^{−kn/m})^k. Conversely, when we want to achieve a given false positive rate δ while inserting n elements, the size of the filter should be at least m = log₂(e) · n · log₂(1/δ), and the optimal number of hash functions is k* = ln(2) · m/n = O(ln(1/δ)). Therefore, the space complexity of the fast-memory part is O(n ln(1/δ)). The time complexity of insertion/query/deletion is O(ln(1/δ)).

The Performance of EBF in the Slow Memory

We show the properties of the slow-memory part of EBF (i.e., the cooperative bucket array), including space complexity and time complexity.
For space complexity, we show that a small bucket size, for example O(ln(n ln(1/δ)) / ln ln(n ln(1/δ))), is sufficient to ensure a high probability of not triggering expansion. For time complexity, we show that applying expansion and compression operations does not increase the average time complexity, which is still O(ln(1/δ)) under our EBF algorithm. Applying Lemma 4.1, we obtain the following theorem. We can derive that p = m_i ln(1 − ω_1) / (m_{i−1} ln(1 − ω_0)), which is a constant. As m_i = 2m_{i−1} or m_i = (1/2)m_{i−1}, we have p ∈ (1/2, 5/8) ∪ (8/5, 2). Then we have (1 − p)n_i + pn_i = pn_{i−1}, and therefore n_i = p(n_{i−1} − n_i). As the time complexity of one expansion/compression is O(n ln(1/δ)), we derive that the time complexity of the i-th expansion/compression is O(ln(1/δ)(a_i − a_{i−1})), and the total time complexity of Case 1 is O(n ln(1/δ)). For Case 2, similarly, we prove that the time complexity of the expansions (Case 2) that happen in the range a_{i−1} to a_i (where the i-th expansion/compression triggered by Case 1 happens at the a_i-th insertion/deletion) is O(ln(1/δ)(a_i − a_{i−1})), and therefore the total complexity is O(n ln(1/δ)). For each insertion in the range a_{i−1} to a_i, according to Theorem 4.1, the probability that the insertion triggers bucket overflow is smaller than ε = 1/m. Applying the union bound, we obtain a bound on the time complexity of expansion (Case 2), where j denotes the number of consecutive triggers of expansion (Case 2).

EBF Parameter Configuration

We introduce how to configure the parameters of EBF. First, we configure the EBF size m and the number of hash functions k. To achieve the best accuracy with the least space, for a common set size n and a required false positive rate δ, one can set m = log₂(e) · n · log₂(1/δ) and k = log₂(1/δ). But in applications we often choose a smaller k (e.g., k = 4) to achieve a better processing speed. For the hash value length w, we set w to 32, because the space of the slow memory (e.g., disk) is sufficient and the size of the Bloom filter in the fast memory can then be up to 512 MB, which is large enough for many applications.

Datasets

The following datasets are used in our experiments.

• CAIDA: As many papers [4], [5] do, we use anonymized IP trace streams from CAIDA [3], and then identify each flow of the IP trace streams by its five-tuple.

• Distinct Stream: We use a Distinct Stream, which consists of random distinct elements, to test the worst-case performance of each algorithm.

• IMC Data Center IP Trace: We use IP trace streams collected by [6] to measure the performance of different kinds of Bloom filters in data centers. Each flow of the IP trace streams is identified by its five-tuple.

Implementation

We implement our algorithm in C++. The programs are run on a server with an 18-core CPU (36 threads, Intel CPU i9-10980XE @ 3.00 GHz) and 128 GB memory. In all experiments, we use MurmurHash [1], a well-acknowledged hash function, to calculate the hash value of elements. We compare our EBF with the Dynamic Bloom filter (DBF) [14], the partitioned BF (Par-BF) [19], the Scalable Bloom filter (SBF) [36], and the Counting Bloom filter (CBF) [13]. The default parameters of our experiments are listed in the default-parameter table. We use the following metrics:

• LR (Load Rate): I/T, where T denotes the total number of fingerprint slots in the buckets and I denotes the total number of fingerprints that can be inserted into the Bloom filter without overflow. We use LR to evaluate the loading ability of our buckets in the slow memory.

• RPDS (Relative Peak Data Size): n₁/n₀.
Typically, the number of elements in the Bloom filter is smaller than a predetermined threshold n₀, which does not trigger expansion. In special cases, the number of elements changes dynamically, reaching a peak value n₁. We call the ratio of n₁ to n₀ the Relative Peak Data Size. When using real traces with duplicate items, we calculate the ratio of the insertions to approximate the Relative Peak Data Size.

Experiments on Accuracy

FPR during insertion. EBF vs. DBF vs. SBF vs. standard BF (Fig. 3): This experiment shows that the EBF can freely adjust its accuracy (measured by FPR) to stay accurate. In this experiment, we first insert 2^14 items into each kind of Bloom filter, whose initial sizes are the same (2^18 bits), and measure their FPR during the process of inserting items. As shown in the figure, the FPR of the standard BF is the worst because it cannot adjust its size. The DBF and SBF (Scalable BF) degrade more slowly, but their FPR keeps increasing along with the RPDS. Our EBF is the best among these Bloom filters because its Bloom filter in the fast memory can expand perfectly. The price is that we consume more slow-memory space, but we claim that this is acceptable.

Impact of initial memory size on FPR. EBF vs. DBF vs. SBF (Fig. 4): This experiment shows that EBF is the best for all initial memory sizes. When the data size is small, the user wants to keep a small Bloom filter to save memory. When the data size increases significantly, the user wants the data structure to deliver the lowest final FPR from the smallest initial memory size. We measure the FPR after expansion. As shown in the figure, the EBF is the best for all initial memory sizes.

Experiments on Processing Speed

Processing speed comparison on each operation. EBF vs. DBF vs. SBF (Fig. 5): This experiment shows that the query speed of EBF is much faster than the other two, and the deletion speed of EBF is faster when the number of inserted items is large. In this experiment, we compare the three kinds of Bloom filters' processing speed for different operations. The initial size of each Bloom filter is the same (2^18 bits) and the FPR thresholds are guaranteed to be the same. The experiments below in this section have the same settings. We vary the relative peak data size and observe the processing speed of each operation. As shown in Fig. 5, the insertion speed of EBF is slower than the other two; to improve this, we use multiple threads to accelerate insertion, as discussed in the next section. Since a query of EBF only needs to query one Bloom filter in the fast memory, its query speed is much faster than the other two. As for the deletion speed, we did not compare that of SBF because it does not support deletion. The deletion speed of EBF is faster when the relative peak data size is larger than 16.

Overall processing speed comparison. EBF vs. DBF vs. SBF (Fig. 6): This experiment shows that our EBF is the best query-oriented data structure. Using real traces (CAIDA), we compare the three kinds of Bloom filters' overall performance, i.e., the speed when insertions, expansions, and queries are mixed. We examine the throughput in million operations per second (MOPS) with varying ratios of insertions, positive queries (i.e., querying items that have been inserted) and random queries (i.e., querying random items, most of which have not been inserted). From the figure, we can see that when the ratio is 1:2:2 (after each insertion, we issue two positive queries and two random queries)
and the relative peak data size is smaller than 64, the MOPS of EBF is slightly lower than the other two. However, in most cases, as the proportion of queries increases, the processing speed of EBF becomes better than the other two.

Overall processing speed comparison using the multi-thread technique. EBF vs. Par-BF (Fig. 7): This experiment shows that the multi-thread technique helps accelerate the processing speed. In this experiment, the multi-thread technique has been used to accelerate the processing speed of EBF. Fig. 7 shows the overall performance with the multi-thread technique. We compare the throughput of EBF and Par-BF using 4 threads and 16 threads. As shown in the figure, the MOPS of EBF is higher than that of Par-BF.

Design Choice of Slow Memory Usage

Slow memory consumption. EBF vs. CBF (Fig. 8a): A natural question is why we do not use a large CBF in the slow memory, instead of buckets of fingerprints, to help the BF expand. This experiment (Fig. 8a) shows that EBF uses less memory in most cases, but the CBF makes better use of memory when its size is near the maximum size. In this experiment, the BF in the fast memory is 2^12 bits. We allocate 2^22 counters of slow memory for the CBF, so that the CBF can help the BF expand its size to 2^22 bits. We use CAIDA as the dataset. As shown in Fig. 8a, the memory used by EBF varies with the increasing number of expansions, while the memory used by the CBF is fixed when it is initialized.

Time cost of expanding operations. Lazy update vs. normal expansion vs. rebuilding the EBF (Fig. 8b): This experiment shows the time cost of three kinds of operations to expand the EBF. As the load rate increases, the number of inserted items increases and the time cost of these operations increases slightly. However, the time cost of lazy update remains the lowest and that of rebuilding the EBF remains the highest. That is why we optimize the expansion with lazy update.

Experiments on the Load Ability in Slow Memory

Load rate vs. Bloom filter size (Fig. 9a): This experiment shows that the load rate gets lower as the size of the Bloom filter gets larger, and that more branches can increase the load rate. In this experiment, 8 hash functions are used to calculate the mapped bits. CAIDA is used and 2^19 items are inserted into the Bloom filter. As shown in the figure, with the number of branches fixed and the size of the Bloom filter increasing, the load rate gets smaller and smaller. With the size of the Bloom filter fixed and the number of branches increasing, the load rate gets larger and larger, since we have more buckets to hold the inserted elements.

Load rate vs. bucket size (Fig. 9b): This experiment shows that enlarging the buckets helps increase the load rate. In this experiment, we compare the load rate with different sizes of buckets and different numbers of branches in the bucket tree. CAIDA is used and 2^19 items are inserted into the Bloom filter. As expected, with the number of branches fixed, the load rate increases as the size of the buckets gets larger. With the size of the buckets fixed, the load rate increases as the number of branches gets larger.

CONCLUSION

When Bloom filters are used for dynamic sets, deletion and expansion should be supported. In this paper, we propose the Elastic Bloom filter, which supports item deletion and expansion at the same time without sacrificing query efficiency. Our key technique is Elastic Fingerprints.
Elastic Fingerprints dynamically absorb and release bits during compression and expansion. Mathematical analysis and experimental results show that the Elastic Bloom filter keeps the advantages of the Bloom filter and is well suited to dealing with dynamic sets. The experimental results show that the Elastic Bloom filter outperforms the state of the art in terms of query speed and false positives. We have open-sourced all related source code at GitHub [2].
Productivity, quality and soil fertility of sugarcane (Saccharum spp complex hybrid) plant and ratoon grown under organic and conventional farming system*

It may be concluded that the application of 75% NPK through inorganics + 25% N through organic manures (PMC) + biofertilizers (Azotobacter + PSB) + biopesticide (neem cake) in the sugarcane plant crop, and 75% NPK through inorganics + 25% N through organic manures (PMC) + biofertilizers (Azotobacter + PSB) + trash mulching and green manuring with greengram inoculated with Rhizobium in alternate rows + biopesticide (neem cake) in the ratoon, were found to be suitable practices for sustaining sugarcane productivity, maintaining soil fertility and getting higher monetary returns in the sugarcane plant and ratoon system in the calcareous soil of Bihar.

The poor yield of sugarcane in Bihar is mainly due to erratic and imbalanced use of chemical fertilizer. The available soil nitrogen is low and addition of organic matter is not practiced. Thus, improving soil organic matter and soil fertility are important factors for the sustainability of sugarcane. There are many alternative farming systems, such as organic farming, eco-farming and natural farming, among others, to make agriculture more sustainable and productive. The proper management of such farming practices may be helpful in rejuvenation of the soils and sustaining crop yield. Organic farming is a production system which favours maximum use of organic materials, crop residues, animal excreta, legumes, on- and off-farm organic wastes, biopesticides etc., and discourages use of synthetically produced agro-inputs, for maintaining soil health, productivity and pest management under the condition of sustainable natural resources and a healthy environment (Palaniappan 2004). Organic farming is currently restricted to few crops; thus, the scope of organic farming in sugarcane needs to be explored. This study was, therefore, conducted to evaluate the productivity, quality and soil fertility of the sugarcane (Saccharum spp complex hybrid) plant-ratoon system grown under organic and conventional farming systems.
The trial was conducted in a fixed plot, with half the portion for the plant crop and half for the ratoon, during 2006-07 to 2009-10 under the All India Coordinated Research Project on Sugarcane at the research farm of the Sugarcane Research Institute, Pusa, Bihar, to study the effect of organic and conventional farming systems on soil fertility, productivity and quality of sugarcane in a plant and ratoon system. The farm is situated at 25º98´N latitude, 85º67´E longitude and at an altitude of 52.0 m above mean sea level. The climate of Bihar is subtropical and the mean annual rainfall of the area is about 1200 mm. The soil was sandy loam, calcareous (CaCO3 30.8%) with pH 8.35 and EC 0.16 dS/m. The soil was low in organic C (0.478%) and available N and K (232 and 91.3 kg/ha) and medium in P (11.9 kg/ha). The experiment was laid out in a randomised block design with five treatments and four replications, comprising various combinations of organic and inorganic sources for nutrient supply and insect/pest control. The details of treatments for the plant crop were: T1, 100% NPK + micronutrients + control of pests/diseases through chemicals; T2, 100% N through organics + biofertilizers + green manuring + control of pests/diseases through chemicals; T3, 100% N through organics + biofertilizers + green manuring + biopesticide + detrashing of dry leaves; T4, 75% N through organics + 25% NPK through inorganics + biofertilizers + biopesticide; T5, 75% NPK + 25% N through organics + biofertilizers + biopesticide. For the ratoon: T1, 100% NPK + trash burning + control of pests/diseases through chemicals; T2, 100% N through organics + biofertilizers + trash mulching and green manuring in alternate rows + control of pests/diseases through chemicals; T3, 100% N through organics + biofertilizers + trash mulching and green manuring in alternate rows + biopesticide + detrashing of dry leaves; T4, 75% N through organics + 25% NPK + biofertilizers + trash mulching and green manuring with moong in alternate rows + biopesticide; T5, 75% NPK + 25% N through organics + biofertilizers + trash mulching and green manuring in alternate rows + biopesticide. The recommended doses of fertilizers for the plant crop (150-37.5-50 kg NPK/ha) and ratoon (170-22-50 kg NPK/ha) were applied through urea, diammonium phosphate and muriate of potash, respectively. Nitrogen was applied in split doses: half at the time of planting, one fourth at the time of first irrigation and the rest at the time of earthing up. ZnSO4 @ 50 kg/ha was applied at the time of planting in the plant crop as a source of micronutrients. Pressmud cake (PMC) was analysed for nitrogen (1.02%) and used as an organic source on an equivalent-N basis. The neem cake, containing 4.8% N, 0.40% P and 1.12% K, was applied @ 4 q/ha at the time of planting as a biopesticide. The dose of fertilizers was adjusted as per the nutrient value of the neem cake. Two rows of greengram (Vigna radiata) inoculated with Rhizobium sp.
was planted at 20 cm row spacing between two rows of sugarcane. Green manuring was done in situ at the eight-week stage (4 tonnes biomass/ha). The cultures of Azotobacter chroococcum and PSB (Bacillus megaterium) were applied @ 4 kg/ha at the time of planting. Other recommended practices for sugarcane plant and ratoon crops were adopted. The mid-late variety of sugarcane BO 137 was planted in the last week of February and harvested after one year. Soil samples were collected at the time of planting and after harvest of the crop. The processed soil samples were analysed for organic carbon and available N, P and K by standard procedures. Cane juice was extracted with a power crusher and juice quality was estimated as per the method given by Spencer and Meade (1955). Sugar yield was calculated as: sugar yield (tonnes/ha) = [S − 0.4(B − S)] × 0.73 × cane yield (tonnes/ha)/100, where S and B are the sucrose and brix percent in cane juice. Whole cane samples were analysed for N, P and K content and their uptake was calculated. The economics was worked out considering the input and output of the year of study.

Perusal of the data revealed that application of nutrients through both organic and inorganic sources recorded significantly higher numbers of tillers and millable canes (NMC) than 100% NPK through inorganics (Table 1). Treatment T5, receiving 75% NPK through inorganics and 25% N through organics along with biofertilizers and biopesticide, recorded the significantly highest number of tillers (plant 135 500 and ratoon 143 500/ha) and millable canes (plant 98 700 and ratoon 105 300/ha) over T1. The effect of different treatments on single cane weight was non-significant. Integrated nutrient application had a significant impact on cane yield in both plant and ratoon crops. The highest cane yields (plant 74.2 and ratoon 75.8 tonnes/ha) were recorded in treatment T5, receiving 75% NPK through inorganic sources and 25% N through PMC along with biofertilizers and biopesticide, which indicated a saving of 25% NPK. Similar findings on integrated nutrient application were also reported by Thakur et al. (2007) and Virdia and Patel (2010). The plant cane yield obtained due to addition of organic manure alone (T2 and T3) was on par with 100% NPK through inorganics (T1). However, in the ratoon crop, cane yield obtained in the organic farming treated plots (T3) was significantly higher than in T1. This could be attributed to release of nutrients over time due to mineralisation of organic matter, resulting in increased absorption of nutrients by the ratoon crop. The results are in agreement with the findings of Srivastava et al. (2008). Organic manures are not only sources of major nutrients; they also provide micronutrients and plant-growth-promoting substances which together lead to good crop yields. Tiwari and Nema (1999) also opined that plant population and cane yield increased significantly due to application of pressmud, both directly on the plant cane and residually on the ratoon cane. The cane juice quality, viz brix, sucrose and purity content in cane juice, did not differ significantly among treatments. Commercial Cane Sugar (CCS), which is a function of cane yield and sucrose content, exhibited a trend similar to cane yield. Similar findings were also reported by Thakur et al.
(2007). The highest net returns of Rs 25 746 and Rs 61 916 were recorded in T5 for the plant and ratoon crops, respectively. The B:C ratio (1.46) was also highest in T5 for the plant crop, while T3 gave the highest B:C ratio (2.46) in the ratoon crop. The net returns (Rs 87 662) and B:C ratio (3.87) were also highest in T5 in the plant-ratoon system.

Nutrient uptake by both plant and ratoon crops followed a trend similar to cane yield (Table 3). On average, the uptake of nutrients by the plant and ratoon crops was 2.27-0.22-2.95 kg NPK per tonne of cane produced. The highest uptake of N, P and K was recorded in T5 in both plant and ratoon crops. The results thus indicated that integration of nutrients had a beneficial impact on the availability of N, P and K in soil, resulting in greater uptake. Apart from this, application of biofertilizers in the presence of organic manures also helped in increasing the availability of nutrients, resulting in higher uptake of nutrients by the crops. The results further indicated that, among the major nutrients, relatively higher uptake of K was recorded, followed by N and P, irrespective of treatment. The results are in close agreement with the findings of Virdia and Patel (2010).

The pH and EC of soil did not differ significantly under different treatments (Table 2). The pH ranged from 8.22-8.32 and 8.29-8.36 after the harvest of the plant and ratoon crops, respectively. The pH declined slightly in all organic-matter-treated plots compared with the inorganic-fertilizer-treated plot. The release of organic acids during decomposition of organic manures might have resulted in the slight decline in soil pH. The pooled mean value of EC was slightly higher in the ratoon crop due to the high value of EC (0.813-0.841 dS/m) in the year 2008-09, which might be due to low rainfall. Addition of organics alone (T2 and T3) or with inorganics (T4 and T5) recorded significant improvement in the organic C content of the post-harvest soil over T1. The application of organics alone or along with inorganics brought about an increase of 4.8-16.9% in the organic C content of the soil over the initial value. The highest increase was recorded in T3, receiving 100% N through organics. Increases in soil organic C due to addition of PMC as well as crop residues were also reported by Dee et al. (2003) and Singh et al. (2007). However, a slight decrease (3.7%) in organic C was noticed in the 100% NPK treated plots. Under sugarcane growing conditions, the loss in organic C due to conventional agriculture was also reported by Haynes and Hamilton (1999). The available nutrient status (N, P and K) of the post-harvest soil also increased significantly due to application of organic manures alone or in combination with fertilizers. Higher available N was observed in the organic manure (PMC) treated plots, while available P and K were higher in the integrated-nutrient-treated plots. Since the data presented in the table are the mean of three years, the values of N in T2 (243 and 242), T3 (244 and 245) and T5 (239 and 237 kg/ha) are almost identical. The improvement of K in the plant crop compared with the ratoon crop could be attributed to fixation of added K with the elapse of time. The nature and rate of potassium (K) fixation and the release of soil K from different pools of adsorbed and structural K are important issues from the viewpoint of K availability in soil and the degree of fertilizer K uptake by plants. High-K-demanding crops remove enormous amounts of K, resulting in a large negative nutrient balance in soils even when recommended fertilizers are applied (Singh et al.
2004). The build-up of soil available N could be attributed to greater multiplication of microbes due to addition of organic manures, which helped in mineralization of soil N, leading to higher available nitrogen. Improved P availability could be due to greater mobilization of soil P owing to reduced P sorption, while the increase in available K might be due to addition of K to the available pool owing to mineralization of organic matter by microorganisms. Addition of organics alone or in combination with inorganic fertilizer and biofertilizer improved soil fertility, viz available N, P and K in general and organic C in particular, over the initial values, which indicated sustained soil fertility. These results are in conformity with the findings of Tiwari and Nema (1999), Thakur et al. (2007), and Virdia and Patel (2010).

SUMMARY

It may be concluded that the application of 75% NPK through inorganics + 25% N through organic manures (PMC) + biofertilizers (Azotobacter + PSB) + biopesticide (neem cake) in the sugarcane plant crop, and 75% NPK through inorganics + 25% N through organic manures (PMC) + biofertilizers (Azotobacter + PSB) + trash mulching and green manuring with greengram inoculated with Rhizobium in alternate rows + biopesticide (neem cake) in the ratoon, were found to be suitable practices for sustaining sugarcane productivity, maintaining soil fertility and obtaining higher monetary returns in the sugarcane plant and ratoon system in the calcareous soil of Bihar.

Table 1: Effect of farming system on yield attributes, cane yield, commercial cane sugar (CCS) and economics of plant and ratoon crops of sugarcane (pooled data of three years).

Table 2: Effect of farming system on nutrient uptake by plant and ratoon crops of sugarcane and available nutrient status of the post-harvest soil (pooled data of three years).
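As a worked example of the sugar-yield formula used in this study, the short Python sketch below computes sugar yield from brix and sucrose readings; the input values are hypothetical illustrations, not the trial's data.

def sugar_yield(sucrose_pct, brix_pct, cane_yield_t_ha):
    # Sugar yield (t/ha) = [S - 0.4(B - S)] * 0.73 * cane yield / 100
    ccs_pct = (sucrose_pct - 0.4 * (brix_pct - sucrose_pct)) * 0.73
    return ccs_pct * cane_yield_t_ha / 100.0

# Hypothetical juice readings: sucrose 17.5%, brix 20.0%, cane yield 74.2 t/ha.
print(round(sugar_yield(17.5, 20.0, 74.2), 2))  # ~8.94 t/ha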
Porous TiO2 with large surface area is an efficient catalyst carrier for the recovery of wastewater containing an ultrahigh concentration of dye The preparation of porous TiO2 as a carrier for the Fenton reaction is reported. Porous TiO2 is an excellent carrier to load with elemental iron due to the large specific surface area and negative surface charge. Porous TiO2 was synthesized in the form of a hierarchically porous silica monolith that was used as a microreactor, and a block copolymer served as a template for mesoporous forms. The crystalline TiO2 growing in confined spaces maintained the porous structure and high crystallinity. The surface area of our synthesized porous TiO2 can reach 205 m2 g−1. The zeta potential of the TiO2 was as low as −36.5 mV (pH 7). Elemental iron was highly and uniformly dispersed over the channel of the porous TiO2 via an impregnation method and served as the catalyst for the Fenton reaction. In the Fenton reaction, the synthesized catalyst performed strong catalytic activity during the degradation of wastewater containing an ultrahigh concentration of aqueous dye, at 400 ppm. The aqueous dye solution was degraded over 95% in 30 min, and the catalyst could be reused many times.

Introduction

Water is one of the most critical resources of human production and life. Due to the development of modern industry, water is commonly contaminated with a wide variety of toxic chemicals and organic pollutants that do not easily biodegrade. Wastewater treatment and protection of water resources have been under the spotlight in modern society. Numerous technologies have been reported that can be used in solving serious water pollution problems, including adsorption, 1,2 chemical coagulation, 3,4 biodegradation, 5,6 photodegradation, 7,8 and the Fenton reaction. 9,10 Among these technologies, the Fenton reaction is a facile and green method to remove organic pollutants. The Fenton reaction as an advanced oxidation technology has many advantages, such as strong oxidation abilities and high efficiencies. In a traditional Fenton reaction, the homogeneous iron catalyst could cause the iron ions to remain in the reaction system, which may further pollute the environment. Additionally, the traditional catalyst is limited to use in a short range of pH values around 3. To overcome these shortcomings, heterogeneous catalysts have been developed in recent years. A variety of materials such as Nafion, 11 sepiolite, 12 resins, 13 rectorites, 14 and silica 15 have served as carriers to load iron oxide, and these catalysts were able to be recycled. An excellent carrier should have good pore structure and chemical stability under oxidation. Metal oxides with crystalline structures have recently attracted a great deal of attention because they have unique and tunable energy band structures 16 and changeable states of surface defects. 17,18 When metal oxides are used as the carrier in Fenton reactions, their specific surface areas, pore sizes, and surface chemical properties as well as their crystalline style are the key parameters. 19 Large surface areas can provide sufficient reaction-active sites to load additional elemental iron, and pore channels are beneficial for mass diffusion and can accelerate the reaction rates. The high crystallinity can improve the chemical stability of the carrier under the strong oxidation environment.
However, the crystallization of metal oxides is usually accompanied by the rearrangement and migration of atoms, which causes a breakdown of the porous structure and a decrease in the specific surface area. 20 Therefore, it is an important and challenging task to synthesize metal oxides that have both large surface areas and controllable crystalline structures. A new and efficient strategy has been developed in recent years by which inorganic crystalline materials are synthesized in confined spaces that serve as microreactors. 21-26 Qi and his colleague 27 synthesized porous calcium carbonate single crystals via the templates of photonic crystals. A TiO2 crystal with a large surface area that could perform high photocatalysis was reported by Yin et al. 28 This material was obtained through calcination in a confined space between two silica layers. In our experiments, we prepared porous TiO2 with high crystallinity, high-temperature stability, large specific surface area, and strong negative surface charge using a hierarchically porous silica monolith that served as a confined-space microreactor. This material was composed of aggregated TiO2 nanoparticles. The specific surface area of the material can reach as high as 205 m2 g−1. The zeta potential of the TiO2 was −36.5 mV (pH 7). The negative surface charge and large specific surface area contributed to the high dispersion of metal ions. Thus, iron oxide was uniformly dispersed in the porous TiO2. The obtained composite catalyst displayed highly efficient activity in the photo-Fenton reaction. Wastewater containing 400 ppm organic dye was purified, and irradiation with visible light resulted in quick degradation of the dye in 30 min.

Synthesis of the hierarchically porous silica template

Hierarchically porous silica (HPS) was synthesized by a method modified from a previous report. 29 First, PEG (8.85 g) was dissolved in acetic acid (0.01 M, 75 ml), and the solution was stirred at room temperature until it became homogeneous. Subsequently, TMOS (30 ml) was added and stirred for approximately 10 min for hydrolysis at 0 °C. The obtained sol was aged at 40 °C for 36 h, and then treated with ammonia solution (1 M) at 110 °C for 9 h to obtain the monoliths. Next, the monoliths were soaked in nitric acid (0.01 M) for 12 h to neutralize the ammonium hydroxide on their surfaces. The monoliths were then dried at 60 °C. Finally, HPS was obtained after being calcined at 650 °C for 5 h.

Synthesis of porous TiO2

First, F127 (20 g), tetrabutyl titanate (46.4 ml), and concentrated hydrochloric acid (21.2 ml) were dissolved in anhydrous ethanol (38.8 ml). The TiO2 sol was obtained by vigorously stirring the solution at room temperature for 2 h. Then, the as-made HPS was immersed in the TiO2 sol for 24 h. Next, the monoliths were washed with ethanol three times and dried in a drying oven at 60 °C. Subsequently, the dried solid was immersed in the TiO2 sol for another 24 h. After washing and drying, the monoliths were calcined at various temperatures (350 °C, 450 °C, 550 °C, 650 °C, and 800 °C) for 5 h. The HPS template was completely removed by treating with 2 M NaOH at 90 °C for 2 h, twice. The filtrate was thoroughly rinsed with water until the pH was neutral, and then dried at 100 °C to obtain the final product, porous TiO2 (DTT). For comparison, we synthesized TiO2 with F127 as a soft template only (noted as STT) and without any template (NTT). The STT was obtained by drying the TiO2 sol and calcining it at 450 °C.
The NTT was obtained by drying the solution consisting of ethanol, tetrabutyl titanate, and concentrated hydrochloric acid, and then calcining at temperatures of 450 °C, 600 °C, and 800 °C.

Synthesis of Fe2O3@DTT and Fe2O3@HPS

The Fe2O3@DTT was synthesized by impregnation. First, 0.1 g DTT (calcined at 450 °C) was impregnated in 5 ml saturated iron nitrate solution. After 5 h, it was washed with deionized water in a Buchner funnel and dried at 80 °C. The Fe2O3@DTT catalyst was obtained after calcination at 150 °C. The synthesis of Fe2O3@HPS was similar to that of Fe2O3@DTT. Similarly, 0.1 g HPS was immersed in 5 ml saturated iron nitrate solution for 5 h. Then, it was thoroughly rinsed with water and subsequently dried at 80 °C. After calcining at 150 °C, Fe2O3@HPS was obtained.

Photo-Fenton reaction of composite catalysts

Typically, 15 mg catalyst, 30 ml dye solution (at a certain concentration), 40 µl H2O2 (30 wt%), and 12.5 mg hydroxylammonium chloride were added to a quartz colorimetric tube at room temperature. The dye solution was used in the Fenton reaction without adjusting the pH. These reactants and catalysts were stirred in the dark for 3 h. Subsequently, the tube was exposed to the light from a xenon lamp (200-800 nm, 500 W), and the suspension continued reacting under visible light. A cutoff filter was placed so that it completely removed any radiation with wavelengths λ < 420 nm to ensure illumination by visible light only. The concentration of dyes was measured with a UV/Vis spectrophotometer after removing the catalyst. The degradation efficiency (η%) of the dye was calculated by the following formula: η% = (C0 − Ct)/C0 × 100, where C0 is the initial dye concentration and Ct is the dye concentration at time t.

Characterization

Transmission electron microscope (TEM) images were obtained with a Tecnai G220S-Twin electron microscope, equipped with a cold-field emission gun (200 kV). The X-ray diffraction (XRD) patterns were recorded with a Rigaku D/MAX-2400 X-ray powder diffractometer equipped with CuKα radiation (40 kV, 40 mA). The nitrogen adsorption and desorption isotherms were measured at 77 K using an ASAP 2010 analyzer (Micromeritics Co. Ltd.). The specific surface areas (S_BET) were calculated by the Brunauer-Emmett-Teller (BET) method, and the pore size was calculated using the Barrett-Joyner-Halenda (BJH) model. The zeta potential was measured with a Zetasizer Nano ZS90 Zeta Potential Analyzer (Malvern Instruments Ltd.). The UV-Vis diffuse reflectance spectroscopy (UV-Vis DRS) spectra were obtained with a Cary 5000 UV-Vis-NIR spectrophotometer (Agilent Technologies Co. Ltd.). The spectra were recorded at room temperature in the range of 200-800 nm. The band gap was estimated by the following formula 30 : (αhν)^(1/2) = A(hν − Eg), where α is the absorption coefficient, h (J s) is the Planck constant, ν (s−1) is the light frequency, Eg (eV) is the band gap, and A is a constant. All the concentrations of dyes were analyzed with a UV/Vis spectrophotometer. The maximum absorption wavelength of rhodamine B (RhB) was 554 nm, that of methylene blue (MB) was 664 nm, and that of methyl orange (MO) was 463 nm. Total organic carbon (TOC) analysis was carried out with a Shimadzu TOC-VCPH analyser (Shimadzu Co. Ltd.). The pH of the samples was adjusted to less than 2 in order to eliminate the influence of inorganic carbon. To explore the adsorption capacity of the prepared TiO2, tests were conducted using RhB, MB, and MO. The adsorption properties and equilibrium data are exhibited in the adsorption isotherms.
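As a small worked example of the degradation-efficiency formula above, the Python sketch below converts initial and residual dye concentrations into η%; the concentration values are hypothetical placeholders.

def degradation_efficiency(c0_ppm, ct_ppm):
    # eta% = (C0 - Ct) / C0 * 100
    return (c0_ppm - ct_ppm) / c0_ppm * 100.0

# Hypothetical UV/Vis-derived concentrations: 400 ppm initially, 20 ppm after 30 min.
print(degradation_efficiency(400.0, 20.0))  # 95.0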
The Langmuir adsorption isotherm is expressed as follows: Qe = QbCe/(1 + bCe), where Ce (mg L−1) is the equilibrium concentration of the solute, Q is the maximum adsorption capacity of the adsorbent (mg g−1), and b (L mg−1) is the Langmuir adsorption constant. The equilibrium adsorption capacities, Qe (mg g−1), were calculated using the following equation: Qe = (C0 − Ce)V/M, where C0 (mg L−1) is the initial concentration of the solute, V (L) is the volume of the solution, and M is the mass of the adsorbent (g).

Characterization of DTT

A dual-template strategy was used in the process of synthesizing TiO2 (Scheme 1: schematic representation of the fabrication of porous DTT). The hierarchically porous silica monolith served as a hard template and a microreactor to provide a confined space, while the polymer F127 served as a soft template for mesoporous forms; it was mixed with the titanium oxide sol to form mesoporous DTT after calcination. The powder XRD patterns for mesoporous DTT calcined at different temperatures are shown in Fig. 1. According to the XRD spectra, all the TiO2 samples were in the anatase phase (JCPDS 21-1272). The intensity of the diffraction peaks grew stronger, and the full width at half maximum (FWHM) was reduced, with increasing temperature. When the temperature was higher than 450 °C, no obvious change was observed in the XRD spectra. These results indicated that the anatase structure of TiO2 can be maintained even with a calcination temperature up to 850 °C. Usually, TiO2 changes to the rutile phase at such a high temperature. 31 The maintenance of the anatase phase suggested that the phase transformation was limited in the porous confined space during calcination. The transformation requires atomic rearrangement and the accompanying volume change in the material, a process that is greatly limited in a confined space; such confinement made the phase transformation difficult.

The grain sizes were obtained through the Scherrer equation 32 : D = Kγ/(β cos θ), where D is the mean size of the ordered (crystalline) domains, and K is a dimensionless shape factor with a typical value of approximately 0.89. γ is the X-ray wavelength corresponding to the CuKα radiation, which equals 0.154056 nm, and β is the line broadening at half of the maximum intensity (FWHM), in radians, after subtracting the instrumental line broadening; θ is the Bragg angle. As the temperature increased, the grain size rose from 6.8 nm to 17.2 nm.

The nitrogen sorption isotherms of DTT are shown in Fig. 2. The N2 adsorption-desorption isotherm of the crystalline material belongs to the type-IV shape, according to the IUPAC classification. This shape suggested the existence of a mesoporous structure. The specific surface areas of the DTT increased at first and then decreased. DTT calcined at 450 °C had the largest specific surface area, at 205.26 m2 g−1. At low temperature, the amorphous TiO2 formed small grains, and the F127 decomposed so that small pores were formed. At high temperatures, the grain size increased and the specific surface areas correspondingly decreased. Furthermore, the specific surface area of the material calcined at 800 °C remained at 109.71 m2 g−1, which was greater than the surface areas of many reported high-crystallinity titanium materials. This result was probably caused by the small grain size of the DTT. All the related parameters of DTT are summarized in Table 1.
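As a numerical illustration of the Scherrer estimate described above, the sketch below computes a grain size from a hypothetical peak position and FWHM; the values are placeholders, not the paper's measured data.

import math

def scherrer_grain_size(fwhm_deg, two_theta_deg, k=0.89, wavelength_nm=0.154056):
    # Scherrer equation D = K * lambda / (beta * cos(theta)), beta in radians.
    beta = math.radians(fwhm_deg)             # line broadening (FWHM)
    theta = math.radians(two_theta_deg / 2)   # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical anatase (101) peak: 2-theta = 25.3 deg, FWHM = 0.8 deg.
print(round(scherrer_grain_size(0.8, 25.3), 1))  # ~10.1 nm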
The TEM pattern conrmed that the DTT had a high degree of crystallization and a porous structure. As shown in Fig. 3A, small TiO 2 grains aggregated to form a mesoporous structure. The diameters of the mesoporous DTT were less than 10 nm. According to Fig. 3B, the grain size of the TiO 2 approached 9 nm, which was consistent with the results calculated by the Scherrer equation. Fig. 3B and C show a well crystallized structure, which indicates that 450 C was sufficient for obtaining porous TiO 2 with ideal crystallinity. The interplanar spacing of the (101) crystal planes was 0.45 nm, which corresponded to the anatase titania. Comparison of different templates Control experiments were performed to explore the effects of the templates. The TiO 2 showed different pore structures and crystallization processes. As shown in Table 2, the specic surface areas greatly decreased with the absence of a so template. When calcined at 450 C, the obtained NTT (which was prepared without so or hard templates) had a specic surface area of only 57.87 m 2 g À1 . When only F127 was present (STT), the specic surface area of the STT was 104.97 m 2 g À1 . The surface areas of both NTT and STT were smaller than those of materials that were synthesized in the dual-template condition. These results indicated that a hard conned space existed, which was provided by the porous silica template, and this was the necessary factor for producing large surface areas. This result appeared in the XRD pattern (Fig. 4). Both STT and NTT cannot efficiently prevent the grain growth of TiO 2 (see Fig. 4A). Calculated according to the Scherrer equation, the grain sizes of the above-mentioned materials were 20.7 nm for the NTT and 17.2 nm for the STT. These grain sizes were both larger than those of the DTT. For the NTT that was synthesized in free space, the crystal polymorphic transformation occurred when calcined at temperatures of 600 C or 800 C. The phase style transferred from anatase to rutile (JCPDS 21-1276), either partially or completely (see Fig. 4B). These evolutions indicated that the thermostability of TiO 2 decreased when calcined in free space. The conned space of porous silica was therefore benecial for preventing the phase transformation of TiO 2 at high temperature. Dye adsorption properties of DTT As the DTT showed great specic surface area, its capacity to adsorb different organic contaminants was explored. Fig. 5 displays the equilibrium adsorption isotherms for rhodamine B and methylene blue onto DTT at room temperature. As shown in Table 3, the capacity of porous DTT to adsorb basic or acidic dyes was signicantly different. The maximum adsorption was calculated according to the Langmuir adsorption isotherm. For basic dyes (methylene blue (MB) and rhodamine B (RhB)) that form cations in aqueous solution, the adsorption capacities were 1250 mg g À1 and 250 mg g À1 , respectively. These capacities indicated that the pollutant was easily adsorbed on the surface of the DTT and enriched in its channels. Acidic dyes such as orange II were not adsorbed. These differences in adsorption capacity resulted from the surface potential of TiO 2 in water. The zeta potential of DTT in pure aqueous solution is À36 mV. The pH of the solution was 7. The weight percent of silicon for DTT was 0.82%, which was measured by energy dispersive X-ray spectroscopy (EDX). It was implied that little SiO 2 remained in the DTT. However, this Fig. 
Fig. 4: (A) The XRD patterns of TiO2 prepared using different templates (calcination temperature 450 °C), and (B) the NTT materials calcined at various temperatures. In (B), the marked symbols indicate the rutile and anatase peaks, respectively.

Fig. 5: Adsorption of organic contaminants onto DTT.

Examination of the characteristics of the iron load

The negative zeta potential of DTT is beneficial to the adsorption of metal cations, such as iron ions. Elemental iron is one of the most important catalysts in wastewater treatment, and it plays a very important role in various Fenton reactions. The dispersibility of iron oxides directly affects their catalytic activity. The large surface area and negative surface charge (−36.5 mV) of DTT should be favorable for Fe3+ ions to disperse on the surface. Here, the composite catalyst Fe2O3@DTT was prepared. For comparison, Fe2O3@HPS was also prepared, since we have previously reported that HPS is a good catalytic carrier for Fe2O3. Fig. 6 shows the XRD patterns for these three samples; no peak of any type of iron oxide was found in the XRD patterns. TEM and energy-dispersive X-ray spectroscopy (EDS) elemental mapping show the distribution of elemental iron in the different composite catalysts. As shown in Fig. 7, elemental iron was highly dispersed in both porous DTT and HPS. However, in Fe2O3@DTT, there was a larger amount of iron oxide with a more uniform dispersion. The elemental content could be measured by EDS. The Fe content in DTT and HPS was 2.60% and 1.17%, respectively. The different results were perhaps caused by the different negative surface charges of the two carriers (the specific surface area of HPS is as large as 380 m2 g−1). This speculation was further supported by the zeta potential results. The zeta potentials of DTT and HPS in pure aqueous solution were −36.5 mV and −16 mV, respectively. The more negative charge of DTT may lead to larger adsorption of Fe3+ and better dispersion of iron oxides after calcination.

The UV-Vis DRS spectra for DTT and Fe2O3@DTT are shown in Fig. 8. The sharp basal absorption edge for DTT was approximately 390 nm, whereas the main absorption edge of Fe2O3@DTT did not significantly change but noticeably absorbed light at 400-600 nm. After calculation, the band gap energy of DTT was 3.3 eV, while that of Fe2O3@DTT was 3.16 eV. The absorption intensity of Fe2O3@DTT toward longer-wavelength light was enhanced, which perhaps benefited the photoreaction.
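The band gaps quoted above (3.3 eV for DTT and 3.16 eV for Fe2O3@DTT) come from the Tauc analysis of the DRS spectra. Purely as an illustration, and assuming the indirect-gap form of the Tauc plot, the sketch below estimates Eg by extrapolating the linear rise of (αhν)^(1/2) to zero; the spectrum is synthetic, not the measured data.

import numpy as np

def tauc_band_gap(photon_energy_ev, alpha, power=0.5):
    # Fit the linear rise of (alpha*h*nu)**power vs h*nu and extrapolate to zero.
    y = (alpha * photon_energy_ev) ** power
    mask = y > 0.5 * y.max()          # steep linear region (hypothetical cutoff)
    slope, intercept = np.polyfit(photon_energy_ev[mask], y[mask], 1)
    return -intercept / slope          # x-intercept = band gap estimate

# Synthetic absorption edge near 3.3 eV for illustration.
e = np.linspace(2.5, 4.0, 200)
alpha = np.clip(e - 3.3, 0, None) ** 2 / e  # makes (alpha*e)**0.5 linear above Eg
print(round(tauc_band_gap(e, alpha), 2))    # ~3.3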
Examination of the catalytic performance of the Fenton reaction

The catalytic activity of Fe2O3@DTT in the photo-Fenton reaction was subsequently explored. As shown in Fig. 9A, the degradation rate of the 50 ppm orange II solution reached 99% under visible light irradiation. At the same time, the 100 ppm, 200 ppm, and 300 ppm orange II solutions were degraded over 98%. The photocatalytic treatment of solutions containing high dye concentrations and high chemical oxygen demand (COD) content has recently attracted attention. 50-54 We explored the photocatalytic performance in degrading a 400 ppm dye solution. As shown in Fig. 9B, orange II dye was only slightly degraded under visible light without a catalyst, and only 2% of the dye was degraded with DTT as the catalyst. Surprisingly, 95% of orange II was degraded in 30 min even when the concentration of the solution was as high as 400 ppm. In comparison, when Fe2O3@HPS and commercial Fe2O3 were used as catalysts, only 70% and 60% of the dye was degraded, respectively, under the same reaction conditions. The results of the dark reaction showed that less than 1% of the dye was adsorbed. Orange II was only slightly adsorbed by these catalysts, indicating that the decrease in dye concentration was the result of degradation rather than adsorption. The above results implied that Fe2O3@DTT had strong catalytic activity for effectively purifying ultrahigh-concentration wastewater, whereas the DTT alone had no catalytic efficiency in the degradation reaction. The difference in degradation efficiency resulted from the dispersibility and loading amount of the iron oxides. The iron oxide particles in Fe2O3@HPS and Fe2O3@DTT were both homogeneous and small, but Fe2O3@DTT carried more iron oxide. Owing to its large crystals, the commercial Fe2O3 had the highest iron content of the three catalysts, but showed the lowest catalytic activity in the photo-Fenton reaction. More importantly, the initial dye concentration was as high as 400 ppm, which is much higher than many other reported values (Table 4). In comparison with traditional photocatalysts, Fe2O3@DTT displayed excellent catalytic activities, which were perhaps due to the presence of H2O2 in the Fenton reaction. Fe2O3@DTT also showed a stronger ability to purify wastewater than other reported Fenton catalysts (Table 4, refs. 38-49). This encouraging result can be explained by the high dispersion of the iron oxide. To further investigate the degradation efficiency of the catalysts, the total organic carbon was measured. The results are shown in Fig. 10A. The mineralization rate of 100 ppm orange II reached 85% in 30 min, which confirmed that Fe2O3@DTT has excellent catalytic activity in the Fenton reaction. The ability to degrade other pollutants (MB and RhB) was also explored, and the results are shown in Fig. 11. Methylene blue at 400 ppm was degraded 95% in 30 min, while RhB at 400 ppm was degraded more than 95% in 50 min. These results verify that the catalyst was effective in degrading various dye pollutants. Recycling tests were carried out to investigate the reusability of the catalysts. As shown in Fig. 12, the degradation rate was still above 99% after five cycles. The degradation time increased slightly, perhaps because of catalyst loss during the recycling process.

Conclusions

In summary, we have presented a new iron carrier of porous TiO2 that possesses a large specific surface area, high crystallinity, and a strong negative charge. The carrier (DTT) was fabricated by the dual-template method, and the key to obtaining the ideal structure was to control the crystallization in the confined space of the hierarchically porous silica with the assistance of a polymer pore-forming agent. The confined space prevented breakage of the crystal structure and the atomic rearrangement that would lead to grain growth and phase transfer. The obtained TiO2 had a negative surface charge and showed satisfactory adsorption of basic dyes and metal cations. Because of this great adsorption capacity, the elemental iron can be uniformly dispersed in the DTT.
The uniform dispersion enabled excellent catalytic efficiency in the Fenton reaction, so that the prepared iron oxide loaded on the pore channels of the crystal could thoroughly degrade various dyes in water under visible light irradiation. The ultrahigh dye concentration (400 ppm) in the wastewater was successfully degraded by the Fe2O3@DTT. In addition, the catalyst was stable and still removed 99% of the pollutant after five cycles. The remarkable advantages of synthesizing metal oxide materials with specific surface and structural properties in confined spaces can be applied to the construction of other functional porous materials such as high-performance catalysts, sensors, and adsorbents.

Conflicts of interest

There are no conflicts to declare.

Fig. 12: The recycling test of the Fe2O3@DTT in degrading orange II (15 mg catalyst, 30 ml orange II, 12 mM H2O2, 6 mM NH2OH·HCl, irradiation time 1 h, at room temperature).
Combining Microbial Culturing With Mathematical Modeling in an Introductory Course-Based Undergraduate Research Experience Quantitative techniques are a critical part of contemporary biology research, but students interested in biology enter college with widely varying quantitative skills and attitudes toward mathematics. Course-based undergraduate research experiences (CUREs) may be an early way to build student competency and positive attitudes. Here we describe the design, implementation, and assessment of an introductory quantitative CURE focused on halophilic microbes. In this CURE, students culture and isolate halophilic microbes from environmental and food samples, perform growth assays, then use mathematical modeling to quantify the growth rate of strains in different salinities. To assess how the course may impact students' future academic plans and attitudes toward the use of math in biology, we used pre- and post-quarter surveys. Students who completed the course showed more positive attitudes toward science learning and an increased interest in pursuing additional quantitative biology experiences. We argue that the classroom application of microbiology methods, combined with mathematical modeling using student-generated data, provides a degree of student ownership, collaboration, iteration, and discovery that makes quantitative learning both relevant and exciting to students.

INTRODUCTION

The American Association for the Advancement of Science and the National Research Council have each called for renewed undergraduate education efforts to build broadly applicable biology research skills (National Research Council, 2003; American Association for the Advancement of Science, 2011). One report, Vision and Change (American Association for the Advancement of Science, 2011), laid out influential and ambitious goals for reforming undergraduate biology education, encouraging the integration of core concepts and competencies throughout the curriculum. Several of these competencies are quantitative, including the ability to use quantitative reasoning and the ability to apply modeling and simulation. However, quantitative material can be challenging to introduce to biology-interested students early in their undergraduate career. Students enter college with a wide range of mathematics skills (Treisman, 1992; Agustin and Agustin, 2009; Sonnert and Sadler, 2014), and experiences in traditional introductory courses like calculus might lead some students to leave STEM (Ellis et al., 2016). In addition, many undergraduate biology students may also have unfavorable emotions about math (Wachsmuth et al., 2017). These emotions can translate to poor performance in math-related coursework (Ma and Kishor, 1997). One way to address these challenges is by integrating math and biology coursework at multiple points along an undergraduate curriculum (Bialek and Botstein, 2004; Chiel et al., 2010; Depelteau et al., 2010; Duffus and Olifer, 2010; Miller and Walston, 2010; Aikens and Dolan, 2014; Eaton and Highlander, 2017). In our experience, however, few introductory biology lab courses emphasize the breadth of quantitative skills commonly used in biology research. We propose that introductory course-based undergraduate research experiences (CUREs) may be a valuable early part of this type of integrated curriculum, given their potential positive effects on student learning and attitudes.
CUREs are natural candidates to promote quantitative learning and build positive attitudes toward math among biology-interested students. These courses engage students in the practice of research from within the classroom, emphasizing peer collaboration and iterative approaches to the research process while students use modern scientific practices to address novel, broadly relevant research questions (Auchincloss et al., 2014). Student participation in CUREs can benefit student learning as well as persistence in STEM and attitudes toward science (Brownell et al., 2012; Jordan et al., 2014; Olimpo et al., 2016; Rodenbusch et al., 2016; reviewed by Dolan, 2016), and these courses may provide an avenue toward creating a more inclusive academic environment (Bangera and Brownell, 2014). Several recent CUREs have included quantitative learning outcomes and found student benefits (Brownell et al., 2015; Kirkpatrick et al., 2019; Murren et al., 2019), although these courses typically focus more on data and figure interpretation than on mathematical modeling. In this study we outline an introductory quantitative biology CURE that combines microbial culturing and genomic DNA isolation with modeling and quantitative characterizations of growth rate. We assessed student attitudinal gains using several published instruments (Andrews et al., 2017; Lopatto et al., 2008; Shaffer et al., 2010), as well as short-answer questions related to students' future course and career plans. We sought to answer three questions: 1. Would this quantitative biology CURE increase students' interest in and perceived utility of using math in biology? 2. Would this course help students develop more positive attitudes toward science learning? 3. Does this course influence student plans for future quantitative courses or careers? We hypothesized that this course might increase students' desire to pursue future quantitative biology experiences by building more positive attitudes toward science learning and toward using math in biology. Assessing changes in student attitudes toward math in biology proved difficult due to strongly positive initial attitudes in this self-selected population. However, we find some evidence of positive changes in student attitudes toward learning science, as well as increased student interest in pursuing future quantitative experiences.

Developing the Course Structure and Subject

Here we outline the process we followed in creating a quantitative biology CURE. We developed the course to help students build quantitative skills that are commonly used in biology research. To that end, we informally surveyed the laboratory and quantitative skills used in local microbiology research labs. We identified commonly used lab skills including microbial culturing, microscopy, and spectrophotometry, which integrated with quantitative skills like calculations of concentration and dilution factors as well as mathematical modeling of growth curves. We chose to modify an existing workflow that is commonly used in undergraduate research on the microbiome (Dunitz et al., 2015). In this workflow, students culture microbes from almost any environmental sample, generate isolates from the sample(s), and use growth curves of the isolates to quantify aspects of the organism's biology. The workflow was initially piloted as a less quantitatively oriented seminar course, using environmental samples ranging from nectar (Dahlhausen, 2018) to abalone (Vater et al., 2016) to koala feces (Coil, 2017).
These courses and other CUREs have been discussed by Vater et al. (2019). In our more quantitatively focused version of this microbial isolation approach, the specific taxa play a minimal role in shaping the course learning goals, teaching methods, and assessments. The current iteration of this CURE focuses on culturing, isolating, and quantitatively characterizing the features of halophiles (Rodriguez-Medina et al., 2020). Halophiles are a category of microorganisms that thrive in hypersaline conditions, from sea salinity to saturation. These organisms span all three domains of life and can be found in diverse global environments including hypersaline soils, lakes, solar salterns, deep salt mines, and natural brines in coastal and submarine pools (DasSarma and DasSarma, 2015; Torregrosa-Crespo et al., 2017). Some halophiles are known to be polyextremophiles that are capable of tolerating and thriving not only in hypersaline environments, but also in settings with high pH, large amounts of solar radiation, and/or low water or nutrient availability. These harsh conditions have allowed halophiles to evolve unique biochemical pathways of interest in both basic and applied research realms (Becker et al., 2014; Torregrosa-Crespo et al., 2017). Several products derived from these pathways are of particular commercial interest, including but not limited to: polyhydroxyalkanoates (plastics industry), amylases (biofuel industry), proteases (laundry detergent), beta-carotene (food additive), and glycerol (cosmetic industry) (Yin et al., 2015; Amoozegar et al., 2017). The ability of halophiles to thrive in harsh conditions also makes them practical to use in the classroom. Because hypersaline growth conditions are inherently inhibitory to non-halophiles, halophilic culturing is more forgiving of small lapses in sterile technique, allowing students to successfully isolate pure cultures even if they are inexperienced in the laboratory. This is well-suited for introducing lab work and microbial culturing to first-year students. Although halophiles have a relatively long doubling time, a single weekly laboratory allows enough time for culture growth between each session.

Learning Outcomes and Course Overview

There are three main learning outcomes for the course: (1) students will be able to plan and perform the process of microbial culturing and genomic isolation, (2) students will be able to fit population growth models to microbial growth curve data, plot results, and compare the quality of fit among competing models, and (3) students will build interest and confidence in using quantitative skills in biology. The structure of our 10-week quantitative biology CURE includes one weekly 3-h wet lab session, currently offered in the Molecular Prototyping and BioInnovation Lab at UC Davis, as well as a 1-h weekly lecture held in a traditional classroom. The lectures cover the quantitative theory associated with the hands-on research experience and connect this theory to the lab applications (Supplementary File S2). Students also complete a weekly homework that combines a lab write-up with quantitative problem-solving. As the course transitions into student projects, student learning is assessed with an initial project proposal, a final written report, and an oral group presentation. The lab starts with an introduction to formal campus and site-specific lab safety training.
We require all students to successfully complete the University of California "Fundamentals of Laboratory Safety Training," an online course required for everyone who works in labs on campus. We explain to the students that this training certifies them to work in faculty research labs on campus. The site-specific training highlights that the workspace is used outside of class hours to host active student research projects (i.e., they are working in a "real" research space and not just a classroom), and thus we expect them to be aware of those activities and associated hazards as they work. We emphasize this training to provide a solid foundation in safety, but also to establish a classroom environment in which students feel like they are doing authentic research. Following the safety training, an introductory activity on pipetting, mixing, and measurement teaches techniques and orients students to the various instruments and supplies in the lab required for the course. These hands-on exercises are complemented with lecture and homework in which students develop their understanding of measurement and sources of error. For example, in one activity students repeatedly pipette and weigh an identical volume of solution. During lecture students had previously learned about accuracy and precision and how these can be quantified, and on the homework students use the programming language R (R Core Team, 2019) to calculate these quantities for their self-collected measurements. This pattern of linked lecture, lab, and homework continues throughout the quarter as each student collects (or selects previously collected) environmental samples, progresses through microbial isolation, and phenotypically characterizes the isolates. In 2018, students worked with different table salts available from local supermarkets, as well as with environmental samples collected at the salt flats of Cabo Rojo, Puerto Rico. In 2019, students self-collected local samples from soil, water, or salty foods. Students then practice microbial culturing and isolation, using their samples to start cultures. For this course, we use Halobacterium medium 372 (DSMZ, 2007) as the base medium and vary the concentrations of NaCl. By using media with multiple salinities, we can potentially culture a broader diversity of microbes from the samples. All cultures are grown in either a shaking (liquid culture) or static (agar plate) incubator at 37 °C. Plates that grow colonies within 7 days are transferred to a 4 °C refrigerator to pause growth. To prepare pure cultures, students make phenotypic observations of a single colony of interest, select cells from the colony to inoculate a new liquid culture, and prepare microscope slides to make observations on cell size and shape using a standard compound microscope equipped with phase-contrast optics. With pure liquid cultures in hand, students focus on measuring and modeling microbial growth. Using a spectrophotometer, students measure the growth of each isolate at multiple salinities to quantitatively characterize salt tolerance. Students then use the programming language R (R Core Team, 2019) to fit logistic growth models to their data, explore variants of the models, and decide on the model that best explains the patterns they observe. The course culminates with students performing genomic DNA isolation and shipping off the gDNA for whole genome sequencing.
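In the course this model fitting is done in R; purely as an illustration of the same analysis, the Python sketch below fits a logistic growth model to a hypothetical OD600 time series with SciPy. The readings and starting guesses are invented placeholders, not student data.

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, N0):
    # Logistic growth: N(t) = K / (1 + ((K - N0) / N0) * exp(-r t))
    return K / (1 + (K - N0) / N0 * np.exp(-r * t))

# Hypothetical OD600 readings over 72 hours for one halophile isolate.
t = np.array([0, 12, 24, 36, 48, 60, 72], dtype=float)
od = np.array([0.05, 0.09, 0.21, 0.45, 0.78, 0.95, 1.02])

params, _ = curve_fit(logistic, t, od, p0=[1.0, 0.1, 0.05])
K, r, N0 = params
print(f"carrying capacity K={K:.2f}, growth rate r={r:.3f}/h")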
These novel data are then used in a spring-quarter follow-up CURE in which students continue to build quantitative skills as they apply bioinformatic and statistical techniques for comparative genomic analyses (Figure 1). Supplementary Files S2-S7 present examples of additional curricular materials: a weekly course schedule, an example lab protocol, an R notebook for a computational lab, the associated RData file for the computational lab, the prompt outlining the students' final project, and an assignment that provides students with practice in writing up their final project report. Iteration is considered a fundamental feature of CUREs. In this course, the steps described above may be iterated in practice by allowing students to revise and redo experimental or analytical steps. For instance, students have repeated opportunities to perform new isolations, select a new sample, redo growth measurements, and revise their quantitative growth models.

FIGURE 1 | Graphical overview of the course structure. Colors in the circle correspond to different core elements of course-based undergraduate research experiences. Arrows in both directions represent places where students can iteratively repeat and revise, for example re-plating primary enrichment cultures to look for novel colonies or fitting additional models to growth curves.

Figure 1 outlines the steps in the research process for this course, highlighting how the activities relate to iteration and other key features of CUREs, including peer collaboration, discovery, and scientific practices (Auchincloss et al., 2014). We emphasize that students own every step of this project. They select their samples and media, maintain their cultures, and make their own decisions about how to revise their models. Because project ownership may be a critical mediator of students' overall benefits in CUREs (Corwin et al., 2018), the course aims to develop students into independent lab practitioners who are progressing their own projects.

Assessment of Student Attitudinal Gains

This work was approved by the University of California, Davis Institutional Review Board (protocol #1314250). We surveyed students who enrolled in the halophile CURE in the fall quarters of 2018 and 2019. Because this was an elective course that did not satisfy any major-specific requirements, this sample is likely biased toward students who are motivated to pursue laboratory and quantitative experience. As a basis for comparison, we recruited a non-overlapping group of survey respondents in the University Honors Program who had a biology-related major, both because many of our students were in the University Honors Program and because we hypothesized that other University Honors students may also be motivated to pursue research-oriented laboratory and quantitative experience. Some students who enrolled in the CURE were also in the University Honors Program (12 out of 16 students in 2018 and 5 out of 17 students in 2019). This comparison group was surveyed only in 2018, as the set of available comparison students in 2019 mostly overlapped with those from 2018. Students completed an initial (pre) survey online during the first week of the quarter and completed an end-of-quarter (post) survey during the final weeks of the quarter. Students in the CURE completed the surveys during class time, while the comparison group completed their surveys at their own pace, outside of class. Authors JGA and MTF were co-instructors for the CURE in 2018, and authors JGA and REF were co-instructors in 2019.
REF performed all data analysis on the anonymized student responses. The surveys contained multiple-choice questions used in both years, and short-response questions that were added in 2019. Survey data can be contaminated by participants who provide inaccurate responses to questions. We ensured that students had read the questions by including one 5-point Likert-scale question that stated "We use this question to discard from the survey people who are not reading the questions. Please select Agree (not Strongly agree) for this question to preserve your answers." We excluded from our analysis any survey responses with a choice other than "Agree" for this question, for both the initial and end-of-quarter surveys. This excluded two students (out of 52 respondents). One additional student was excluded due to an incomplete survey.

TABLE 1 | Math-Biology Values Instrument items and the constructs they assess.
Interest: Using math to understand biology intrigues/would intrigue me. It is/would be fun to use math to understand biology. Using math to understand biology appeals/would appeal to me. Using math to understand biology is/would be interesting to me.
Utility: Math is valuable for me for my life science career. It is important for me to be able to do math for my career in the life sciences. An understanding of math is essential for me for my life science career. Math will be useful to me in my life science career.
Cost: I have/would have to work harder for a biology course that incorporates math than for one that does not. I worry/would worry about getting worse grades in a biology course that incorporates math than one that does not. Taking a biology course that incorporates math intimidates/would intimidate me.
All questions were framed with a 7-point Likert scale, asking "For each of the statements in this set please rate your agreement with the item in question." Respondents could answer "Strongly disagree," "Disagree," "Somewhat disagree," "Neither agree nor disagree," "Somewhat agree," "Agree," or "Strongly agree," which we translated to a 1 through 7 scale. These questions assess three underlying constructs, labeled Interest, Utility, and Cost.

Final Student Data

In total, we analyzed consenting, quality-controlled, paired pre and post responses from 16 CURE students in 2018 (94% of enrolled students), 17 CURE students in 2019 (77% of enrolled students), and 16 students in the comparison group (11% of the students initially emailed at the start of the quarter). Due to the small sample size of this study, we do not report specific demographic information or attempt to analyze the effect of demographics on student outcomes. Approximately 60% of students in both the CURE and comparison groups were female. 82% of students in the CURE were in their first or second year, as were all students in the comparison group. Student majors varied widely in both groups. Due to the range of years and majors, it is unlikely that a substantial proportion of either the CURE or comparison group shared any particular other courses. 17 of the 33 CURE students were in the University Honors Program, as were all 16 students in the comparison group. To have a larger sample size, data from both years of the CURE were merged and analyzed together. We note that patterns of student responses in the CURE group were similar in both years (Supplementary Figure S1). Our analysis focused on comparing survey responses on the end-of-quarter (post) survey to the responses on the initial (pre) survey, and contrasting these patterns between the CURE and comparison groups.
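To illustrate this quality-control step, the brief Python sketch below drops respondents who answered anything other than "Agree" on the attention-check item in either survey; the column names and data are hypothetical.

import pandas as pd

# Hypothetical paired survey data; attn_pre/attn_post hold the attention-check answers.
df = pd.DataFrame({
    "student": ["s1", "s2", "s3"],
    "attn_pre": ["Agree", "Strongly agree", "Agree"],
    "attn_post": ["Agree", "Agree", "Disagree"],
})

# Keep only respondents who selected exactly "Agree" on both surveys.
valid = df[(df["attn_pre"] == "Agree") & (df["attn_post"] == "Agree")]
print(valid["student"].tolist())  # ['s1']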
Student Attitudinal Changes
To assess changes in student attitudes, we used two previously created assessment instruments. The Math-Biology Values Instrument (Table 1; Andrews et al., 2017) assesses student attitudes toward using mathematics in biology, and is grounded in expectancy-value theory of achievement motivation and performance (Eccles et al., 1983; Wigfield and Eccles, 2000; Eccles and Wigfield, 2002). This theory posits that student achievement depends both on a student's confidence of success and the value they see in completing a task (Wigfield and Cambria, 2010; Corwin et al., 2018). This instrument consists of 11 questions that assess three underlying constructs related to student perceptions of using math in biology: interest in, utility of, and cost of taking biology courses that incorporate math (Table 1). To analyze student changes in their math-biology values, we created a subscore for each construct, averaging across all relevant questions. Finally, we used a Mann-Whitney U-test to compare the mean change in each subscore between the comparison group and the CURE students. To evaluate student attitudes toward science learning and the scientific process, we used a subset of questions from the "Your opinions about yourself and about science" section from the CURE survey (Lopatto et al., 2008; Shaffer et al., 2010). The questions are summarized in Table 2. To avoid the problem of multiple comparisons that arises when testing the statistical significance of many individual questions, we relied on previous efforts that have assessed underlying relationships between questions. Prior analysis of this survey used factor analysis to identify the internal structure of these questions, finding two distinct constructs that are each assessed by multiple questions (Perera et al., 2017), and authors REF and MSG have noted similar correlations in student responses for these questions in a different study of 1,800 student responses (Furrow, Caporale, and Goldman, unpublished). We created two subscores using the relevant questions from our survey and used Mann-Whitney U-tests to assess the statistical significance of differences among groups for each subscore. We label one construct Personal Value, following the nomenclature of Perera et al. (2017).
TABLE 2 | Survey items on attitudes toward science learning, grouped by construct.
Personal value:
Even if I forget the facts, I'll still be able to use the thinking skills I learn in science.
I get personal satisfaction when I solve a scientific problem by figuring it out myself.
I can do well in science courses.
Explaining science ideas to others has helped me understand the ideas better.
Science Learning:
There is too much emphasis in science classes on figuring things out for yourself.
I wish science instructors would just tell us what we need to know so we can learn it.
Science is essentially an accumulation of facts, rules, and formulas.
Calculations:
To be successful in biology, I need to be able to perform quantitative calculations.
Models:
Mathematical models are useful for biology research.
All questions were framed with a 5-point Likert scale, asking "For each of the statements in this set please rate your agreement with the item in question." Respondents could answer "Strongly disagree," "Disagree," "Neutral," "Agree," or "Strongly agree," which we translated to a 1 through 5 scale. The Calculations and Models items were newly created for this survey. Items in the Science Learning construct were reverse-scored for quantitative analysis, as they are negatively framed.
The other construct is based on a smaller subset of the questions found to be correlated in prior work; because the questions all focus on science learning, we label the construct Science Learning. These questions are negatively framed, with greater disagreement expressing more positive attitudes toward science learning. Any quantitative analyses of student changes for this subscore are reverse-coded to assign higher values to more positive attitudes. This section of the survey also included two additional statements posed in the same format (Table 2): "Mathematical models are useful for biology research" and "To be successful in biology, I need to be able to perform quantitative calculations," hereafter labeled as the Models and Calculations questions, respectively. These questions measure student perceptions of the utility of specific mathematical approaches in biology.
Changes in Students' Future Course and Career Plans
To assess students' future course and career plans related to quantitative biology, we asked additional qualitative questions in the survey for the 2019 iteration of the CURE. In the initial survey we asked: "Do you have any plans to pursue future courses and/or a career in quantitative biology? Please briefly explain why or why not." In the end-of-quarter survey, we asked a matched short-answer question: "Has this course changed your plans for pursuing future courses and/or a career in quantitative biology? If so, how?" These end-of-quarter responses were then organized by theme (Table 3).
Math-Biology Values Had Minimal Change
Students taking the CURE did not differ significantly from the comparison students in the three constructs assessed by the Math-Biology Values Instrument (Andrews et al., 2017; p-values of 0.37, 0.41, and 0.78 for interest, utility, and cost, respectively). We note that both groups increased in both the perceived utility and cost of taking courses that include quantitative biology material (Figures 2A,C). However, it was difficult to assess changes in the Interest and Utility scores, especially for the CURE students, because even the start-of-quarter (pre-course) scores were near saturation (Supplementary Figure S1).
Gains in Student Attitudes Toward Learning Science
Students in the CURE had significantly more positive changes for the Science Learning construct (p = 0.0009). This appears to be largely driven by a more negative end-of-quarter response within the comparison group (Figures 2B,D). Although many CUREs have yielded positive attitudinal outcomes for students (e.g., Brownell et al., 2012, 2015; Jordan et al., 2014; Olimpo et al., 2016; Rodenbusch et al., 2016; Kirkpatrick et al., 2019; Murren et al., 2019), a pattern of more negative responses at the end of a course has been found in previous assessments of student perceptions of science (Adams et al., 2006; Semsar et al., 2011; Perera et al., 2017). Negative changes in attitudes may reflect the impact of other courses, as well as general changes in student morale toward the end of an academic term. Therefore, in the absence of a targeted educational intervention, in some cases the default expectation on a survey may be a slight decline in attitudes from the start to the end of a quarter. The CURE and comparison students did not differ significantly in their changes in Personal Value (p = 0.63), although this attribute was difficult to assess because both groups' pre-course responses were very positive (Figures 2C,D).
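The subscore construction and group comparison described above can be sketched in a few lines of code. The following is a minimal illustration in Python (the course materials themselves use R), with hypothetical column names ("group", "construct", "pre", "post"); it is not the authors' actual analysis script.

```python
# Minimal sketch of the analysis described above: per-construct subscores
# have already been averaged into "pre" and "post" columns, and pre-to-post
# changes are compared between groups with a Mann-Whitney U-test.
import pandas as pd
from scipy.stats import mannwhitneyu

def compare_subscore_changes(df: pd.DataFrame, construct: str) -> float:
    """Return the p-value comparing subscore changes between the two groups."""
    sub = df[df["construct"] == construct].copy()
    sub["change"] = sub["post"] - sub["pre"]  # per-student change in subscore
    cure = sub.loc[sub["group"] == "CURE", "change"]
    comparison = sub.loc[sub["group"] == "comparison", "change"]
    # Two-sided Mann-Whitney U-test on the distributions of changes.
    stat, p_value = mannwhitneyu(cure, comparison, alternative="two-sided")
    return p_value

# Example usage (survey_df is a hypothetical long-format table):
# for construct in ("Interest", "Utility", "Cost"):
#     print(construct, compare_subscore_changes(survey_df, construct))
```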
For the two new questions about the utility of quantitative calculations and mathematical models, CURE students showed more positive changes only for the Calculations question, although neither comparison fell below a p-value of 0.05 (Figures 2B,D; p = 0.051 and p = 0.36 for Calculations and Models, respectively). Similar to the pattern for the Science Learning construct, students in the comparison group gave more negative responses about Calculations at the end of the quarter, while students in the CURE had a small average increase. However, we note that these mean changes in Calculations were small relative to the standard error for each mean. For the Models question, every respondent in both groups selected "Agree" or "Strongly agree" on both the initial and end-of-quarter survey. With such positive initial attitudes, there was limited opportunity for student responses to positively change, and neither group showed much average change on this question (Figure 2D).
Some Students Increased Interest in Future Quantitative Experiences
Because students enrolled in the CURE had very positive initial attitudes toward quantitative biology, we added short-response questions in the 2019 survey for CURE students. We hoped that these would provide complementary insight into how student thinking and attitudes had changed as a result of completing the course. Table 3 summarizes student responses to the initial (pre) question "Do you have any plans to pursue future courses and/or a career in quantitative biology? Please briefly explain why or why not." and the end-of-quarter (post) question "Has this course changed your plans for pursuing future courses and/or a career in quantitative biology? If so, how?" These questions were only used in fall 2019 for students enrolled in the CURE course. Six of the 17 respondents expressed an increased interest in pursuing additional coursework or a career in quantitative biology. Two others shared that the course offered some useful clarification about what quantitative biology work entails. Five respondents had a sustained interest; these students expressed an interest in quantitative biology on the initial survey and did not indicate any change in their interest. Finally, four responses were not directly related to the prompts or could not readily be placed into the categories mentioned above (Table 3).
TABLE 3 | Summary of student responses to the short-answer survey questions about future academic plans related to quantitative biology. Theme (# students); [Pre] Do you have any plans to pursue future courses and/or a career in quantitative biology? Please briefly explain why or why not. [Post] Has this course changed your plans for pursuing future courses and/or a career in quantitative biology? If so, how?
Increased interest (6)
"As a biomedical engineer, I plan to focus on bioinformatics/bioimaging during my college career. Utilizing programming and mathematical modeling, I hope to gain more insight to human physiological processes while learning biology."
"I initially planned on pursuing just a major in Biomedical Engineering, but the content in this course has encouraged me to pursue a minor in Computational Biology. I sincerely enjoyed the mathematical applications related to biology and wish to pursue further research into mathematics in life sciences overall."
"Yes! I am planning on pursuing a federal position as a government scientist in general scientific concerns, possibly, and most likely involving quantitative biology."
"This course has changed my plans career-wise in that I am much more comfortable considering a pursuit in quantitative biology." "Yes, quantitative biology is something that I have only recently learned more about and would definitely like to take more coursework if possible." "I am definitely more inclined to investigating the possibilities that are available in quantitative biology. I was hoping to explore this interest further as I enrolled in this class and am happy to say that I definitely found the passion that I was expecting." "I'm not sure yet, but I wish to be a researcher in the future because it is interesting." "Yes I might take more bio modeling courses" "I have no concrete plans, but I am taking this class to see whether such a career would interest me." "I'm still undeclared and don't yet have any idea what my career will be. The course has definitely made me more interested in quantitative biology though. I will probably take BIS23B, and would consider taking more quantitative biology courses in the future." "Yes, because maths is making stuff more interesting and as a comp sci major i like maths "it convinced me that quantitative biology is interesting" Career clarification (2) "I am unsure at the moment as I have never had to previously incorporate much mathematics in my biology courses before. I really enjoy learning and trying to understand the difficult concept in biology, but do find math quite intimidating. Though I know that math is fundamental for all science-related fields, I do not think I can see myself pursuing any careers in quantitative biology in the future." "Overall, this course has helped me gain a better understanding as to how research and biology in certain experiments are best explained through the use of mathematical models. Though math is a challenging and often daunting subject for me, I do believe that it is essential to understanding and applying a bit of it into the science world. I [am] not sure if I can succeed in a career centered around quantitative biology." "While I'm not completely sure whether or not I will pursue a career in quantitative biology, I know for sure that I will continue taking quantitatively based biology classes and therefore I am pretty certain that I will most likely end up with a career in quantitative biology." "It has definitely made me reconsider what exactly I want to do. I am still not sure about what I want to do or what to pursue in the future but this class has given me valuable insight." Sustained interest (5) "Yes as I am a genetics major and am wildly interested in going into research during my university career and beyond" "I'm still interested in pursuing a career in genetics and genomics, this course has helped me basically see and understand the power of quantitative analysis in biology more so than I did before without experience" "I plan to pursue future courses in quantitative biology because biology is starting to become a data-driven science and it is important that undergraduates like myself are able to deal with this trend." "I plan on taking BIS 23B and extra math courses to help understand other biological phenomena." "I am applying for graduate schools in the field of biomedical informatics and computational biology." "No, I already planned to pursue a career in q bio" "Yes, it gives me a chance to apply what I know rather than soaking up information and not being able to do anything with it." 
"No" "I would like to do the quantitative biology major because I am interested in both cs and biology" "After taking this course, I would like to take BIS 23B in the spring (and perhaps BIS 20Q in the winter)." Unclear (4) "Maybe. I'm thinking about researching epigenetics for medical applications. I know that bioinformatics is important and that big data is becoming more prevalent in genetics. I would say quantitative biology isn't my goal, but may be where I end up." "Honestly, the coding component was kind of a shock. It's tough at first but rewarding once you finally get it." "Yes, I plan to work in a research field in Genetics." "I really wanted to take a class that gave me an experience of what a lab actually is like, this class did that and was really enjoyable." "I am unsure of whether I want to pursue future courses and/or a career in quantitative biology because I am still unsure of what such courses/careers would entail." "It hasn't really changed much of my plans." "At the moment, no. I took this course to gain more lab experience." "Plan on doing research" Student pre and post responses are aligned by row. The initial (pre) survey asked students about future plans, and the end-of-quarter (post) survey asked if their plans had changed. These questions were asked only to the CURE students enrolled in 2019 (17 respondents in total). We categorized the paired pre and post responses into four codes: "Increased interest," "Career clarification," "Sustained interest," and "Unclear." Responses that directly mentioned an increase in confidence or interest in future courses or careers in quantitative biology were coded "Increased interest." Responses that discussed insight or understanding about what this work looks like were coded "Career clarification." The "Sustained interest" code was used for responses that didn't explicitly mention any gain in interest, but expressed similarly positive plans in both pre and post. Responses that did not clearly address the survey questions were coded as "Unclear." the prompts or could not readily be placed into the categories mentioned above. DISCUSSION We have outlined an introductory course-based undergraduate research experience that focuses on building students' practical laboratory technique and developing quantitative skills for mathematical modeling. Initial assessment of student attitudes during the first 2 years of this course suggest that, relative to a comparison group, students develop more positive attitudes toward the process of learning science, and potentially also see more value to using quantitative calculations in biology (Figure 2). More than one-third of respondents in 2019 also expressed greater interest in taking additional quantitative courses or pursuing future work in quantitative biology ( Table 3). Changes in students' future quantitative biology course and career plans seemed to be driven in part by an increased interest in or enjoyment of quantitative biology. Among the six students who mentioned a change in their plans, two explicitly discussed increased interest as a factor shaping their future plans (e.g., "The course has definitely made me more interested in quantitative biology.") and two others mentioned positive feelings about doing quantitative biology (e.g., "I definitely found the passion that I was expecting."). 
Some responses also revealed how students might be weighing the relative utility and cost of learning quantitative biology (e.g., "Though math is a challenging and often daunting subject for me, I do believe that it is essential to understanding and applying a bit of it into the science world. . ."). In future course implementations, post-course student interviews might help reveal how different dimensions of these attitudes interact to shape students' future academic decision-making. The Likert-scale assessment of values surrounding the role of math in biology found limited evidence of gains. However, the students enrolled in this course entered with high interest and already believed that quantitative skills were useful in biology, as reflected in the high pre-course scores in the categories of Interest, Utility, Personal Value, and Models (Supplementary Figure S1). This high baseline limited the potential for us to identify gains in these affective categories. At the end of the quarter, students in both the CURE and the comparison group perceived a higher cost (e.g., higher workload or lower grades) to taking biology courses that incorporate math. This might be expected, as even students who have a positive, confidence-building experience may develop more realistic expectations about the potential challenges ahead. However, a student's belief in their ability to succeed at an academic task can affect their academic achievement (Doménech-Betoret et al., 2017), so high perceptions of cost may negatively impact a student's course outcomes. As we assess larger sample sizes of students who complete both this fall course in microbial culturing as well as the quantitative spring course in comparative genomics, we plan to analyze whether the longer experience over two quarters might produce shifts toward lower perceived costs. Previous work assessing the Genomics Education Partnership CURE (Shaffer et al., 2010) suggests that students perceive greater learning benefits from longer experiences working on their research projects (Shaffer et al., 2014). In future versions of this course, we hope to include additional activities focused on helping students build their identity as scientists. By design, the course's focus on student research implicitly places students in the role of research scientists. However, student scientific identity could potentially be developed more explicitly by diversifying the structure of meetings and assignments to promote the kind of informal critical thinking, curiosity, and collaboration that occurs in research labs. Examples might include journal clubs, science coffee chats, poster making and presenting, and academic writing practice. To expand collaboration, one might convert part of each lab into a "lab meeting" in which students discuss with peers and take turns summarizing primary literature or sharing updates on their research (e.g., in the CURE presented by Oufiero, 2019). In addition, we would like to help students understand that failure, mistakes, and repeated iteration of data gathering and analysis are normal parts of scientific inquiry. Although the instructors in this CURE discussed these themes in passing during lecture and lab (Seidel et al., 2015), these goals were not explicit in our lesson planning or assessments. One logistical challenge for this course was the organization of isolate metadata.
Student project ownership may be a foundational source of student benefits from CUREs (Corwin et al., 2018), so we asked students to own their data from sample to final isolate to quantitative growth data. After students struggled to maintain a complete chain of metadata for their samples in 2018, we implemented a Google Sheets system in which all new data were added as new columns to a continually growing classroom document. This became cumbersome by the end of the quarter, but also provided a shared workspace in which students could note each other's discoveries and feel like a part of a team effort. Although we focused on salt-dependent growth rates for halophiles, this course could be adapted to other CUREs based around alternative research questions. Other phenotypic investigations might include color production, sugar utilization, halophilic gas vesicle production, or the production of easy-to-spot products like polyhydroxyalkanoates, all of which require a separate set of research techniques scalable to a course timeframe (e.g., colorimetric assays, visible light microscopy, etc.). One could expand on this to culture strains in various growth conditions such as shaking, oxygenation, salinity, pH, nutrient availability, and/or temperature. The most promising alternatives are likely to be those which use a highly selective growth medium, preventing the cultures from being swamped by local contaminants. Our high-salinity media helped reduce contamination problems; extreme temperature, pH, or unusual food sources might be similarly effective. The skills developed and data created from microbial culturing provide a productive way to engage undergraduate students in course-based research. By combining laboratory skills with growth rate modeling, students learn quantitative skills in a low-stakes environment in which they have ownership over the data they are generating and analyzing. The current version of this quantitative biology CURE emphasizes the growth rates of halophilic microbes, but we expect that this model for course design and implementation can be readily applied to a broad range of organisms and phenotypes.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.
AUTHOR CONTRIBUTIONS
AY, KD, HK, JE, JA, MG, MF, RF, and SA contributed to conception and design of the course. MG, MF, and RF contributed to conception and design of the educational assessment. JA, MF, and RF each served as co-instructors for at least one iteration of the course. HK and SA each served as teaching assistants for one iteration of the course. AY provided technical laboratory support. RF performed the qualitative and statistical analysis. KD, HK, MF, RF, and SA wrote the initial draft of the manuscript. HK, MF, MG, and RF reviewed and edited the initial draft.
ACKNOWLEDGMENTS
We thank Ashley Vater and David Coil for helpful feedback on course design and implementation; Elizabeth Moore for creating the artwork used in Figure 1; and Sean Conroy-Dey, the BioInnovation Group, and the Molecular Prototyping and BioInnovation Lab student staff for technical help preparing and running the laboratory sessions. We are grateful to Rafael Montalvo-Rodríguez, Valeria Pérez-Irizarry, and Krismarie Carrasquillo-Nieves for collecting and sharing environmental samples from Cabo Rojo, Puerto Rico.
2020-11-05T14:09:53.484Z
2020-11-06T00:00:00.000
{ "year": 2020, "sha1": "240368110d9bb7a61e9f25a0595665ed1ba631f8", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2020.581903/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "240368110d9bb7a61e9f25a0595665ed1ba631f8", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Medicine", "Mathematics" ] }
54208828
pes2o/s2orc
v3-fos-license
Assessing the extreme risk of coastal inundation due to climate change: A case study of Rongcheng, China
Abstract. Extreme water levels, caused by the joint occurrence of storm surges and high tides, often lead to severe floods along coastlines. Given ongoing climate change, this study explored the effect of future sea-level rise on extreme inundation risk by combining a Pearson Type III (P-III) model with a loss assessment model. Taking Rongcheng as a case study, the integrated risk of extreme water levels was assessed for 2050 and 2100 under three Representative Concentration Pathways (RCP) scenarios of 2.6, 4.5, and 8.5. Results indicated that the increase in total direct losses would reach an average of 60 % in 2100, given a 0.82 m sea-level rise under RCP 8.5. In addition, the affected population would increase by 4.95 % to 13.87 % and the affected GDP (Gross Domestic Product) by 3.66 % to 10.95 % in 2050, while the increases in affected population and GDP in 2100 would be about twice those in 2050. Residential land and farmland would be under greater flooding risk than other land-use types, in terms of both higher exposure and higher losses. Moreover, this study indicated that sea-level rise shortens the recurrence period of extreme water levels significantly, so that extreme events would become common. Consequently, the increase in the frequency and possible losses of extreme flood events suggests that sea-level rise is very likely to exacerbate the extreme risk of the coastal zone in the future.
Introduction
Coastal inundation is predominantly caused by extreme water levels when storm surges are concurrent with astronomical high tides (e.g. Pugh, 2004; Quinn et al., 2014). Statistically, extreme flood events have occurred frequently and caused huge devastation (Trenberth et al., 2015). Recent research indicated that sea-level rise, with global mean rates of 1.6 to 1.9 mm yr^-1 over the past 100 years (Holgate, 2007; Church and White, 2011; Ray and Douglas, 2011), has been a strong driver of floods (Winsemius et al., 2016). Global mean sea-level is expected to rise by more than 1 m by the end of this century (Levermann et al., 2013; Dutton et al., 2015), even if global warming can be kept within 2 °C. Thus, coupled with continuous sea-level rise induced by climate change, the future coastal inundation risk, in terms of both hazards and possible losses, deserves attention for disaster mitigation. Projections of extreme water levels are indispensable for inundation risk assessment. Most research to date has focused on coastal flooding caused by storm surges (e.g. Bhuiyan and Dutta, 2011; Klerk et al., 2015). At present, exceedance probabilities of current extreme water levels, induced by tropical and extra-tropical storm surges, have been estimated (Haigh et al., 2014a, b).
However, on account of sea-level rise, coastal flooding disasters would become more serious (Feng et al., 2016), and 85% of global deltas experienced severe flooding in recent decades (Syvitski et al., 2009). Feng and Tsimplis (2014) showed that extreme water levels around the Chinese coastline increased by 2.0 to 14.1 mm yr^-1 from 1954 to 2012. Based on an ensemble of projections of global inundation risk, it has been argued that the frequency of flooding in Southeast Asia is likely to increase substantially (Hirabayashi et al., 2013). By 2030, the portion of global urban land exposed to high-frequency flooding would increase to 40% from a 30% level in 2000 (Guneralp et al., 2015). Conservative projections suggested that over half of global delta surface area would be inundated as a result of sea-level rise by 2100 (Syvitski et al., 2009).
The impacts of coastal flooding on societies and economies have been considered, and methods have been established to estimate the possible losses (e.g. Yang et al., 2016). With continuing socio-economic development, the large aggregation of coastal population and assets will lead to increased exposure to inundation in the future (Mokrech et al., 2012; Strauss et al., 2012; Alfieri et al., 2015). Without adaptation, by 2100, 0.2% to 4.6% of the global population would be at risk of flooding, and expected annual GDP losses would be 0.3% to 9.3% (Hinkel et al., 2014). In particular, urbanization in China has been among the most rapid in the world, and many low-lying coastal cities are confronted with high probabilities of flooding (Nicholls and Cazenave, 2010). More than 30% of China's coast was assessed as of 'high vulnerability' according to Yin et al. (2012), and the population exposed to flooding risk is the highest in the world (Neumann et al., 2015). A number of China's cities, including Guangzhou, Shenzhen, and Tianjin, were in the top 20 global cities in terms of their exposure to 100-year inundation risk and huge average annual losses caused by rising water levels (Hallegatte et al., 2013).
Distinguishing the risk of extreme floods considering sea-level rise caused by climate change is vital for disaster mitigation and adaptation on a large time scale. In this study, the flooding from extreme water levels was simulated by a combination of storm surges, astronomical high tides, and sea-level rise heights under different RCP scenarios. Using Rongcheng City as a case study, a comprehensive multi-dimensional analysis was presented to assess the inundation risk on two time scales, 2050 and 2100, and under three RCP scenarios, 2.6, 4.5, and 8.5. The main objectives are to (1) investigate the expansion of the inundated area and the increase in expected direct losses; (2) analyze the effect of sea-level rise on population and GDP; and (3) reveal the future hazard change of extreme water levels through the probability of occurrence.
Study area
Rongcheng City, located at the tip of the Shandong Peninsula, China, is surrounded on three sides by 500 km of Yellow Sea coastline (Fig.
1). This city has low elevation and flat topography and covers an area of more than 1,500 km². Its population of 0.67 million people and GDP of $12.31 billion make it one of the top one hundred counties in China. Rongcheng experiences a mid-latitude monsoonal climate, with an average annual rainfall of 757 mm and an average temperature of 11.7 °C over nearly 50 years (data from http://data.cma.cn/). It is also in a critical geographical position for trade exchange and the modern economy, facing Korea across the Yellow Sea. Substantial additional capital investment is expected in this region because the Shandong Peninsula National High-tech Zone was approved as a part of the National Independent Innovation Demonstration Zone by China's State Council in 2016 (http://www.gov.cn/). An inundation risk assessment for Rongcheng City is urgent for its long-term development, especially given the sea-level uptrend due to climate change.
Assessment process and dataset
The assessment process of inundation risk followed three steps. First, extreme water levels were calculated from storm surge data, astronomical high tides, and sea-level rise heights by the method of Pearson Type III (P-III). Second, the inundated area and depth were identified by the flood model (the four nearest neighbors algorithm) using the extreme water levels resulting from the first step and the Digital Elevation Model (DEM). Third, inundation risk was assessed via the direct losses model and the change in recurrence periods. The dataset is summarized in Table 1.
Construction of the cumulative probability distribution of extreme water levels
Extreme water level is a compound event caused by storm surges and astronomical high tides, while sea-level rise also contributes to extreme water levels under global climate change. Therefore, in this study, the current extreme water levels (CEWLs) and future extreme water levels were constructed. The latter, a combination of CEWLs and projected heights of sea-level rise under different RCP scenarios, were defined as the scenario extreme water levels (SEWLs). The cumulative probability distribution curves of CEWLs and SEWLs were refitted using a P-III model as in Equation (1); the details of this method are given in Wu et al. (2016):
p = P(X ≥ x) = (β^α / Γ(α)) ∫_x^∞ (t − α0)^(α−1) e^(−β(t−α0)) dt. (1)
In this expression, α, β, and α0 are the shape, scale, and location parameters, respectively; x denotes the annual maximum water levels; p is the probability of occurrence. Furthermore,
CEWL = ST + AHT, (2)
where ST is the storm surge and AHT is the astronomical high tide;
SEWL = CEWL + SLR, (3)
where SLR is the predicted height of sea-level rise in the future; and
T = 1/p, (4)
where T stands for the recurrence period of an extreme water level, and the T-year recurrence level means that an event of extreme water level has a 1/T probability of occurrence in any given year (Cooley et al., 2007).
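The P-III fitting and recurrence-level computation in Equations (1)-(4) can be sketched with standard statistical tooling. The following Python fragment is a minimal illustration using scipy's Pearson Type III distribution; the annual-maximum series here is hypothetical stand-in data, not the tidal station records used in this study.

```python
# Minimal sketch of the P-III frequency analysis: fit the three parameters
# to an annual-maximum water-level series, then compute T-year return levels.
import numpy as np
from scipy import stats

annual_max_m = np.array([2.1, 2.4, 2.2, 2.8, 2.5, 2.3, 2.9, 2.6, 2.4, 3.1])

# Fit the shape (skew), location, and scale parameters of the P-III model.
skew, loc, scale = stats.pearson3.fit(annual_max_m)

# The T-year return level is the level exceeded with probability p = 1/T
# in any given year (Equation 4), i.e. the inverse survival function at 1/T.
for T in (50, 100, 200, 500, 1000):
    level = stats.pearson3.isf(1.0 / T, skew, loc=loc, scale=scale)
    print(f"{T:5d}-year CEWL: {level:.2f} m")
```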
Because of the uncertain impacts of sea-level rise on storm surges, the statistical probabilities of storm surges in this model were assumed to be unchanged in the future (e.g. Hunter, 2012; Kopp et al., 2013; Little et al., 2015). The extreme water levels were mainly constructed from historical records of the Chengshantou and Shidao tidal stations located in Rongcheng City (Fig. S1 in Supplementary data). In order to reduce the error caused by the spatial distribution of extreme water levels, recorded data from the six surrounding tidal stations (Longkou, Penglai, Yantai, Qianliyan, Xiaomaidao, and Rizhao) on the Shandong Peninsula were also incorporated, using the inverse-distance-weighted (IDW) technique in ArcGIS software.
Identification of flooding
Inundated area was extracted from the flood model using the four nearest neighbors algorithm based on a high-resolution DEM (10 m × 10 m) and extreme water level layers (10 m × 10 m cells generated in ArcGIS). The flooding criteria were that the extreme water level of a layer cell must be greater than or equal to the elevation of the DEM, and that inundated cells must be connected to the coast (Xu et al., 2016). The impacts of the elevations of urban landscapes and other buildings on the flooding process were not considered in this study. In this step, inundated area and depth could be computed.
Inundation risk assessment
Expected direct losses were calculated using inundated area, inundated depth, vulnerability curves, and loss values for each land-use type. The land-use map of 30 m resolution was resampled to 10 m cells using the raster processing tool in ArcGIS in order to match the inundated cells. The assessment model for expected direct losses is
EDL = Σ_i A_i × r_i(h) × V_i, (5)
where EDL stands for the expected direct losses of extreme floods; i denotes the land-use type, including residential land, farmland, woodland, grassland, and unused land; A denotes inundated area; h stands for flood depth; r stands for the loss rate (vulnerability curves); and V stands for the per-unit loss value ($/m²). The amounts of affected population and GDP were estimated based on gridded distribution data of population and GDP (published for China in 2010 at a resolution of 1 km, http://www.resdc.cn/).
3 Results and analysis
Inundated area
In the absence of adaptation, the areas inundated by CEWLs and SEWLs are shown in Fig. 2. At the present stage, inundated areas range from 156.60 km² to 168.8 km² when Rongcheng City encounters extreme water levels. However, an expanding trend in inundated area is inevitable because of future sea-level rise; in this analysis, the smallest increase in inundated area would be seen under RCP 2.6 and the largest under RCP 8.5, and the area would be enlarged significantly by 2100 compared to 2050 as sea-level rise continues. The extreme scenario, under RCP 8.5, predicts that the total area threatened by flooding ranges from 168.35 km² to 186.46 km² in 2050, and that it may be between 187.72 km² and 199.18 km² by 2100. According to this projection, the maximum increase in area is around 13% by the end of the century. At the high degree of each RCP scenario, the increase in inundated area by 2100 is likely to range from 14.21% to 19.54% given a 100-year recurrence. Summary statistics of the future increase in inundated area for 50 to 1,000-year recurrence periods are presented in Table S1(a).
Land-use types of residential land, farmland, woodland, and grassland are involved in the estimation of total inundated area, while water bodies and unused land could be ignored in this study. Thus, summarizing the inundated data, the total inundated land-use areas under RCP 8.5 are shown in Fig. 3. Results show that residential land and farmland are more exposed to extreme water levels than woodland and grassland. Indeed, if Rongcheng City were currently subjected to extreme flooding, 42.63 km² to 46.77 km² of residential land and 34.15 km² to 39.97 km² of farmland would be affected, based on 50 to 1,000-year recurrence periods, respectively. Given the high degree RCP 8.5 scenario, inundated areas of residential land and farmland would increase to 47.61 km² and 41.13 km² in 2050, and to 52.88 km² and 51.47 km² in 2100, respectively. More seriously, the areas of residential land and farmland exposed to flooding would rise to around 50 km² in 2050 and 56 km² in 2100, respectively. The flood map (Fig. S2) shows the extension of inundated area by 2050 and 2100 given a 100-year recurrence period.
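As a rough illustration of the flood identification criteria and the loss model (5) described above, the following Python sketch implements a four-neighbor flood fill that keeps only cells hydraulically connected to the coast, followed by a per-land-use aggregation of losses. The grids, vulnerability curves (loss-rate functions), and unit values are hypothetical placeholders, not the ArcGIS workflow or data of this study.

```python
# Sketch of the two flooding criteria: water level >= elevation, and
# connectivity to the coast via the four nearest neighbors (BFS flood fill).
from collections import deque
import numpy as np

def flood_mask(dem, water_level, coast_cells):
    """Boolean grid of inundated cells connected to the coast."""
    rows, cols = dem.shape
    flooded = np.zeros_like(dem, dtype=bool)
    queue = deque(c for c in coast_cells if water_level >= dem[c])
    for c in queue:
        flooded[c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # four nearest neighbors
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and not flooded[rr, cc] \
                    and water_level >= dem[rr, cc]:
                flooded[rr, cc] = True
                queue.append((rr, cc))
    return flooded

def expected_direct_losses(dem, landuse, water_level, coast_cells,
                           loss_rate, unit_value, cell_area=100.0):
    """EDL = sum over land-use types of area * loss rate(depth) * unit value.

    cell_area defaults to 100 m^2, matching the 10 m x 10 m cells in the text;
    loss_rate maps each land-use code to a depth-dependent vulnerability curve.
    """
    flooded = flood_mask(dem, water_level, coast_cells)
    depth = np.where(flooded, water_level - dem, 0.0)
    edl = 0.0
    for lu, rate_fn in loss_rate.items():
        cells = flooded & (landuse == lu)
        edl += np.sum(rate_fn(depth[cells]) * unit_value[lu] * cell_area)
    return edl
```

The breadth-first search enforces hydraulic connectivity, so low-lying inland depressions that are below the water level but cut off from the sea are not counted as inundated, in line with the criteria above.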
Expected direct flood losses
Flood damage does not only depend on inundated area and depth, but is also related to the loss rates and values of the exposed land-use types. The total expected direct flood losses would be exacerbated with sea-level rise (Fig. 4); for current extreme floods, loss magnitudes range from $0.53 billion to $0.69 billion for 50 to 1,000-year recurrence period CEWLs. Predictions for future extreme floods show an increase of more than 20% when the sea-level rise exceeds 0.3 m; however, the increase rates expand to beyond 40% given a 0.5 m sea-level rise. Indeed, by 2050, estimated losses under the RCP 2.6 scenario would be between $0.6 billion and $0.84 billion. These losses would be slightly higher by 2050 under the RCP 4.5 and 8.5 scenarios. Analyses show that expected direct losses would be further aggravated by the end of the century. By 2100, the smallest range of expected damage, given the low degree RCP 2.6 scenario, would be between $0.63 billion and $0.81 billion. However, the maximum range of expected damage under the high degree RCP 8.5 scenario is predicted to be between $0.88 billion and $1.08 billion. It is worth noting that the increase rates reach an average of 60% under the high degree of the RCP 8.5 scenario with a 0.82 m sea-level rise. The largest increase in predicted direct flood damage would be up to 29% in 2050 and 67% in 2100. Additional statistical information on the increase in future expected direct losses is presented in Table S1(b). The losses for the main land-use types under the high degree RCP 8.5 scenario are shown in Table S2, and the results indicate that residential land would be seriously affected by extreme floods.
Population and GDP affected by extreme water levels
With rapid socio-economic development, population and GDP have become concentrated along the coastline. Thus, a large proportion of both population and GDP is expected to be affected by extreme floods. Affected population and GDP exposed to flooding would be higher with the expansion of the inundated area as a direct result of sea-level rise.
The number of affected people under the RCP scenarios of 2.6, 4.5, and 8.5 is shown in Fig. 5a. The expected population magnitudes, which would suffer from 50 to 1,000-year CEWLs, range between about 70,000 and 79,000. In both 2050 and 2100, this increment is sharp with an enlarged recurrence period, and the maximum increment of affected population approaches 20,000 in 2050 and 30,000 in 2100. Considering the intermediate scenario of RCP 4.5, around 5.57% to 12.36% more people would be confronted with the inundation risk in 2050, while the affected population would increase by 9.52% to 23.53% in 2100. Detailed data on the increase in affected population are provided in Table S1(c).
Similarly, sea-level rise also leads to an increased GDP exposure; the scope of affected GDP is presented in Fig. 5b. In the case of no sea-level rise, the total GDP of Rongcheng City at risk from extreme floods would be between $1.72 billion and $1.88 billion. As the inundated area increases due to sea-level rise, the change in affected GDP is obvious. By 2100, projections for affected GDP increase from $1.82 billion to $2.23 billion. At the most extreme, under the high degree RCP 8.5 scenario, affected GDP would increase by approximately 20% by the end of the century. Additional information about increases in affected GDP is given in Table S1(d).
Variation of recurrence periods due to sea-level rise
Refitting the SEWLs, which combine CEWLs with future sea-level rise, demonstrates that the recurrence periods would decrease sharply due to climate change (Fig.
6). Results suggest that, by 2050, the recurrence periods of extreme water levels would be shortened rapidly. For example, in 2050, the 100-year recurrence period for the CEWL is likely to fall to 31 years (RCP 2.6), 26 years (RCP 4.5), and 21 years (RCP 8.5). In 2100, more seriously, CEWLs would occur even more frequently, becoming common events under the high-degree RCP scenarios. Among the different RCP scenarios, the shrinking of recurrence periods under RCP 8.5 is more significant than under either the RCP 2.6 or 4.5 scenarios. The worst case is that the 1,000-year recurrence period of the CEWL would occur every three years; once-in-a-hundred-year events are likely to become common, even occurring annually, by the end of this century. Such shortening of recurrence periods would significantly increase the flooding risk over the coming decades.
Discussion
The extreme risk of inundation was assessed by integrating both the hazard of extreme water levels and their expected losses. In this study, the risk increase induced by sea-level rise was highlighted by comparing current with future extreme water levels. SEWLs were recalculated by combining CEWLs with sea-level rise in 2050 and 2100 under RCP 2.6, 4.5, and 8.5. The results showed that recurrence periods would likely be reduced by more than 70% by 2050, and this decrease could even exceed 80% by 2100 under the high RCP scenarios. In a similar study, Nicholls (2002) reported that a 0.2 m rise in sea-level could markedly reduce the recurrence periods of extreme water levels, converting a ten-year high water event into a six-month event. Indeed, as recurrence periods shorten, low-lying coastal areas would have a higher probability of flood destruction over the next few decades.
Continuous sea-level rise would enhance the potential destructive force of future flooding. For example, the results demonstrated that the potential inundated area would be extended by 3% to 11% in 2050 and by 5% to 20% in 2100. For comparison, sea-level rise increased the inundated area exposed to a cyclonic storm surge in Bangladesh by 15% with a 0.3 m rise (Karim and Mimura, 2008). Results showed that residential land and farmland were more vulnerable to sea-level rise, coupled with a large potential inundated area and a high proportion of expected direct damage. Residential land was under the biggest risk: according to projected SEWLs under future RCP scenarios, its expected direct losses would reach up to $0.6 billion in 2050 and even exceed $1.00 billion by 2100. To put these predicted losses into context, average annual flood losses of Tianjin City were estimated to be as high as $2.3 billion by 2050 (Hallegatte et al., 2013). It was predicted that Shanghai, susceptible to high water levels, would be 46% underwater by 2100, with its seawalls and levees submerged by rising sea-levels (Wang et al., 2012). A range of studies highlighted the fact that many coastal cities, including San Francisco, would experience flooding in the near future as a result of rising sea-levels rather than heavy rainfall (Gaines, 2016). There is no doubt that rising sea-levels would leave a large number of people and properties facing flooding risk, especially given the fast growth of China's coastal cities (McGranahan et al., 2007; Smith, 2011).
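The recurrence-period shortening discussed above follows directly from Equations (1)-(4) under the stated assumption that the storm-surge statistics remain unchanged, so that sea-level rise acts as a pure shift of the fitted CEWL distribution F. The following is a sketch of the implied relation, not a formula quoted from this study:

```latex
% New recurrence period T' of today's T-year level z_T after a rise SLR:
T' \;=\; \frac{1}{P(\mathrm{CEWL} + \mathrm{SLR} \ge z_T)}
    \;=\; \frac{1}{1 - F(z_T - \mathrm{SLR})} \;\le\; T,
\qquad z_T := F^{-1}\!\left(1 - \tfrac{1}{T}\right).
```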
Given the future shortening of recurrence periods, exposure of property and assets to extreme floods would become more likely. For instance, results showed that under the RCP 8.5 scenario, an extreme event that would be expected once every 1,000 years and cause damage of $0.7 billion would occur about once every 50 years by 2050, and even once every two years by 2100. Under these circumstances, many people and industries at extreme risk from floods would have no choice but to retreat from coastal regions. However, studies indicated that most coastal populations are completely unprepared for an increasing risk of extreme floods, especially in developing countries (Woodruff et al., 2013).
Although this study demonstrated that sea-level rise would significantly increase the flooding risk, some uncertainties remain. First, on account of spatial heterogeneity, regional sea-level rise should be projected in future work. The objective of this paper is to reveal the impact of sea-level rise under global warming on extreme floods, so the projection of global mean sea-level rise was used for its availability, which is consistent with Wu et al. (2016). Nevertheless, owing to regional crustal stability, there is no obvious land subsidence. Second, the combination of climate and weather extremes, including storm surges, astronomical tides, rainfall, and sea-level rise, needs to be considered, as these underlie and amplify extreme events as well as generating extreme conditions (Leonard et al., 2014). Because the coastal regions of China have a monsoonal climate, combining inundation risk assessment with consideration of rainfall is particularly important (Bart et al., 2015; Wahl et al., 2015). Third, human activities, which affect socio-economic development and alter feedbacks from climate change, are the main driving force of future inundation risk (Stevens et al., 2015) and should be a focus of future research. Consequently, deeper exploration of these uncertainties will be undertaken.
Conclusions
This study assessed the inundation risk resulting from extreme water levels with future projections
for 2050 and 2100 under different RCP scenarios. Results demonstrated that continuous sea-level rise would augment the inundation risk by shortening recurrence periods and increasing the expected losses and potential effects. (1) Sea-level rise would make low-lying coastal regions more likely to be exposed to floods because of the shortening of the recurrence periods of extreme water levels. (2) Inundation risk would be increased through the growth in inundated area, direct damage, and affected population and GDP. (3) The analysis showed that sea-level rise principally threatens the land-use types vital for human survival, especially residential land and farmland. (4) Projections showed that inundation risk would continue to increase up to 2100 and would be most serious under the RCP 8.5 scenario. In summary, these results revealed that sea-level rise dramatically increases the flooding risk. Effective mitigation and adaptation plans are needed to deal with the increasing coastal inundation risk.
Figure captions:
Fig. 1 Map showing the geographic locations of Rongcheng City and the main tidal gauge stations.
Fig. 2 Inundated areas under different RCP scenarios for 2050 and 2100. The blue solid line denotes the inundated area curve as it changes with CEWLs, while the areas outlined by green and red stippled lines denote the extent of inundated areas projected on the basis of SEWLs under low and high degree RCP scenarios for 2050 and 2100, respectively. The green and red solid lines denote the median degree for each RCP scenario. Similar conventions are used for Figs. 4 and 5.
Fig. 3 Predicted inundated areas broken down by different land-use types given 50 to 1,000-year recurrence periods in 2050 (a) and 2100 (b). RCP 8.5 is taken as an example in this paper; the inundated areas of different land-use types under RCP 2.6 and 4.5 are similar.
Fig. 6 Variation in recurrence periods of CEWLs and SEWLs in 2050 and 2100 under RCP 2.6, 4.5, and 8.5 scenarios. In each RCP scenario, the variation in five representative recurrence periods of 50, 100, 200, 500, and 1,000 years is shown. The yellow boxes stand for the recurrence intervals in 2050 and the blue boxes for the recurrence intervals in 2100. The data presenting the variation of recurrence periods refer only to the Chengshantou and Shidao stations.
2018-12-04T12:35:19.191Z
2017-01-30T00:00:00.000
{ "year": 2017, "sha1": "c6b25bdfef41a38ea2e845fb0a62af961f6dbf41", "oa_license": "CCBY", "oa_url": "https://www.nat-hazards-earth-syst-sci-discuss.net/nhess-2017-31/nhess-2017-31.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "ed01dd1f28829fbba1a09ac1eaacaf1d8faa8045", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
119685474
pes2o/s2orc
v3-fos-license
Vortices in rotating Bose-Einstein condensates confined in homogeneous traps
We investigate analytically the thermodynamical stability of vortices in the ground state of rotating 2-dimensional Bose-Einstein condensates confined in asymptotically homogeneous trapping potentials in the Thomas-Fermi regime. Our starting point is the Gross-Pitaevskii energy functional in the rotating frame. By estimating lower and upper bounds for this energy, we show that the leading order in energy and density can be described by the corresponding Thomas-Fermi quantities, and we derive the next order contributions due to vortices. As an application, we consider a general potential of the form V(x,y) = (x^2+lambda^2 y^2)^{s/2} with slope s \in [2,infinity) and anisotropy lambda \in (0,1], which includes the harmonic (s=2) and 'flat' (s -> infinity) traps, respectively. For this potential, we derive the critical angular velocities for the existence of vortices and show that all vortices are singly-quantized. Moreover, we derive relations which determine the distribution of the vortices in the condensate, i.e. the vortex pattern.
A rotating Bose-Einstein condensate (BEC), unlike a classical fluid, does not rotate like a solid body. Instead, beyond a critical angular velocity quantized vortices appear, manifesting the genuine quantum character of the system. Indeed, vortices in BECs were observed in 1999 for the first time (see Refs. [29] and [27,28]). Theoretical studies were already presented before (see e.g. [32] for one of the earliest papers on the subject) and have since grown into a substantial branch of research of their own (see e.g. [3,8,11,12,15,16,17,25,36]). A general treatment of BECs can be found in the textbooks [30] and [31]. Most of the theoretical studies have been undertaken in the framework of the Gross-Pitaevskii (GP) theory, whose validity as an approximation of the quantum mechanical many-body ground state was established in [22] for the non-rotating case and in [24] for rotating systems. Particular attention has been paid to the so-called Thomas-Fermi (TF) regime of strong coupling. This is especially true for the study of vortex structures (see the monograph [1]). In [18,19] a rigorous analysis of vortices for BECs in harmonic anisotropic trap potentials was achieved for a GP-type functional in the TF limit. A previous analysis was developed in [35] in the context of superfluids. The methodology of those papers originates from [7], where a rigorous analysis of vortices in Ginzburg-Landau models with vanishing magnetic field was developed in the regime which corresponds to the TF limit. In [33] and [34], general results on symmetry breaking which are not limited to the TF regime were proven in traps of arbitrary shape. Within the GP theory, the properties of vortices are determined by two physical parameters apart from the external trap, namely the angular velocity and the interaction strength between the particles. In this paper, we consider the ground state of rotating 2D Bose-Einstein condensates which are trapped in asymptotically homogeneous anisotropic potentials rotating with angular velocity Ω. The aim of this investigation is to deduce analytically the Gross-Pitaevskii energy and density in the presence of vortices and to derive their properties in the TF regime. We consider thermodynamical conditions for vortex existence, i.e. we are looking for angular velocities which reduce the total energy in such a way that vortices are energetically favoured to appear. This work was originally inspired by the papers [4] and [8], which consider anisotropic harmonic potentials.
There, and for instance in Refs. [15,25,32], it was established by numerical methods that vortices are singly-quantized. We show here by analytical estimates that this is true, in particular, for a very large class of trapping potentials. In fact, the majority of studies use numerical and variational methods for a limited number of trap potentials (e.g. harmonic or harmonic-plus-quartic), whereas we derive analytical formulae for a very large class of potentials. Thereby, we try to present the analysis in such a way that both the physical ideas and the mathematical estimates are brought out clearly. This paper is organized as follows: In Section 2, we state the setting and present the main result. We decompose the condensate wave function into a vortex-free part and a vortex-carrying part. This allows a splitting of the underlying energy functional into separate contributions which can be estimated subsequently. In Section 3, we study the leading asymptotics of the energy and density. In Section 4, we justify a model for the structure and number of vortex cores which is compatible with the considered order of magnitude of the angular velocities. Sections 5-7 contain lower and upper bound estimates of the vortex-carrying energy contributions in terms of the winding number of the vortices and the coupling parameter. In Section 8, we specify an external potential which is of a general anisotropic homogeneous form. For this potential, we deduce the critical angular velocity for the appearance of one or a finite number of vortices. The leading orders of the energy in the presence of vortices are calculated, and it is shown that all vortices have winding number one, i.e. they are all singly-quantized. Furthermore, we deduce relations which determine the distribution of the vortices in the condensate, i.e. the vortex pattern. Finally, in Section 9 we present the conclusions.
Indeed, it is only meaningful to consider this reference frame as far as the (temporal) stability of structures which appear due to the rotation is concerned: The external trap is time-independent and the states are stationary with respect to that frame (see e.g. Ref. [8]). The function u(r) is a complex field (the complex conjugate is denoted as u*). We write the associated polar decomposition as u = |u|e^{iS_u}, where |u|² is proportional to the density of condensed particles, with normalization ∫_{R²} |u|² = 1, and S_u is the phase function. The minimizer of (1) is called the order parameter or 'wave function of the condensate' in the rotating frame. The external trap potential is denoted by V, and N is the number of particles of mass m. The third term in (1) describes the effective interaction between the particles, where the coupling constant in 2D is given by g = √(8π) ℏ²a/(mh), with the 3D scattering length a and the thickness h of the system in the strongly confined direction, which we choose to be the z-axis, so that the system is effectively 2D (in the x-y-plane). We denote by × the vector product in R³, r = (x, y, 0), and Ω̃ = (0, 0, Ω̃) is the angular velocity vector, assuming that the gas rotates around the z-axis. An important parameter, consisting of the scattering length and a density, is given by the 'healing length' ξ. It is originally defined by setting ℏ²/(2mξ²) = 2πℏ²aρ/m, where the r.h.s. is the energy per particle for gases in a box in the dilute limit ρa³ → 0 with density ρ, so ξ = 1/√(4πaρ).
For inhomogeneous and rotating systems, the healing length may be defined accordingly by using an appropriate (mean) value for the density. In particular, the healing length determines the effective radius of a vortex core in rotating systems. In 2 dimensions, the ratio between the healing length and the characteristic length L of the system, which is set by the external trap or box respectively, is ξ/L = ε, where we introduce the dimensionless parameter ε. In this paper, we will be concerned with the TF limit, where this ratio tends to zero (meaning physically that 0 < ξ ≪ L, i.e. 0 < ε ≪ 1). However, when performing the TF limit in a naive way for external potentials where the gas can spread out indefinitely, one obtains a trivial result, namely the minimizer goes to zero and the energy to infinity. In order to obtain a non-trivial limit, it is then necessary to rescale all lengths by an ε-dependent factor (see also [9]). Suppose V is homogeneous of order s, i.e. V(γr) = γ^s V(r) for γ > 0. We rescale the energy functional (1) by setting r = kr′ and u(r) = u′(r′)/k with Ng/2 = ħ²/(4ε²m) and k = (ħ²/(4ε²m))^{1/(s+2)}. Then we have (2) with ∫|u′|² = 1. Choosing ħ = 1 = m and inserting k, (2) becomes E^GP[u] = (16ε⁴)^{1/(s+2)} E^GP′[u′] with the energy on the r.h.s. (omitting the primes)

E^GP[u] = ∫_{R²} [ ½|∇u|² + (1/4ε²)( V(r)|u|² + |u|⁴ ) − i u* Ω(ε)·(∇u × r) ],   (3)

and the scaled angular velocity Ω(ε) is related to the original unscaled one by

Ω̃ = (16ε⁴)^{1/(s+2)} Ω(ε).   (4)

For brevity, we will also write Ω, but it should be kept in mind that Ω depends on ε after scaling. In the forthcoming, we study the functional in (3), which can also be written in the following form,

E^GP[u] = ∫_{R²} [ ½|(∇ − iΩ×r)u|² + (1/4ε²)( V|u|² + |u|⁴ ) − (Ω²r²/2)|u|² ],   (5)

with r := |r|. Critical points of E^GP[u] are solutions of the associated Euler-Lagrange equation, called the Gross-Pitaevskii equation,

−½Δu + (1/4ε²)(V + 2|u|²)u − iΩ·(∇u × r) = µ^GP u,   (6)

where the GP chemical potential µ^GP is fixed by the normalization. Denoting a minimizer of (3) as u_ε, the Gross-Pitaevskii energy is given by E^GP := E^GP[u_ε]. The corresponding amplitude squared |u_ε|² will be referred to as the Gross-Pitaevskii density. Inserting u = |u|e^{iS_u} into (6) results in hydrodynamic-like relations (7) for the density and the velocity, in particular the continuity equation ∇·[|u|²(∇S_u − Ω×r)] = 0. The functional (8), i.e. (3) for Ω = 0, describes the gas without rotation, with f a real, positive function:

E^GP_0[f] = ∫_{R²} [ ½|∇f|² + (1/4ε²)( Vf² + f⁴ ) ].   (8)

The minimizer of (8) will be denoted as f_ε. The normalization condition ∫_{R²} f² = 1 fixes the associated chemical potential ν^GP, which is given by

ν^GP = E^GP_0[f_ε] + (1/4ε²)∫_{R²} f_ε⁴   (9)

and which is of the order 1/ε². The functional (8) tends for ε → 0 to a Thomas-Fermi type functional, which is a functional of the density ρ = f² alone,

E^TF[ρ] = (1/4ε²)∫_{R²} ( Vρ + ρ² ).   (10)

It can be shown (see e.g. Ref. [22]) that it has a unique positive minimizer, the Thomas-Fermi density,

ρ^TF(r) = ½[ µ − V(r) ]₊,   (11)

where [·]₊ denotes the positive part and µ := 4ε²µ^TF. The TF chemical potential µ^TF (or µ, respectively) is determined by the normalization

∫_D ρ^TF = 1,   (12)

where D = {(x, y) ∈ R² : ρ^TF > 0} is the Thomas-Fermi domain, whose shape depends on the external potential V. Moreover, µ^TF is of the order 1/ε², whereas µ := 4ε²µ^TF is of the order of a constant independent of ε.

Splitting of the GP energy functional

In the TF regime where ε is small, vortex cores are small compared to the characteristic length scale of the system, producing narrow 'holes' which effectively shrink as ε → 0. It is argued in Section 4 that vortices appear at a critical angular velocity of the order Ω ≃ C|ln ε|, with C a positive constant (independent of ε) depending on the external trap. Explicit expressions for C will be determined in the forthcoming analysis (see also Refs. [4] and [8] for the harmonic trap case).
In the minimization of (8), i.e. (3) with Ω = 0, one considers all functions in the subspace of angular momentum zero, and the density profile is given by f_ε². Considering (3) with Ω > 0, we will see that, as long as Ω ≤ C|ln ε| asymptotically, the overall density can still be described by the vortex-free density f_ε² in good approximation. However, in a non-isotropic potential V there appears a phase S (depending on V), i.e. the vortex-free function is then more generally f e^{iS}. Since this function has no vortex, the phase S is non-singular, and (6) yields an amplitude equation (13) for f together with

∇·[ f²(∇S − Ω×r) ] = 0,   (14)

where ν̃^GP is the associated chemical potential. A solution without vortex is a minimizer of the problem min{ E^GP[f e^{iS}] : f e^{iS} ∈ H¹ with f > 0, ∫f² = 1 } (see also [18]), with

E^GP[f e^{iS}] = ∫_{R²} [ ½|∇f|² + ½f²|∇S − Ω×r|² − (Ω²r²/2)f² + (1/4ε²)( Vf² + f⁴ ) ].   (15)

Later in this paper, we are going to consider external traps of the form

V(r) = (x² + λ²y²)^{s/2}   (16)

with slope s ∈ [2, ∞) and λ ∈ (0, 1] describing the anisotropy. It is a fairly general potential which also includes the important special cases of the harmonic (s = 2) and flat (s → ∞) trap, both of which are extensively used in experiments. The phase corresponding to this potential is

S = Ω ((λ² − 1)/(λ² + 1)) xy,   (17)

which vanishes for the isotropic case λ = 1. This expression for S was also deduced for the harmonic trap in Refs. [4] and [8]. Note, however, that it does not depend on the slope parameter s. We also see that the terms in (15) involving ∇S are at most of the order |ln ε|² for Ω ≤ |ln ε|, and hence of much lower order than the remaining part described by (8), which is ∼ 1/ε².

We now decompose the order parameter u of (3) into the vortex-free part f e^{iS} and a part which carries the vorticity. A similar splitting can be found in Refs. [4,5,8,20] and more recently in Refs. [6,14,37]. Writing u = |u|e^{iS_u} = f e^{iS} v = f|v| e^{i(S+S_v)} with |u| = f|v| and S_u = S + S_v, the contribution v = |v|e^{iS_v} accounts for the presence of vortices. In a vortex point, the amplitude vanishes, i.e. |u| = |v| = 0 since f ≠ 0, and the phase fulfills the usual circulation condition,

∮_C ∇S_u · τ = ∮_C ∇S_v · τ = 2πd,

which is a quantization condition because u (resp. v) is a complex field; here S has no singularity and τ is a unit tangent vector to the curve C encircling the vortex with winding number d. Without the presence of vortices, we would have u = f e^{iS} with density |u|² = f², and the phase S_u would be non-singular.

Inserting the decomposition u = f e^{iS} v into the energy functional (3) results in the following splitting (see also [4,5]), where all integrals are over R². The kinetic, interaction and rotation terms are expanded separately; putting them together and separating the vortex-free part of the energy (15), one arrives at an intermediate expression (18). The third term of (18) is rewritten using (13) for Δf; for the fifth term we use the identity for (iv, ∇v), where (u, v) := (uv* + u*v)/2. Inserting these into (18), we obtain the following splitting of the functional in (3),

E^GP[f e^{iS} v] = E^GP[f e^{iS}] + G_f[v] + R_f[v],   (19)

where we used ν̃^GP ∫ f²(|v|² − 1) = 0 because of the normalization conditions, and the last term in (18) was written in a more convenient form. The terms apart from the vortex-free energy in (19) describe the contribution of the vorticity to the energy. The second term,

G_f[v] = ∫ [ (f²/2)|∇v|² + (f⁴/4ε²)(1 − |v|²)² ],

looks formally like a 'weighted' Ginzburg-Landau (GL) energy functional without magnetic field and will accordingly be called the GL-type energy in the forthcoming, and R_f[v] is the rotation energy.
Using the splitting (19), vortices of u (if present) are vortices of v, and they are described via the functionals G_f[v] and R_f[v].

Main result

We have the following main result. Let u_ε be a minimizer of (3) and f_ε a minimizer of (15) for V in (16) and S in (17), under the normalization constraints. Let C and δ be positive constants independent of ε with 0 < δ ≪ 1, and let o(1) denote a quantity which goes to zero as ε → 0. For some integer n ≥ 1 and with

Ω_n := Ω_1 + C₁(n − 1) ln|ln ε|   (20)

(with Ω_1 and C₁ from (64) below), we have the following results:

i) If Ω ≤ Ω_1 − C₁δ ln|ln ε| and ε is sufficiently small, then u_ε has no vortices in D \ ∂D, and the Gross-Pitaevskii energy is given by (21).

ii) If Ω_n + C₁δ ln|ln ε| ≤ Ω ≤ Ω_{n+1} − C₁δ ln|ln ε| for n ≥ 1, then, for ε sufficiently small, u_ε has n vortices with winding number one located in r_i, i = 1, ..., n, and the Gross-Pitaevskii energy is given by (22).

The proof is split into several estimates which are shown in the following sections. There, positive constants are denoted by C (sometimes carrying primes), and they may change from line to line.

The leading order in energy and density

In this section, we show the leading asymptotics for the GP energy and density. We will see, in particular, that it is not affected by vortices, whose influence can only be seen at the next lower order. The leading term in the energy comes from the TF contribution in (10), which is ∼ 1/ε², whereas vortices contribute at the order Ω ∼ |ln ε| (see also Section 4). However, the determination of the precise expressions in (22) requires a more detailed analysis, which is carried out in Sections 5-8. For the following estimates, we introduce the function

b(r) := ½( µ − V(r) ),   (23)

whose positive part is the TF density, i.e. [b(r)]₊ = ρ^TF.

Estimate 1: Let u_ε be a minimizer of (3) and let Ω ≤ C|ln ε| asymptotically, where C_V depends on the parameters of the external potential V and C > C_V. Then, for ε sufficiently small, the GP energy and density are given to leading order by the corresponding Thomas-Fermi quantities (24).

Proof: This can be shown similarly to Prop. 2.3 in [9]. The lower bound can be trivially obtained by neglecting the first positive term in (5); the upper bound can be obtained by using the vortex-free state as a trial function. Concerning the density asymptotics, we estimate the following. Using the negativity of b(r) outside the TF domain, we obtain a first bound; on the other hand, we deduce a complementary bound of order Cε²|ln ε|, where the last inequality follows from (24). Thus, the GP density approaches the TF density for ε → 0, showing Estimate 1. (A similar result is also true for higher angular velocities, as is shown in [9].) In the same way, we see a further result which is used to show Estimate 3 below.

Now we return to the non-rotating ground state described by (8). We have the following point-wise estimate for f_ε within the TF domain.

Estimate 2: Let f_ε be a minimizer of (8) under the normalization constraint. It is the unique positive solution of the associated Euler-Lagrange equation with the chemical potential ν^GP in (9). If ε is sufficiently small, then

|f_ε²(r) − ρ^TF(r)| = o(1)   (27)

within a region almost as large as the Thomas-Fermi domain. That is, we may replace the vortex-free density f_ε² by the Thomas-Fermi density ρ^TF within a region almost as large as the Thomas-Fermi domain, making only an error of order o(1).

It is intuitively clear that only the vortex-free density f_ε², and not the 'full' GP density |u_ε|², can satisfy a pointwise estimate as above: u_ε may have vortices, whereas ρ^TF carries no vorticity at all. However, what can be shown is the following.

Estimate 3: In the TF regime, where ε → 0, |u_ε|² is exponentially small outside the TF domain (see also Refs. [18] and [9]); cf. b(r) in (23).

Proof: Using (6), we obtain a differential inequality for |u_ε|²; from (7) and Estimate 1 it follows that |u_ε|² is subharmonic in Θ_ε for ε sufficiently small. That means the mean value inequality holds for |u_ε|² in Θ_ε. On the other hand, one can verify that there is a function ũ which is a supersolution of (30). Therefore 0 ≤ |u_ε(r)|² ≤ ũ for r ∈ Θ_ε.
So, since |u_ε|² is exponentially small in ε outside of the TF domain D, the above energy splitting (19) can now be put into the form

E^GP[u_ε] = E^GP[f_ε e^{iS}] + G_{f_ε}[v_ε] + R_{f_ε}[v_ε] + o(1),   (31)

where v_ε = u_ε/(f_ε e^{iS}) and o(1) → 0 as ε → 0. Thus, in the following it suffices to restrict our considerations to the Thomas-Fermi domain D.

Vorticity in the Thomas-Fermi regime

We know from experiments that angular momentum is quantized in the form of vortices when the gas is subjected to an external rotation. Hence we may approximate the vorticity field by N_v isolated point vortices. However, it is a difficult task in general to prove the validity of this approximation rigorously from more basic properties. It has been shown in the work of [19] for BECs in harmonic anisotropic traps that the vorticity is indeed concentrated in a finite (independent of ε) number of vortex cores, if one assumes that the angular velocity is bounded by Ω ≤ C|ln ε| asymptotically. This was achieved by using a number of technical vortex core constructions. We will not generalize these methods to the more general traps considered here; instead, we argue by physical reasoning how the number of vortices scales with Ω(ε).

In experiments, Ω and ε are independent parameters. Usually, the interaction between the particles is tuned and afterwards Ω is increased (independently of ε) beyond the critical value. So in principle one could study the whole parameter domain spanned by Ω and (here) positive ε. However, we restrict ourselves to the TF regime, where the scaled angular velocity Ω = Ω(ε) depends on ε in such a way that Ω → ∞ as ε → 0; hence we cover only a fraction of the possible parameter domain. Which dependence of Ω(ε) may occur in the TF regime? There are essentially three regimes in Ω for non-harmonic traps where interesting effects appear (see [9]), namely Ω ∼ |ln ε|, Ω ∼ 1/ε, and Ω ≫ 1/ε (the first regime also applies to harmonic traps). One may ask for a connection between different vortex core sizes, the magnitude of Ω(ε), and the kind of defects appearing in the condensate. For Ω ∼ |ln ε|, one may deduce similar estimates for the vortex energy using core sizes of the order σ = κε or σ = ε^α with constants κ, α > 0, where the choice is fixed by technical reasons. However, in the fast rotating regimes, the size of the defects seems to be much more restrictive. As is shown in [9], for Ω ∼ 1/ε there appears a 'hole' around the origin and the core size of the vortices itself is of the order √ε. For even larger velocities Ω ≫ 1/ε, the condensate is expelled to a small layer at the boundary, and there remains a 'giant vortex' state filling out almost all of the condensate. (Footnote 1: One may argue that vortices with larger core radii, say e.g. σ ∼ 1/|ln ε|, could in principle exist at lower angular velocities of the order Ω ∼ ln|ln ε|. However, the characteristic length over which perturbations of the condensate wave function are smoothed out is given by the healing length ξ, or ε respectively. Hence we expect the cores to be of the order ε; larger cores are not stable in the setting described here. Moreover, for angular velocities of the order |ln ε| we may also not expect the appearance of pathological cases like non-isolated vortices forming dense 1-dimensional structures, because they would have a much higher energy than would be favourable at this order of Ω. It seems that the underlying equations are too regular to support such kinds of defects even at much higher angular velocities.)
In the regime of large vorticity, one may also consider a kind of correspondence principle for a large number of vortices, argued by Feynman [13] in the context of rotating superfluid ⁴He: a dense lattice of uniformly distributed vortices should 'mimic' solid-body rotation on average, although the flow is strictly irrotational away from the vortex cores. The circulation around a closed contour C which encloses a large number of vortices N_v is Γ = ∮_C ∇S_u·τ = 2πdN_v for vortices with winding number d. On the other hand, if the vortex lattice mimics solid-body rotation, then Γ = 2ΩA, where A is the area enclosed by the contour C. In this approximation, the vortex density per area is n_v = N_v/A = Ω/(πd), and the area per vortex is 1/n_v = πd/Ω, which thus decreases with increasing Ω. The crucial ingredient in this argument is the assumption of a uniform distribution of vortices. But this is justified only if the number of vortices is very large, i.e. if Ω is very large, which means for the TF regime that, indeed, N_v, Γ and Ω have to increase as ε → 0: uniform distribution means that A is finitely large, i.e. bounded from below by a positive constant (independent of ε). Actually, A is the whole condensate domain and the contour C is the boundary of that domain. So, from Γ = 2ΩA we see that Γ ≃ Ω, i.e. the circulation is of the same order as the angular velocity if the vortices are distributed uniformly. On the other hand, if N_v and Γ respectively can be bounded from above by a finite constant (independent of ε), then the vortices cannot be distributed uniformly; instead, they form a polygonal lattice (see e.g. the pictures in [28]). So the above argument concerning solid-body rotation gives only an upper bound for Γ. However, this bound is still quite good in experimental realizations, as is demonstrated in [10].

In order to estimate roughly the order of magnitude of Ω for a finite number of vortices to appear, one may calculate the GP energy of a single vortex. This has been done most often in the approximation of a homogeneous system (see e.g. Refs. [30,31]) or for a condensate in harmonic traps (see e.g. Refs. [26,4]). In any case, the leading contribution comes from the angular kinetic energy. This can already be seen heuristically by considering a vortex of circulation Γ = 2πd and core radius σ ∼ ε, located at the origin of a flat trap with radius R. Writing the vortex in the form v(r, θ) = ϱ(r)e^{iθd} with ϱ(r) ∼ r^d if 0 ≤ r ≤ εR and ϱ(r) ∼ R^{-1} if εR ≤ r ≤ R, the kinetic energy is then ∫|∇v|² ∼ R^{-2}(d²|ln ε| + C), whereas the rotation term gives ∫ −iv* Ω(ε)·(∇v × r) = −dΩ(ε). Thus one may expect a vortex of winding number d to appear when

d²|ln ε| + C ≤ dΩ(ε),

where the constant C is fixed by the external potential accordingly. Hence one vortex, or a finite number of them, is favourable to exist if the angular velocity is of the order Ω(ε) ≃ C|ln ε|. Furthermore, from Γ < 2ΩA ≤ C we get A ≤ C/Ω, that is, the vortices are enclosed within a disc centered at the origin having a radius of the order

1/√Ω.   (32)

So, with regard to the above discussion, we model the vorticity in terms of a finite number of vortices within the TF domain, denoting their positions as r_i = (x_i, y_i) ∈ D \ ∂D, i = 1, ..., n, n ∈ N. The vortex cores are modelled as non-overlapping discs B_i = B(r_i, σ) with core radius σ ∼ ε, all contained within D, i.e. B̄_i ⊂ D \ ∂D (33), assuming that |r_i − r_j| > 2σ. Otherwise, their energy would surpass the order of |ln ε| and would hence not be favourable for the angular velocities considered here.
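To make the heuristic balance above explicit, the following is a minimal numerical sketch assuming the schematic forms d²|ln ε| + C for the kinetic cost and dΩ for the rotational gain; the value of the constant C is a placeholder for the trap-dependent constant mentioned in the text, not a derived quantity.

```python
import math

def critical_omega(d, eps, C=1.0):
    """Angular velocity at which a vortex of winding number d becomes
    favourable, from the schematic balance d^2*|ln eps| + C <= d*Omega."""
    return (d**2 * abs(math.log(eps)) + C) / d

eps = 1e-3
for d in (1, 2, 3):
    print(d, critical_omega(d, eps))
# critical_omega grows like d*|ln eps|, so the singly quantized vortex
# (d = 1) is always the first to appear as Omega is increased.
```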
In a vortex point r_i, the condensate wave function vanishes, |u|(r_i) = |v|(r_i) = 0 for all i, and the circulation condition can be written as

∮_{∂B_i} ∇S_v · τ = 2πd_i,   (34)

where τ is the unit tangent vector to ∂B_i and d_i is the degree of the vortex in r_i. The domain outside the cores is denoted as

D̃ = D \ ∪_i B̄_i.   (35)

In that region, |u| → f holds, i.e. |v| → 1, and we may thus approximate

|u| = f(1 − o(1)) and |v| = 1 − o(1) in D̃,   (36)

where o(1) goes to zero for σ → 0 (i.e. ε → 0). The detailed form of the error in o(1) depends on the steepness of the radial falloff of the vortex core profile. For the core radii we are going to use, namely σ = ε^α, α > 0, the error due to the core profile is negligible within the orders considered.

Lower bound for the Ginzburg-Landau-type energy G_f[v]

In this section, we consider the functional

G_f[v] = ∫ [ (f²/2)|∇v|² + (f⁴/4ε²)(1 − |v|²)² ],   (37)

which is part of the energy splitting (31). Because of (27), we can replace f_ε² by ρ^TF in (37), and the error is of the order o(1). Using the polar decomposition v_ε = |v_ε|e^{iS_{v_ε}}, (37) is equivalent to

∫ [ (f²/2)( |∇|v||² + |v|²|∇S_v|² ) + (f⁴/4ε²)(1 − |v|²)² ].   (38)

Minimizing this at fixed ρ^TF yields the superfluid velocity V = ∇S_{v_ε} (39). We have the following estimate.

Estimate 4: Let f_ε be a minimizer of (15), u_ε a minimizer of (3), and v_ε = u_ε/(f_ε e^{iS}). Let σ = Cε^α with constants C, α > 0, and let v_ε satisfy (33)-(36) in the presence of vortices in r_i having winding numbers d_i, i = 1, ..., n. Then, for ε sufficiently small and Ω ≤ C|ln ε| asymptotically, the GL-type energy can be bounded from below by

G_f[v_ε] ≥ π Σ_i d_i² ρ^TF(r_i) |ln σ| + π Σ_i d_i² ρ^TF(r_i) ln(σ/ε) + W(r_1, ..., r_n) − C.   (40)

Proof: First we estimate G_f[v_ε] in the vortex-free domain, where |v_ε| = 1 − o(1). Then (38) reduces, up to an error of order o(1), to

∫_{D̃} (ρ^TF/2) |∇S_{v_ε}|²,   (41)

and we use the relation ∇S_{v_ε} = V, where V is the (linear) superfluid velocity of the condensate (see (39)). Indeed, it is shown in Ref. [23] that BECs are 100% superfluid in their ground state. Minimizing the functional with respect to V gives (42). Because of the circulation condition (34), we have (43), denoting the oriented surface element as do. Since the cores B_i are small and the TF density ρ^TF is smooth in D \ ∂D, ρ^TF is nearly constant within them, and we may approximate it by its value at the core centers (44). This allows us to define a stream function ψ_ε, dual to the phase S_{v_ε}, so that ∇S_{v_ε} = ∇ × ψ_ε; using this form for V together with (43) and (44), the stream function becomes a superposition of logarithmic potentials centered at the vortex positions (45). Now we calculate the integral (46), where we used Stokes' theorem, (42), and ρ^TF = 0 on ∂D. Using again the fact that the cores B_i are small and ρ^TF is smooth in D \ ∂D, we replace ρ^TF by its value in the core center ρ^TF(r_i), and the error is of the order o(1). Inserting ψ_ε from (45), we get (47). The first term on the r.h.s. describes the 'diagonal part' (i = j) and the second one the 'non-diagonal part' (i ≠ j). Using ln|r − r_i| = ln σ and |r − r_j| = |r_i − r_j| + o(1) for r ∈ ∂B_i, there simply remains in each case the circulation condition, which gives 2πd_i. So we have

∫_{D̃} (ρ^TF/2)|∇S_{v_ε}|² ≥ π Σ_i d_i² ρ^TF(r_i)(−ln σ) + W(r_1, ..., r_n).   (48)

Since σ ≪ 1, −ln σ can be replaced by |ln σ|, and we recover the first and third terms in (40). In this result, we see the familiar energy dependence on the winding number squared, d², and the logarithmic divergence due to the vortex cores, |ln σ| (see also the approach in Ref. [26] for the harmonic trap case). If more than one vortex is present, their interaction energy is modelled in (48) by

W(r_1, ..., r_n) = −π Σ_{i≠j} d_i d_j ρ^TF(r_i) ln|r_i − r_j|.   (49)

It has the form of a Coulombian interaction in 2-dimensional systems, where vortices with the same sign of the winding number repel each other and vortices with opposite signs attract each other.
In Ref. [7], the analogue to this function is called the renormalized energy, because it remains after the core energy, which is the leading order, is separated: W < |ln σ| as long as |r_i − r_j| > 2σ. We also see that W is bounded from below by a constant if all winding numbers have the same sign. It remains to find a lower bound of G_f[v_ε] in the vortex cores B_i (50), where we used again the fact that the TF density ρ^TF varies only by o(1) in the small discs B_i. The integral over B_i can be estimated as follows, with polar coordinates (r, φ) on the annulus B_i \ B_ε, B_ε a disc with radius ε centered at r_i, and the polar decomposition v(r, φ) = |v|(r)e^{id_iφ} for a vortex with winding number d_i in the disc B_i with radius σ. Using ∮ |∂v/∂φ| ≥ 2π|d_i| and the Cauchy-Schwarz inequality, we get

∫_{B_i \ B_ε} |∇v|² ≥ 2π d_i² ln(σ/ε).   (51)

Combining (47), (48), (50) and (51), we complete the proof of (40).

The rotation energy R_f[v]

The estimate for the rotation term in (31) proceeds similarly to [35]. We consider

R_f[v] = ∫ f² (∇S − Ω×r)·(iv, ∇v).   (52)

Because of (27), we replace f_ε² by ρ^TF in (52), and the error is of the order o(1). Then we get from (14)

∇·[ ρ^TF (∇S − Ω×r) ] = 0 in D̃,   (53)

from which we see that there is a real function χ(x, y) satisfying (54). We impose the accompanying boundary condition χ = 0 on ∂D. To determine the auxiliary function χ, we use (54) and rewrite it as (55). We have the following estimate for the rotation term.

Estimate 5: Let f_ε be a minimizer of (15), u_ε a minimizer of (3), v_ε = u_ε/(f_ε e^{iS}), and χ the solution of (55). Let σ = Cε^α with constants C, α > 0, and let v_ε satisfy (33)-(36) in the presence of vortices in r_i having winding numbers d_i, i = 1, ..., n. Then, for ε sufficiently small and Ω ≤ C|ln ε| asymptotically, the rotation energy is

R_f[v_ε] = −2πΩ Σ_i d_i χ(r_i) + o(1).   (56)

Proof: The contribution in the vortex cores becomes small (57), where we used that |∫_{B_i}|∇v_ε|²| ≤ C|ln ε|, which will be shown in (59). For estimating the rotation term outside of the vortex discs, we use the stream-function representation again, together with (54), ∇⊥·∇S_v = 0, χ = 0 on ∂D, and (34). Furthermore, since the cores B_i are small and χ is smooth in D \ ∂D, we replace χ by its value in the core center χ(r_i), and the error is of the order o(1). We thus arrive at (56).

The expressions (40) and (56) for the vortex contributions suggest that the lowest vortex energy is attained for vortices with winding number d_i = 1 for all i, which will be explicitly shown in Section 8.2. In estimating (40), we used ∇ × V = 2πd_i δ(r − r_i) in the vortex core B_i, which is valid for any vortex core radius σ. On the other hand, due to the characteristic scale ε, the core is not much larger than σ ∼ ε^α, α > 0. Then the gradient term of G_f[v] outside the core dominates over the contribution of the core itself. Concerning the interaction energy, one could first ask which terms of (19) contribute to the interaction between vortices. We have just seen that the rotation energy outside of the vortex cores is of the 'diagonal' form given in (57), whereas it is of order o(1) in the cores. So the interaction must be modelled by the GL-type energy G_f[v]. In addition, the interaction is only relevant in the domain outside the cores. There we have |v| ≃ 1, and only the gradient term of G_f[v] plays a significant role. In particular, the form of the core and interaction energy in (40) was deduced by minimizing (41) with respect to ∇S_{v_ε} = V. The core energy dominates as long as σ ≪ |r_i − r_j| for all i ≠ j, i.e. as long as the vortex core size is much smaller than the distance between the vortices.
This is amply satisfied in the regime ε → 0 and Ω ≃ C|ln ε|, since then σ ≃ Cε^α, α > 0, whereas |r_i − r_j| ≥ C/√|ln ε|, which will be shown in Section 8.3.

Remark: Our analysis may be compared with the works of [4] and [18]. We start from the original energy functional (1) and rescale it to arrive at (2) and (3), respectively. In [18,19], a functional is used, motivated in [4] and justified by the normalization condition, which from the very beginning looks like a GL-type functional. The additional term so encountered is 'thrown away' because it does not contribute to the vortex energy. But one has to keep in mind that the true leading order is 1/ε², which can be explicitly seen in (3). In the estimate of the lower bound of (37), we consider equations (41) and (42). The behaviour of V in the discs is derived from the quantization condition (34) and is given in (45) in terms of the stream function. These ingredients are used in the estimate of (46). Instead, the methods of Ref. [7] are adapted in [19] to the functional studied there by considering a 'linear problem' as in [7]. By introducing a suitable function, this eventually gives the interaction energy between vortices. Concerning the forthcoming estimates, we will benefit from inequality (62); the corresponding equality for s = 2 is used in [4,18]. In [18,19], it is assumed that Ω ≤ C|ln ε| asymptotically; otherwise, there is no a priori assumption on the fine structure of the vorticity. We have argued in Section 4 that the assumption of a non-zero circulation bounded from above by a natural number independent of ε leads to an angular velocity of the order Ω ∼ |ln ε|. So our assumptions on the vortex fine structure, concerning the number and size of vortex cores, are in fact compatible with this order of Ω.

The energy without vortex is always larger than or equal to E^GP[u_ε], and it can be used as a trial function for the whole energy (see also the proof of Estimate 1): E^GP[u_ε] ≤ E^GP[f_ε e^{iS}]. A more precise upper bound is obtained as follows.

Estimate 6: Let f_ε be a minimizer of (15), u_ε a minimizer of (3), and v_ε = u_ε/(f_ε e^{iS}). Then, for ε sufficiently small and Ω ≤ C|ln ε| asymptotically, the vortex energy can be bounded from above by (58), where d_i ≥ 1 for all i, W(r_1, ..., r_k) is given by (49), and i, j = 1, ..., k.

Proof: We fix k ≥ 1, k ∈ N, vortex positions r_1, ..., r_k in D, each the center of a disc with fixed radius R > 0, small enough that the discs are completely contained in D and do not overlap, i.e. B̄(r_i, R) ⊂ D and B̄(r_i, R) ∩ B̄(r_j, R) = ∅ for all i ≠ j. We use the trial function v̂ = |v̂|e^{iŜ_v} with |v̂| = 1 in D̃ and an appropriate vortex profile in the discs B_i(r_i, R). Since |∇v̂|² = (∇Ŝ_v)² in D̃ and the phase is smooth and bounded outside the discs, which have finite size, the contribution from D̃ is controlled. In the discs, however, one has ∫_{B_i}|∇v̂|² ≤ C|ln ε| (59), and the other contributions are at most of the order of a constant. With the above trial function, the rotation energy in D̃ is the same as in (56), apart from the sum running from i = 1 to k. The contribution inside the vortex discs is simply

≤ kµΩ ( π(R² − ε²) + 3πε²/2 )^{1/2} ( 2π|ln ε| + 2π ln R + 4π )^{1/2} ≤ Cε|ln ε|^{3/2}.

Then, to recover (58), we finally use the fact that W(r_1, ..., r_k) can be bounded from below by a constant.

The anisotropic homogeneous trap

In the above estimates (24), (40), (56) and (58), the external trap potential V enters via the Thomas-Fermi density ρ^TF, which has not been specified until now. These estimates are valid as long as V satisfies (asymptotical) homogeneity (see also the remark at the end of this section). As an application, we now consider the potential in (16). The associated TF density is

ρ^TF = ½[ µ − (x² + λ²y²)^{s/2} ]₊.   (60)

From (12) we have µ = ( (s+2)/s · 2λ/π )^{s/(s+2)}. For s → ∞, µ → 2λ/π; hence µ is always smaller than one.
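As a quick numerical cross-check of the normalization (12) for this trap, the following sketch fixes µ by bisection so that ρ^TF in (60) integrates to one, and compares it with the closed form just stated; the grid resolution and bisection bracket are arbitrary choices for the example.

```python
import numpy as np

def mu_numeric(s, lam, n=4000):
    """Normalize rho_TF = 0.5*[mu - (x^2 + lam^2 y^2)^(s/2)]_+ numerically
    and solve for mu by bisection (substituting y' = lam*y reduces the
    ellipse integral to a radial one)."""
    def mass(mu):
        R = mu ** (1.0 / s)              # radius of the support
        dr = R / n
        r = (np.arange(n) + 0.5) * dr    # midpoint rule
        return float(np.sum(0.5 * (mu - r**s) * 2 * np.pi * r / lam) * dr)
    lo, hi = 1e-6, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mass(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

s, lam = 4, 0.8
closed = ((s + 2) / s * 2 * lam / np.pi) ** (s / (s + 2))
print(mu_numeric(s, lam), closed)   # the two values agree
```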
The auxiliary function χ is determined from (55) as (61). It can be estimated from above in terms of the TF density ρ^TF by inequality (62), where strict equality only holds for the harmonic trap s = 2! This upper bound will be very useful in Section 8.2, where the winding numbers of the vortices are derived. The phase S can be determined by inserting (61) and (60) into (54), and is already given in (17).

The energy with one vortex

The upper bound of the energy using (31) and (58) is (63), evaluated for a trial function having one vortex with winding number d = 1 at the origin. The energy E^GP[u_ε] will be smaller than the vortex-free energy E^GP[f_ε e^{iS}] if the r.h.s. of (63) is smaller than or equal to E^GP[f_ε e^{iS}] − o(1). Equivalently, the angular velocity must fulfill Ω ≥ Ω_1, where

Ω_1 = C₁|ln ε|,  C₁ = (1 + λ²)(s + 2)/(2sµ^{2/s}).   (64)

(Equation (64) has to be multiplied by (16ε⁴)^{1/(s+2)} in order to obtain the unscaled angular velocity Ω̃_1, see (4).) So for Ω ≥ Ω_1 + C + o(1), minimizers of E^GP[u] will have vortices; in other terms, Ω_1 is the leading order in the angular velocity at which the solution with one vortex having d = 1 starts to be globally thermodynamically stable.

We may also see the following. Consider (x, y) ∈ D, let s > 2, and denote δE = π|ln ε| ρ^TF(x, y) − 2πΩ χ(x, y) + C.

Proof: We use σ = ε^α with 0 < α < 1 in (40) and (56), W ≥ C, and an upper bound for Ω of the form Ω ≤ C₁|ln ε| + C₂F(ε) (65), with C₁ from (64) and C₂ another positive constant. This upper bound for Ω is suggested by (64), and F(ε) is assumed to be of lower order than |ln ε|; in the next section we will see that F(ε) = ln|ln ε|. Here we used (62) and (ρ^TF)^{(s+2)/s} ≤ (µ/2)^{2/s} ρ^TF. We also note that the last inequality in (65) is valid only for C₁ from (64). So we see that (65) reduces, for ε sufficiently small, to the statement that any vortex with winding number d_i ≥ 2 raises the energy. Therefore, if the vortices are not located at the boundary of the Thomas-Fermi domain, where ρ^TF vanishes, there must be d_i = 1 for all i for sufficiently small ε.

The energy with n vortices

Since all vortices have winding number one, we see from the lower and upper bounds of E^GP[u_ε] in (40), (56) and (58) that they coincide in their orders (up to a constant). By applying the transformation r̃_i = √Ω r_i, the vortex interaction energy W(r_1, ..., r_n) in (49) can be decomposed as

W(r_1, ..., r_n) = (π/2) ln Ω Σ_{i≠j} ρ^TF(r_i) − π Σ_{i≠j} ρ^TF(r_i) ln|r̃_i − r̃_j|.

The first term contains the relevant order ln Ω, whereas the remainder is of the order of a constant. The rotation term (56) becomes −2πΩ Σ_i χ(r_i) + o(1), where in the last step the variable transformation was applied. The remaining part of the lower bound of the GL-type energy contributes only to lower order for small ε, using σ = ε^α. With u_ε and f_ε as above, we thus recover (22) for the Gross-Pitaevskii energy in the presence of n vortices, cf. (68), where we put all terms proportional to Ω^{−m}, m > 0, into o(1), since Ω ≤ C|ln ε| asymptotically.

A necessary condition for the minimizing configuration to have more than one vortex is

min_{u ∈ U_n} E^GP[u] ≤ min_{u ∈ U_1} E^GP[u],   (69)

where U_n is the set of functions with n vortices, each having winding number one, and U_1 is the set of functions with one vortex at the origin with winding number one. Using (69), we first deduce a rough estimate for the critical angular velocity Ω_n for n vortices to appear. To this end, we neglect in (68) the term coming from the interaction and take the energy with all vortices close to the origin, i.e. we approximate ρ^TF(r_i) ≈ µ/2 for all i, which is a more stringent condition on the l.h.s. of (69). Indeed, from (32) we expect the vortices to be near the origin.
Hence (69) yields

((1+λ²)/2) ((s+2)/(sµ^{2/s})) |ln ε| (n−1)/n + ((1+λ²)/2) ((s+2)/(sµ^{2/s})) ((n−1)/2) ln Ω_n + (1/n) Ω_1 ≤ Ω_n + C.

Using (64) and the fact that Ω_1 ≤ Ω_n for n ≥ 2, we have the estimate

Ω_1 + ((1+λ²)/2) ((s+2)/(sµ^{2/s})) ((n−1)/2) ln Ω_1 ≤ Ω_n + C,

which can be put into the form Ω_n ≥ Ω_1 + C₁((n−1)/2) ln|ln ε| − C, with C₁ and Ω_1 from (64). We thus see that the critical angular velocity has to be at least of the order C₁|ln ε| + C′ ln|ln ε| (we neglect the constant term). This order of magnitude for Ω is assumed in Ref. [19] from the outset, before the number of vortices is rigorously derived. Using now the ansatz

Ω = Ω_1 + C₁ (k − 1 + ν(ε)) ln|ln ε|   (70)

for an integer k ≥ 0 counting the number of vortices and 0 < δ ≪ 1 a fixed constant independent of ε, we see the following. Inserting this form of Ω into our energy estimate (68), we get the upper bound (71) and the lower bound (72), respectively. Considering the case k = 0 (no vortices), i.e. ν(ε) in (70) satisfies −1 + δ ≤ ν(ε) ≤ −δ, we see from (70) and (64) that Ω ≤ Ω_1 − C₁δ ln|ln ε|, which is case i) of the main result (21). Considering k = 1 (one vortex), i.e. δ ≤ ν(ε) ≤ 1 − δ, and comparing its energy with the lower bound of the vortex-free energy, there is at least one vortex for Ω ≥ Ω_1 + C₁δ ln|ln ε| (using (20)) and ε sufficiently small. Now we compare the lower and upper bounds of the energy if k > 1 is arbitrarily large: the upper bound is in (71), whereas the lower bound is in (72). Comparison of (71) and (72) gives (73). Assuming now that n ≤ k − 1, we obtain (74), which is a contradiction for ε sufficiently small, since δ is a fixed constant. On the other hand, assuming n ≥ k + 1, we again obtain a contradiction for ε sufficiently small. So we see that there are exactly n ≡ k vortices for ε sufficiently small, and from this follows the energy correction

µ n ν(ε) ln|ln ε| + (π/4) µ n(n−1) ln|ln ε| + (π/4) µ n(n−1) ln C₁ + C,

together with Ω_n + C₁δ ln|ln ε| ≤ Ω ≤ Ω_{n+1} − C₁δ ln|ln ε| by using (20). This completes the proof of the main result stated at the end of Section 2.

Special cases: For the harmonic trap s = 2, we have Ω_1 = ((1+λ²)/µ)|ln ε|. From (4), the unscaled angular velocity is then Ω̃_1 = 2ε Ω_1, which may be compared to the results in Refs. [4] and [8]. For the flat trap s → ∞, we have Ω_1 = ((1+λ²)/2)|ln ε|, and the same is true for Ω̃_n, since Ω̃_n → Ω_n for s → ∞. (Keep in mind that for the flat trap the scaled energy functional converges to the original one; see Section 2.) By comparing the first critical angular velocities, we see the following. For the flat trap, Ω̃_1 ∼ |ln ε|, resembling the corresponding velocity for the rotating bucket; this is no surprise, since the flat trap approximates the bucket. On the other hand, Ω̃_1 ∼ ε|ln ε| for the harmonic trap, which is much smaller. The ratio of the unscaled first critical angular velocities is thus of the order ε.

The vortex pattern

The minimization of the energy in (68) with respect to the coordinates r̃_i = (x̃_i, ỹ_i) determines the distribution of the vortices in the condensate, and therefore the pattern which appears for a given number n of vortices. The energy in (68) is minimal with respect to the coordinates if w(r̃_1, ..., r̃_n) is minimal. Setting ∇w = 0, we obtain the relations (75). Multiplying them by ỹ_i and −λ²x̃_i, respectively, and adding them gives a further relation (76), involving the term s (ln Ω/Ω^{s/2}) (1 − λ²) Σ_i x̃_i ỹ_i (x̃_i² + ỹ_i²)^{s/2−1}. The relations (75) and (76) are constraints for the (non-dimensionalized) vortex positions. They simplify considerably for the harmonic trap s = 2 (and only for this trap!). They were already deduced in Ref. [4], and we only state them for completeness:

Σ_i x̃_i = Σ_i ỹ_i = 0 and Σ_i (x̃_i² + ỹ_i²) = n(n − 1).

For the anisotropic case λ ≠ 1, the last relation leads to Σ_i x̃_i ỹ_i = 0. For n = 2 vortices, one already sees that x̃_1 = −x̃_2, and the same holds for the ỹ-coordinates. Similarly, one can proceed for n > 2 vortices (see Ref. [4] for a more detailed discussion).
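The pattern-selection mechanism can be illustrated numerically by minimizing a schematic reduced vortex energy with a confining single-vortex term and a logarithmic pair repulsion, in the spirit of w above; the quadratic confinement and the coefficient β are placeholders, not the coefficients derived in (75)-(76).

```python
import numpy as np
from scipy.optimize import minimize

def w(flat, n, lam=1.0, beta=0.5):
    """Schematic reduced vortex energy: confining term per vortex plus
    a repulsive 2D-Coulombian (logarithmic) pair interaction."""
    r = flat.reshape(n, 2)
    conf = np.sum(r[:, 0]**2 + lam**2 * r[:, 1]**2)
    rep = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rep += np.log(np.linalg.norm(r[i] - r[j]) + 1e-12)
    return conf - beta * rep

n = 3
rng = np.random.default_rng(0)
res = minimize(w, rng.normal(size=2 * n), args=(n,), method="BFGS")
print(res.x.reshape(n, 2))   # minimizers arrange on a small polygon
```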
However, for anharmonic traps with s > 2 the above relations are more complicated, but one may proceed in a similar way as for harmonic traps. What can be seen immediately is the fact that, as ε → 0, (75) and (76) reduce to

Σ_{i=1}^{n} (x̃_i² + ỹ_i²) = ((1+λ²)/4) (n(n−1)/2) + o(1) and Σ_i x̃_i ỹ_i = o(1),

where o(1) ∼ ln Ω/Ω^{s/2}. Remarkably, the distributions of vortices in anharmonic traps with s > 2 thus differ from each other only at this lower order.

Remark: The analysis in the foregoing sections holds generally for asymptotically homogeneous traps according to Def. 1.1 in Ref. [22], which reads as follows: V is asymptotically homogeneous of order s > 0 if there is a function U with U(r) ≠ 0 for r ≠ 0 such that

( γ^{−s} V(γr) − U(r) ) / ( 1 + |U(r)| ) → 0 as γ → ∞,

where the convergence is uniform in r. U is clearly uniquely determined and homogeneous of order s, i.e. U(γr) = γ^s U(r) for all γ ≥ 0. In the case that V itself is homogeneous, V ≡ U. But if V, for instance, is a harmonic-plus-quartic potential, then U contains only the quartic contribution. Consider for example the following trap,

V(x, y) = (x² + λ²y²)[ 1 + ζ(x² + λ²y²) ],   (77)

with ζ ∈ (0, 1) independent of ε describing the degree of anharmonicity. This trap is asymptotically homogeneous of order s = 4. But since V is not homogeneous, equation (3) is not exactly right. However, using the above definition of asymptotically homogeneous potentials, (3) can be used if V(r) is replaced by U(r) + o(1). So only the leading, i.e. the asymptotically homogeneous, contribution in the potential is 'visible'. In order to see the anharmonic contribution of (77) in our regime, one would have to introduce an additional scaling parameter, i.e. ζ would have to depend on ε (see for instance Ref. [2]).

Conclusions

In this paper, we studied the Gross-Pitaevskii (GP) energy and density for Bose-Einstein condensates confined in asymptotically homogeneous traps subjected to an external rotation, in the Thomas-Fermi (TF) limit where the coupling parameter goes to infinity. We derived by analytical estimates the leading order of the GP energy and density, which are given by the corresponding TF quantities, and the next orders due to vortices. In deriving the contributions of the vortices, we estimated the relation between the vortex core sizes and the considered magnitude of the angular velocity. As an example, we considered a very general anisotropic homogeneous potential, for which we calculated the critical angular velocities for a finite number of vortices together with the associated GP energy. We showed that all vortices inside the Thomas-Fermi domain are singly quantized and arranged in a polygonal lattice whose shape can be deduced explicitly from a few simple constraints satisfied by the vortex positions. The results may be used for comparison with experiments when the latter involve asymptotically homogeneous traps in the TF regime. In this paper, we considered the above trap for the sake of explicitness and because it incorporates the harmonic and flat traps, for which most experimental results are available, but any trap potential satisfying asymptotic homogeneity could be used.
Utterance Intent Classification of a Spoken Dialogue System with Efficiently Untied Recursive Autoencoders

Recursive autoencoders (RAEs) for compositionality of a vector space model were applied to utterance intent classification of a smartphone-based Japanese-language spoken dialogue system. Though the RAEs express a nonlinear operation on the vectors of child nodes, the operation is considered to be intrinsically different depending on the types of the child nodes. To relax this difference, a data-driven untying of autoencoders (AEs) is proposed. The experimental results of utterance intent classification showed an improved accuracy with the proposed method compared with the basic tied RAE and an untied RAE based on a manual rule.

Introduction

A spoken dialogue system needs to estimate the utterance intent correctly in spite of various oral expressions. A basic approach has been to classify the result of automatic speech recognition (ASR) of an utterance into one of multiple predefined intent classes, followed by slot filling specific to the estimated intent class. There have been active studies on word embedding techniques (Mikolov et al., 2013; Pennington et al., 2014), where a continuous real vector of relatively low dimension is estimated for every word from the distribution of word co-occurrence in a large-scale corpus, and on compositionality techniques (Mitchell and Lapata, 2010; Guevara, 2010), which estimate real vectors of phrases and clauses through arithmetic operations on the word embeddings. Among them, a series of compositionality models by Socher, such as recursive autoencoders (Socher et al., 2011), the matrix-vector model, which models the dependencies explicitly (Socher et al., 2012), the compositional vector grammar, which combines a probabilistic context-free grammar (PCFG) parser with compositional vectors (Socher et al., 2013a), and the neural tensor network (Socher et al., 2013b), have been gaining attention. These methods, which showed effectiveness in polarity estimation, sentiment distribution, and paraphrase detection, are also effective in the utterance intent classification task (Guo et al., 2014; Ravuri and Stolcke, 2015). The accuracy of intent classification should improve if the compositional vector captures richer relations between words and phrases compared to a thesaurus combined with a conventional bag-of-words model.

Japanese, an agglutinative language, has a relatively flexible word order, though it does have an underlying subject-object-verb (SOV) order. In colloquial expressions, the word order becomes even more flexible. In this paper, we applied the recursive autoencoder (RAE) to utterance intent classification for a smartphone-based Japanese-language spoken dialogue system. The original RAE uses a single tied autoencoder (AE) for all nodes in a tree. We applied multiple AEs that were untied depending on node types, because the operations must intrinsically differ depending on the node types of words and phrases. In terms of syntactic untying, the compositional vector grammar (Socher et al., 2013a) introduced syntactic untying. However, a syntactic parser is not easy to apply to colloquial Japanese expressions. Hence, to obtain an efficient untying of AEs, we propose a data-driven untying based on a regression tree. The regression tree is formed to reduce the total error of reconstructing child nodes with AEs.
We compare the accuracies of utterance intent classification among an RAE with a single tied AE, an RAE with AEs untied by a manually defined rule, and an RAE with AEs untied by the data-driven split method.

Spoken Dialog System on Smartphone

The target system is a smartphone-based Japanese-language spoken dialog application designed to encourage users to constantly use its speech interface. The application adopts gamification to promote the use of the interface. Variations of responses from an animated character are largely limited in the beginning, but variations and functionality are gradually released along with the use of the application. Major functions include weather forecast, schedule management, alarm setting, web search, and chatting. Most user utterances are short phrases and words, with a few sentences of complex content and nuance. The authors reviewed ASR log data of 139,000 utterances, redefined utterance intent classes, and assigned a class tag to every utterance of a part of the data. Specifically, three of the authors annotated the 3,000 most frequent variations in the ASR log, which correspond to 97,000 utterances, i.e. 70.0% of the total, redefined 169 utterance intent classes including an "others" class, and assigned a class tag to each of the 3,000 variations. Frequent utterance intent classes and their relative frequency distribution are listed in Table 1. A small number of major classes occupy more than half of the total number of utterances, while a large number of minor classes have small portions.

Classification based on RAE takes word embeddings as the leaves of a tree and applies an AE to neighboring node pairs in a bottom-up manner repeatedly to form the tree. The RAE obtains vectors of phrases and clauses at intermediate nodes, and that of the whole utterance at the top node of the tree. The classification is performed by a softmax layer, which takes the vectors of the words, phrases, clauses, and whole utterance as inputs and outputs an estimate of the classes. An AE applies a neural network with model parameters, weighting matrix W^(1), bias b^(1), and activation function f, to a vector pair of neighboring nodes x_i and x_j as child nodes, and obtains a composition vector y_(i,j) of the same dimension as a parent node:

y_(i,j) = f( W^(1) [x_i; x_j] + b^(1) ).   (1)

The AE applies another neural network for the inversion, which reproduces x_i and x_j from y_(i,j) as x'_i and x'_j as accurately as possible. The inversion is expressed as

[x'_i; x'_j] = f( W^(2) y_(i,j) + b^(2) ).   (2)

The error function is the reconstruction error

E_rec = || [x_i; x_j] − [x'_i; x'_j] ||².   (3)

Conceptually, the tree is formed in accordance with a syntactic parse tree, but in reality it is formed by greedy search minimizing the reconstruction error. Among all pairs of neighboring nodes at a time, the pair that produces the minimal reconstruction error E_rec is selected to form a parent node. Here, the AE applied to every node is a single common one, specifically a single set of model parameters W^(1), b^(1), W^(2), and b^(2). The set of model parameters of the tied RAE is trained to minimize the total of E_rec over all the training data. The softmax layer for intent classification takes the vectors of the nodes as inputs and outputs the posterior probabilities of K units,

d_k = exp(z_k) / Σ_{k'=1}^{K} exp(z_{k'}),   (4)

where z_k is the input to the k-th unit computed from the node vector by the softmax layer. The correct signal is a one-hot vector t. The error function is the cross-entropy error

E_ce = − Σ_{k=1}^{K} t_k log d_k.   (6)

Figure 1 lists the model parameters and error functions of the RAE.
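As a concrete illustration of equations (1)-(4) and (6) above, the following is a minimal NumPy sketch of one autoencoder composition step, its reconstruction error, and the softmax classification error; the embedding dimension, tanh activation, and random initialization are assumptions for the example, not choices reported in the paper.

```python
import numpy as np

d = 100                       # embedding dimension (assumed)
rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.01, size=(d, 2 * d)), np.zeros(d)      # encoder
W2, b2 = rng.normal(scale=0.01, size=(2 * d, d)), np.zeros(2 * d)  # decoder

def compose(xi, xj):
    """Eq. (1): parent vector from two child vectors."""
    return np.tanh(W1 @ np.concatenate([xi, xj]) + b1)

def reconstruction_error(xi, xj):
    """Eqs. (2)-(3): decode the parent and compare with the children."""
    y = compose(xi, xj)
    rec = np.tanh(W2 @ y + b2)                 # eq. (2): [x'_i; x'_j]
    return float(np.sum((np.concatenate([xi, xj]) - rec) ** 2)), y

def softmax(z):
    """Eq. (4): posterior probabilities of the K intent classes."""
    p = np.exp(z - z.max())
    return p / p.sum()

def cross_entropy(p, t):
    """Eq. (6): cross-entropy against a one-hot target t."""
    return -float(np.sum(t * np.log(p + 1e-12)))
```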
While the AE aims to obtain a condensed vector representation that best reproduces the two child nodes of neighboring words or phrases, the whole RAE aims to classify the utterance intent accurately. Accordingly, the total error function is set as a weighted sum of the two error functions,

E = α E_rec + (1 − α) E_ce.   (7)

The training of the RAE optimizes the model parameters according to the criterion of minimizing the total error function over all training data.

Rule-based Syntactic Untying of RAE

To relax the difference of the nonlinear operation depending on the types of nodes, we manually designed a rule to switch between two AEs depending on the types of the two child nodes. At the leaf level of a tree, most words are nouns, while a sentence or phrase is composed of a predicate with a subject, an object, or a complement. The vector operations between words and noun phrases, and those between phrases and clauses, are assumed to differ considerably. Hence, the manual rule switches between two AEs, one for words and noun phrases, and the other for phrases and clauses. Along a tree, the AE for words and noun phrases is applied at the lower nodes around the leaves, and the AE for phrases and clauses is applied at the upper nodes close to the root node.

Untied RAE

The node type is determined as follows. At the leaf nodes, every word of a sentence is given a part-of-speech tag as its node type by a Japanese morphological analyzer (Kudo et al., 2004). The number of tags is set at 10. At the upper nodes, the node type is determined by the combination of the node types of the two child nodes. A look-up table of node types is defined on the basis of Japanese grammar. Another look-up table, determining which AE to apply on the basis of the node type, is defined as well.

Data-driven Untying of RAE

To obtain a more effective untied RAE, we designed a training method that includes data-driven untying of the RAE. The method is based on sequentially splitting an AE with regression trees to reduce the total reconstruction error E_rec. Specifically, the method splits an AE into two on the basis of a regression tree with the response of the reconstruction error E_rec, and optimizes the model parameters of the split AEs alternately. Figure 2 shows the procedure:

1) Preparation: Attach part-of-speech tags to all morphemes of the training data.
2) Training a tied RAE of a single AE: Train a tied RAE with a single AE for all nodes.
3) Data collection for the split: Apply the RAE to the training data, and tally E_rec for each node type.
4) Selection of an AE to split: Select the AE with the maximum total E_rec.
5) Binary split for untying of the AE: Split the AE into two classes based on a regression tree with the response of E_rec.
6) Retraining of the untied RAE: Retrain the RAE. The softmax layer is kept single.

(Figure 2: Procedure for training an RAE of multiple AEs with data-driven untying.)

The procedure starts by giving a part-of-speech tag to every word of a sentence. While forming a tree, a unique node type is given according to the node types of the child nodes. To be precise, a new node type is given to an unseen combination of the node types of two child nodes, whereas the same node type is given when the combination of node types has been seen before. Initially, a single tied AE for all node types is trained. Applying the AE to all training data, the reconstruction error E_rec is tallied for each node type. Then, the class of all node types is split into two classes based on a CART regression tree (Breiman et al., 1984) with the response of E_rec. The predictor variables are the node types of the left and right child nodes. The AEs are retrained with L2 regularization after every binary split. Note that the softmax layer is kept single in order not to make the generated vector space completely different.
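The greedy tree construction with untied AEs can be sketched as follows; `build_tree`, `ae_for`, and the function pairs in `aes` are hypothetical names illustrating the look-up-table mechanism described above (an `aes` entry can be, e.g., the `(compose, reconstruction_error)` pair from the previous snippet), not the authors' implementation.

```python
def build_tree(vectors, node_types, ae_for, aes):
    """Greedy RAE tree construction: repeatedly merge the neighboring
    pair with the smallest reconstruction error. `ae_for` maps a
    (left_type, right_type) pair to an AE index (the untying table);
    `aes` is a list of (compose, reconstruction_error) function pairs."""
    nodes, types = list(vectors), list(node_types)
    while len(nodes) > 1:
        best = None
        for i in range(len(nodes) - 1):
            k = ae_for.get((types[i], types[i + 1]), 0)  # default: tied AE
            err, parent = aes[k][1](nodes[i], nodes[i + 1])
            if best is None or err < best[0]:
                best = (err, i, parent, (types[i], types[i + 1]))
        _, i, parent, pair = best
        nodes[i:i + 2] = [parent]
        types[i:i + 2] = [pair]   # the combined type labels the parent node
    return nodes[0]               # vector of the whole utterance
```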
Experimental Setup

An experiment on utterance intent classification was conducted with the annotated data described in Section 2. The number of classes was reduced to 65 by merging classes with few data points into a similar class or into the "others" class. Considering the balance of frequent and less-frequent utterances, the frequencies of the utterances were smoothed by applying a square-root function. The numbers of utterances in the training and test sets were 7,833 and 870, respectively. The ratio of unknown utterances in the test set was 15 percent.

Conditions of Experiments

Two types of word vectors, random word vectors and word2vec vectors, were compared as the minimal elements of a tree. A total of 1.08 million word2vec vectors were trained on Japanese Wikipedia texts of 1.1 billion words. The dimension of the vectors was fixed at 100. The word2vec vectors were trained using the skip-gram mode, on the basis of the results of preliminary experiments. Three types of RAE, that is, a single tied AE, two AEs untied by the manual rule, and multiple AEs untied by the data-driven split, as well as a baseline method using the cosine similarity of bag-of-words vectors, were evaluated.

Table 2 shows the precision, recall, and accuracy of the classification for the training and test sets. The baseline method (1) showed relatively high performance, because the test set, randomly chosen in consideration of the smoothed frequencies, contained many known utterances and words seen in the training set. The tied RAE based on word2vec vectors (3) showed significantly better performance than the tied RAE based on random word vectors (2). While the RAE of two AEs untied by the manual rule (4) yielded a slight improvement, the RAE of two AEs untied by the data-driven split (5) yielded a larger improvement. The resulting split was not simple, but, roughly speaking, one of the two AEs specialized in adding a modifier. However, the RAE of three AEs untied by the data-driven split (6) showed a drop in performance. We believe that this RAE was probably overfitted, given only thousands of training samples.

Conclusions

The RAE was applied to utterance intent classification for a smartphone-based Japanese-language spoken dialogue system. To improve the classification accuracy, we examined an RAE with multiple AEs untied by a manual rule and RAEs with multiple AEs untied by a data-driven split. Comparing the untied RAEs of two AEs between the manual rule and the data-driven split, the AEs untied by the data-driven split showed better accuracy. This means that splitting AEs based on a regression tree with the response of the reconstruction error is effective to some extent. Reducing the model parameters effectively to circumvent overfitting, and utterance intent classification with more variations of utterances, are future work.
Modeling HIV Pre-Exposure Prophylaxis

Pre-exposure prophylaxis (PrEP) has emerged as a promising strategy for preventing the transmission of HIV. Although only one formulation is currently approved for PrEP, research into both new compounds and new delivery systems for PrEP regimens offers intriguing challenges from the perspective of pharmacokinetic and pharmacodynamic modeling. This review aims to provide an overview of the current modeling landscape for HIV PrEP, focused on PK/PD and QSP models relating to antiretroviral agents. Both current PrEP treatments and new compounds that show promise as PrEP agents are highlighted, as well as models of uncommon administration routes, predictions based on models of mechanism of action and viral dynamics, and issues related to adherence to therapy.

The spread of human immunodeficiency virus (HIV) remains one of the foremost global health concerns. In the absence of a vaccine, other prophylactic strategies have been developed to prevent HIV transmission. One approach, known as pre-exposure prophylaxis (PrEP), allows HIV-negative individuals who are at high risk of exposure to the virus, be it through an HIV-positive sexual partner or through the shared use of drug injection equipment, to substantially reduce the risk of developing an HIV infection. PrEP is a relatively recent approach to combating the HIV epidemic, the only currently approved treatment being Truvada, a daily oral antiretroviral (ARV) therapy initially indicated for the treatment of active HIV-1 infections but approved for HIV PrEP in 2012. Although PrEP therapy has consistently demonstrated high efficacy in preventing HIV infection, this efficacy depends on patient adherence to the prescribed treatment regimen. This can present a significant problem in low- and middle-income countries, which may lack the infrastructure to provide sufficient access to PrEP medication to maintain daily dosing regimens. Furthermore, while the conventional approach has generally been to advocate continuous administration akin to the regimens used for viral suppression in infected patients, there has been some discussion of whether a better treatment paradigm might be to push for PrEP therapy primarily during known periods of heightened exposure risk, while relying on post-exposure prophylaxis regimens to prevent infection after unanticipated exposures during low-risk periods. These considerations have led to a push for the development of long-duration and on-demand PrEP formulations, including subdermal and subcutaneous implants, slow-release intramuscular depot injections, vaginal and rectal antimicrobial gels, and intravaginal rings and dissolving films.

PrEP therapy is a quickly evolving field, with a variety of antiretroviral compounds and formulations under investigation. This review aims to report on notable drugs and formulations from a pharmacokinetic/pharmacodynamic (PK/PD) modeling perspective. Given the nature of PrEP as a preventive therapy designed for long-term use, clinical trials for PrEP therapies can last for months or even years, particularly in the case of long-duration formulations. Furthermore, in contrast to antiretroviral trials in infected patients, pharmacodynamic endpoints in PrEP therapies are difficult to quantify, as the primary endpoint for efficacy is generally the rate of seroconversion.
Computational modeling approaches offer flexible and powerful tools to provide insight into drug behavior in clinical settings, and can ultimately reduce the time, expense, and patient burden incurred in the development of PrEP therapies.

Keywords: pharmacokinetics, pharmacodynamics, HIV, PrEP, Truvada, tenofovir, emtricitabine, maraviroc

CURRENT AND POTENTIAL PREP THERAPIES

Tenofovir Disoproxil

Tenofovir (TFV) is a nucleotide reverse transcriptase inhibitor (NRTI), a nucleoside phosphonate analogue of the endogenous nucleoside monophosphate (nucleotide) adenosine 5'-monophosphate, and was one of the first compounds identified as a potential candidate for HIV prophylaxis. A 1995 study demonstrated that subcutaneous injections of TFV could protect macaques from simian immunodeficiency virus (SIV) (Tsai et al., 1995; Kearney et al., 2004). Tenofovir disoproxil fumarate (TDF) is a prodrug of TFV and has been in use for HIV treatment in the US since 2001 (Chapman et al., 2003). Studies have demonstrated the efficacy of TDF, with and without emtricitabine (FTC), in preventing HIV infection in a variety of populations, including men who have sex with men (MSM), transgender women, heterosexual men and women, and people who inject drugs (Grant et al., 2010; Baeten et al., 2012; Thigpen et al., 2012; Choopanya et al., 2013). Two major studies were terminated due to a lack of efficacy; however, in both studies blood samples revealed that, despite high self-reported adherence rates among patients in the treatment arms, actual adherence rates were low, with the fraction of patients with detectable plasma levels of drug ranging from 23-40% (Van Damme et al., 2012; Marrazzo et al., 2015).

Preclinical testing revealed that TFV has low oral bioavailability, due primarily to the ionic charges on its phosphonate group (Cundy et al., 1998). The structure of TDF masks these charges, improving intestinal absorption and making an oral formulation feasible (Shaw et al., 1997). After absorption in the intestine, TDF is converted into TFV through hydrolysis of its two ester groups. TFV is therefore the primary circulating compound in TDF-based treatments (Kearney et al., 2004). After uptake into cells, TFV undergoes sequential phosphorylation by adenylate kinase and nucleoside diphosphate kinase into its active form, tenofovir diphosphate (TFV-DP). TFV-DP inhibits HIV-1 replication by competing with endogenous deoxyadenosine 5'-triphosphate (dATP), inhibiting reverse transcriptase activity and halting strand elongation when incorporated into viral DNA.
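The competitive mechanism just described can be captured by a simple binding-competition expression; the following sketch is illustrative only, with placeholder affinity constants rather than measured values.

```python
def rt_inhibition(tfv_dp, datp, k_tfv=1.0, k_datp=1.0):
    """Fraction of reverse-transcriptase incorporation events blocked by
    TFV-DP competing with endogenous dATP (schematic competitive binding;
    the affinity constants k_tfv and k_datp are placeholders)."""
    return (tfv_dp / k_tfv) / (1 + tfv_dp / k_tfv + datp / k_datp)

print(rt_inhibition(tfv_dp=5.0, datp=2.0))  # ~0.62 of events blocked
```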
Several pharmacokinetic models of TFV have been developed, but relatively few have focused specifically on PrEP therapy. Duwal et al. developed a pharmacokinetic model linking plasma concentrations of orally-administered TDF to intracellular concentrations of TFV-DP, which is used to drive a viral dynamics model. (Duwal et al., 2012) This model allows for the estimation of prophylactic efficacy while taking into account variable dosing of TDF, a necessity given that variability in adherence to the prescribed dosing regimen has been observed to be a determinant of the efficacy of PrEP therapy. A two-compartment model was used to describe the PK of TFV, with a first-order rate constant describing the absorption of TDF and its conversion to TFV. A third compartment is used to depict the intracellular concentration of TFV-DP, with a Vmax model describing the saturable processes of cellular uptake of TFV and its phosphorylation to TFV-DP. A diagram of the compartmental model is included in Supplementary Figure 1. The group chose to ignore inter-individual variability in the plasma pharmacokinetics of TFV, as it is arguably negligible relative to the degree of variability in the intracellular pharmacokinetics of TFV-DP. The pharmacodynamic model borrowed a hybrid stochastic-deterministic model of viral dynamics described by von Kleist et al. (von Kleist et al., 2011) Briefly, the model incorporated free infectious and noninfectious virus, as well as uninfected, early-infection, and late-stage-infection T-cells and macrophages. For each possible event in the infection process, such as infection of a cell, integration of the viral genome, or the production of new virus particles, the rate constant is determined by both the quantity of the species involved and a propensity function describing the likelihood of the event occurring. If either the propensity function or the quantity of any of the species involved in a given reaction is below a pre-specified threshold, that reaction is modeled as a stochastic process; otherwise, the reaction is treated as a deterministic process. Simulations of HIV challenges suggested that variability in adherence had little effect on the efficacy of TDF PrEP therapy for adherence above 60%, but the effect became significant when adherence dropped below 40%. However, the size of the viral inoculum had a significant impact on efficacy regardless of adherence rates. This led the authors to suggest that TDF-based PrEP may be most effective when used in the prevention of sexual transmission of HIV, as this route generally involves smaller inoculum sizes than transmission via shared needles or blood transfusions. Prophylactic therapies against HIV require sufficient drug concentrations at the site of exposure. As sexual contact is the most common route of transmission, characterizing the distribution of antiretrovirals in anogenital tissues is of particular importance in the development of HIV PrEP therapies. (Centers for Disease Control and Prevention, 2018) Collins et al. recently published a population PK model relating plasma and rectal tissue concentrations of TFV, demonstrating that non-linear mixed-effects (NLME) modeling is a viable approach for predicting TFV tissue exposures using a sparse tissue and rich plasma sampling scheme. (Collins et al., 2017) A diagram of the compartmental model used by Collins et al. can be found in Supplementary Figure 2.
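To make the structure of the Duwal-type oral model concrete, here is a minimal sketch: two-compartment plasma disposition of TFV with first-order absorption of TDF, plus a saturable intracellular TFV-DP compartment. All parameter values are hypothetical placeholders rather than the published estimates, and amounts are in arbitrary units; this is a schematic of the model class, not a reimplementation of the published model.

```python
import numpy as np
from scipy.integrate import solve_ivp

ka = 1.0               # 1/h, first-order absorption of TDF (with conversion to TFV)
CL, Q = 40.0, 10.0     # L/h, plasma clearance and intercompartmental clearance
Vc, Vp = 100.0, 200.0  # L, central and peripheral volumes
Vmax, Km = 2.0, 5.0    # saturable uptake/phosphorylation to TFV-DP (arbitrary units)
kout = 0.05            # 1/h, first-order loss of intracellular TFV-DP

def rhs(t, y):
    a_gut, a_c, a_p, a_dp = y         # gut, central, peripheral, intracellular TFV-DP
    conc = a_c / Vc                   # plasma TFV concentration
    form = Vmax * conc / (Km + conc)  # Michaelis-Menten TFV-DP formation
    return [
        -ka * a_gut,
        ka * a_gut - (CL / Vc) * a_c - (Q / Vc) * a_c + (Q / Vp) * a_p - form,
        (Q / Vc) * a_c - (Q / Vp) * a_p,
        form - kout * a_dp,
    ]

# One oral dose at t = 0, simulated over 48 h
sol = solve_ivp(rhs, (0.0, 48.0), [300.0, 0.0, 0.0, 0.0], dense_output=True)
print("Intracellular TFV-DP amount at 24 h:", sol.sol(24.0)[3])
```

Extending this to repeated, imperfectly adhered-to dosing amounts to re-dosing the gut compartment on a (possibly gapped) schedule, which is how variable adherence enters such simulations.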
Various long-duration formulations of TFV are being investigated for PrEP. Vaginal gel, ring, and film formulations have been developed with the goal of providing women in high-risk populations with multiple options for prophylaxis in an effort to improve adherence. More recently, there have been efforts to develop rectal topical TFV formulations, as receptive anal intercourse is a common route of exposure to HIV. Gao and Katz created a multicompartment physiological model for the pharmacokinetics of TFV administered via a vaginal gel. (Gao and Katz, 2013) The model allows for the simulation of concentrations across the vaginal mucosa, with dedicated compartments for the gel, vaginal epithelium, stroma, and uptake into the blood and lymphatic systems. This model offered insights into the spatial distribution of TFV throughout the layers of the vaginal mucosa, which is important for assessing whether prophylactic concentrations of TFV are being achieved in the vaginal stroma. Additionally, it suggested that variations over the course of a menstrual cycle, such as changes in the thickness of the epithelium, could have a significant impact on TFV transport into the stroma. More recently, Gao and Katz published a physiological model for TFV administration via an enema delivery vehicle. (Gao and Katz, 2017) Compared to the vaginal delivery model, the geometry of the colorectal canal is fairly complex, with both macroscopic folds and creases and microscopic, columnar, fluid-filled crypts in the rectal wall. As a result, modeling rectal drug delivery requires a more detailed mathematical description of the movement of the delivery vehicle itself. Given the larger overall surface area and thinner epithelium of the rectal mucosa, the model predicts much more rapid delivery of TFV via rectal administration than via vaginal administration. An important aspect of PK/PD studies of topically administered microbicides is accurately and reliably characterizing drug concentration profiles in tissues. This can be difficult due to both inherent variability in drug concentrations in mucosal tissues and luminal fluid, and limitations in the frequency with which tissue biopsies can be performed. In contrast, acquiring pharmacokinetic data from blood samples is relatively simple and can be carried out more frequently to provide a richer depiction of the pharmacokinetic profile than might be possible from fluid or tissue samples. Recently, Govil and Katz published a proof-of-concept study of a modeling approach utilizing feedforward neural networks to link plasma pharmacokinetic models of TFV to vaginal tissue PK and PD endpoints. (Govil and Katz, 2019) Emtricitabine Emtricitabine (FTC) is a nucleoside reverse transcriptase inhibitor effective against HIV-1. In the context of PrEP, FTC is administered as a combination oral therapy with the NRTI tenofovir disoproxil fumarate. Like tenofovir, FTC undergoes intracellular phosphorylation to its active form, emtricitabine 5'-triphosphate (FTC-TP), an analogue of deoxycytidine 5'-triphosphate (dCTP). Incorporation of FTC-TP into HIV-1 DNA during viral DNA replication terminates chain elongation. (Modrzejewski and Herman, 2004) A recent model published by Garrett et al. found that FTC plasma concentrations were best described by a two-compartment PK model with first-order absorption and saturable metabolite formation, similar to the previously described model for TDF.
(Garrett et al., 2018) The metabolite FTC-TP is described by a one-compartment model representing concentration within peripheral blood mononuclear cells (PBMCs), the main site of action, with movement from the intracellular space to plasma represented by a first-order process. FTC has not been investigated as a monotherapy for HIV PrEP. However, Valade et al. have published a population model for FTC in HIV-1 infected patients with varying degrees of renal impairment, as renal elimination appears to be a primary determinant of FTC pharmacokinetics. (Valade et al., 2014) This model was later expanded to include seminal plasma FTC concentrations in MSM, both as a measure of viral suppression and to characterize concentrations in male genital tissues. (Valade et al., 2015) The parameter estimates from these models are shown in Table 1. In addition, non-compartmental PK parameters for FTC are included in Supplementary Table 1. a Parameter values taken from (Valade et al., 2014; Valade et al., 2015). Although not necessarily directly applicable to PrEP therapies, these models may provide initial values for future models of FTC. Tenofovir Disoproxil and Emtricitabine Truvada, a fixed-dose oral combination TDF-FTC therapy originally approved in 2004 for the treatment of HIV infection, received approval in 2012 for use as a PrEP therapy in individuals at high risk of contracting HIV and was the first therapy approved for HIV PrEP. (U.S. Food and Drug Administration, 2012) The use of a combination therapy incorporating two different nucleotide analogues provides a synergistic effect and reduces the impact of resistance to either of the two drugs individually. Additionally, the incorporation of nucleoside analogues during reverse transcription is a saturable process. Each viral DNA sequence contains a finite number of each nucleoside, so by targeting multiple nucleosides, the overall probability of incorporating an inhibitory nucleoside analogue is increased. Although the pharmacokinetics of the two drugs can be modeled independently, a model published by Cottrell et al. attempts to capture the distribution of both TFV and FTC in vaginal, cervical, and rectal tissue in order to connect tissue concentrations to protective effect against HIV infection. (Cottrell et al., 2016) A diagram of the model can be found in Supplementary Figure 3. Their study suggested that TFV has a propensity to distribute to colorectal tissue while FTC is more prone to accumulate in the female genital tract. Furthermore, by including endogenous nucleotide concentrations, the ratios of TFV-DP to dATP and FTC-TP to dCTP can be used as PD endpoints. The distribution of endogenous nucleotides also shows tissue specificity, with significantly higher nucleotide concentrations in female genital tract tissues. Based on these tissue distribution characteristics, it was predicted that adherence to 2 of 7 weekly doses of oral TDF with or without FTC was sufficient to provide protection in colorectal tissue, while adherence to a minimum of 6 out of 7 weekly doses was necessary to protect the female genital tract from HIV infection. These predictions are consistent with the results of the iPrEX trial, in which two doses of TDF-FTC per week were sufficient to significantly decrease the risk of rectal HIV acquisition in MSM, as well as the FEM-PrEP and VOICE studies, which found that similarly low levels of adherence did not confer any reduction in the rate of vaginal HIV acquisition. (Van Damme et al., 2012; Grant et al., 2014; Marrazzo et al., 2015)
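The intuition behind these tissue-specific adherence thresholds is that a long intracellular half-life lets the active metabolite accumulate across doses, so the steady-state trough level, relative to a tissue-specific protective target, depends on how many doses per week are actually taken. The sketch below illustrates only that accumulation logic; the half-life and dose increment are hypothetical placeholders, not values from Cottrell et al., whose analysis worked with TFV-DP:dATP and FTC-TP:dCTP ratios rather than raw metabolite levels.

```python
import numpy as np

half_life_days = 4.0                 # hypothetical intracellular half-life
k = np.log(2) / half_life_days       # first-order elimination rate (1/day)
dose_increment = 1.0                 # metabolite gain per dose (arbitrary units)

def trough_level(doses_per_week, weeks=12):
    """Daily time steps; doses taken on the first N days of each week."""
    level = 0.0
    for day in range(weeks * 7):
        if day % 7 < doses_per_week:
            level += dose_increment
        level *= np.exp(-k)          # one day of first-order decay
    return level

for n in range(1, 8):
    print(f"{n} dose(s)/week -> steady-state trough ~ {trough_level(n):.2f}")
```

Because the trough rises steeply with dosing frequency, a tissue with a low protective target (as predicted for colorectal tissue) is covered at a few doses per week, while a tissue with a high target (the female genital tract) requires near-daily dosing.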
Tenofovir Alafenamide Tenofovir alafenamide fumarate (TAF) is a novel prodrug of tenofovir, and shows potential as a replacement for TDF in PrEP therapy. (De Clercq, 2016) In October 2019, a combination therapy of TAF and FTC became the second approved PrEP medication in the US, though it was only approved for use in men and transgender women. While TDF is an ester prodrug that undergoes rapid metabolism in plasma to TFV, TAF is primarily metabolized intracellularly by the enzyme cathepsin A. (Birkus et al., 2007) In clinical studies, TAF has been shown to dramatically increase TFV-DP exposure in PBMCs, with an 8 mg dose of TAF being approximately equivalent to a 300 mg dose of TDF. (Ruane et al., 2013) An overview of the parameters of TAF and TDF is presented in Table 2. The fact that TAF is metabolized intracellularly reduces systemic concentrations of TFV. Unlike TFV, TAF is not a substrate for the renal organic anion transporters OAT1 and OAT3, which reduces both its rate of renal elimination and the risk of nephrotoxicity associated with TFV. (De Clercq, 2018) However, a recent meta-analysis of clinical trials comparing the efficacy and safety of TAF and TDF monotherapies with and without the pharmacokinetic enhancers ritonavir (RTV) and cobicistat (COBI) found that TAF reduced the incidence of bone mineral density depletion and had slightly better viral suppression than TDF, but only when administered with RTV and COBI. (Hill et al., 2018). In addition to TDF and FTC, Garrett et al. included a model of TAF in their 2018 publication. (Garrett et al., 2018) Unlike TDF and FTC, they depict TAF using a single plasma compartment, likely owing to the fact that the TAF prodrug is metabolized to TFV intracellularly, drastically reducing the circulating concentrations of TFV, which is usually modeled with two-compartment disposition. A transit compartment and first-order input are used to model the uptake into PBMCs, conversion into TFV, and subsequent phosphorylation into TFV-DP. Elimination from PBMCs is described as a first-order process. Maraviroc Maraviroc (MVC) is a small-molecule antagonist of the chemokine co-receptor CCR5. (Dorr et al., 2005) HIV-1 infection begins with a gp120 glycoprotein trimer on the virion binding to three CD4 proteins on the target cell. This causes a conformational change in gp120 that exposes additional binding sites that must interact with a co-receptor on the cell surface, with CXCR4 and CCR5 being the two primary co-receptors used by HIV-1. Interaction with the correct co-receptor allows a second protein, gp41, to undergo a conformational change and penetrate the cell membrane of the target cell, which in turn allows membrane fusion between the HIV-1 virion and target cell, followed by the release of HIV-1 RNA into the cytoplasm of the host cell. (Panos and Watson, 2015) HIV-1 strains can display an affinity, or tropism, toward utilizing either CXCR4 or CCR5, in which case they are referred to as the X4 or R5 variants, respectively. Interestingly, the relative prevalence of these variants shifts over the course of the disease, with the R5 variant being far more prevalent during the initial infection and the X4 variant gradually increasing as the disease progresses toward AIDS.
(Berger et al., 1999) The reason for this shift in tropism has not been definitively established, but what is clear is that the R5 variant plays a key role in HIV-1 transmission, so much so that two individuals with homozygous mutations in the CCR5 gene proved extremely resistant to HIV-1 infection, despite repeated exposures. (Liu et al., 1996) This makes CCR5 an attractive target for PrEP therapy, as it appears to be integral to the establishment of the initial HIV infection. Dose escalation studies of MVC in healthy volunteers found it was well absorbed after oral administration, reaching T_max within 30 min to 4 h post-dose. (Abel et al., 2008b) MVC exhibits non-dose-proportional pharmacokinetics, with higher dose levels leading to proportionally smaller increases in AUC and C_max. The absolute oral bioavailability of MVC was estimated at 23% for an oral dose of 100 mg, increasing to 33% for a dose of 300 mg. (Abel et al., 2008a) Mass-balance analysis suggested 60% of orally administered MVC is lost to first-pass metabolism. (Abel et al., 2009) MVC is a substrate for both the metabolizing enzyme cytochrome P450 (CYP) 3A4 and the efflux transporter P-glycoprotein, which likely accounts for the non-proportional pharmacokinetics. (Abel et al., 2001) Approximately 23% of MVC clearance is renal, the remaining 77% is believed to be metabolic, and overall clearance does not appear to be affected by dose. (Abel et al., 2008a). A table of non-compartmental parameters from the FDA clinical pharmacology and biopharmaceutics review of MVC can be found in Supplementary Table 2. Chan et al. developed a population pharmacokinetic model of MVC based on a meta-analysis of 17 phase 1 and 2 studies in both healthy and HIV-infected subjects; the resulting parameter estimates are presented in Table 3, and a diagram of the model can be found in Supplementary Figure 4. (Chan et al., 2008) MVC disposition was characterized by a two-compartment model, with drug input from oral dosing described by a first-order absorption rate constant with a time delay. The model incorporated a sigmoidal E_max model to describe the nonlinearity of the extent of absorption (F_abs), with F_abs expressed as a function of ABS_Emax, the maximum fraction absorbed, and ED50, the dose producing 50% of maximal absorption. A power function was used to describe the relationship between the absorption rate constant (ka) and dose. The effect of food emerged as a significant covariate, with a fed state causing a linear reduction in ka and an exponential reduction in ABS_Emax and ED50. Interpatient variability was included on ED50, hepatic extraction ratio (E_H), intercompartmental clearance (CL_ic), absorption rate constant, and central and peripheral volumes of distribution (V_c and V_p). Both race and age also emerged as statistically significant covariates, with race affecting V_p and CL_ic and age influencing CL_ic. In the final model, race was implemented as a binary variable of Asian vs. non-Asian. In Asian subjects, estimates for E_H were reduced by approximately 14%, which translated into a 17.7% increase in bioavailability (F) due to a reduction in first-pass hepatic elimination. Asian patients were also estimated to have a 1.8% decrease in CL_ic and only a 0.23% decrease in V_p. Age emerged as a covariate for CL_ic, with an increase of 0.349 L/h for each year of age over 30. Despite being statistically significant, the differences due to race and age were deemed to be clinically insignificant, requiring no dose adjustment.
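The dose-dependent absorption component just described can be sketched as follows. The functional forms (a sigmoidal E_max model for F_abs and a power function for ka) follow the structure described in the text, but all parameter values are illustrative placeholders chosen only to roughly reproduce the reported rise in bioavailability from about 23% at 100 mg to about 33% at 300 mg; they are not Chan et al.'s estimates, and the food effect is omitted.

```python
def f_abs(dose_mg, abs_emax=0.40, ed50=75.0, gamma=1.0):
    """Sigmoidal Emax model: fraction of an oral dose absorbed."""
    return abs_emax * dose_mg**gamma / (ed50**gamma + dose_mg**gamma)

def ka(dose_mg, a=0.8, b=-0.3):
    """Power function relating the absorption rate constant (1/h) to dose;
    the exponent's sign and magnitude here are purely hypothetical."""
    return a * dose_mg**b

for d in (100.0, 300.0, 600.0):
    print(f"dose {d:>5.0f} mg: F_abs = {f_abs(d):.2f}, ka = {ka(d):.3f} 1/h")
```

With these placeholder values, F_abs evaluates to roughly 0.23 at 100 mg and 0.32 at 300 mg, mirroring the reported trend while making the saturating shape of the absorption model explicit.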
Weight, sex, and HIV status were also included in the covariate modeling process, but had no significant impact on model parameters. The majority of residual error occurred during the absorption phase, so the error model was fit as a function of time after dose. Dapivirine Dapivirine (DPV) is a second-generation non-nucleoside reverse transcriptase inhibitor (NNRTI). Initially intended for use in highly active antiretroviral therapy (HAART) against HIV strains resistant to first-generation NNRTIs, evidence of poor oral absorption early in development led to the investigation of DPV as a topical microbicide. (de Béthune, 2010) The International Partnership for Microbicides (IPM), which currently holds exclusive rights to DPV, has multiple formulations in development, including vaginal and rectal gels, intravaginal films, and intravaginal rings. A monthly DPV intravaginal ring being developed by IPM has been through Phase III and Phase IIIb testing, and a regulatory decision is anticipated at some point in 2019. (Baeten et al., 2016; Nel et al., 2016; Baeten et al., 2018; Nel et al., 2018) Given the poor performance of DPV as an oral PrEP compound, and its repurposing for topical delivery, there have been relatively few modeling studies performed. Halwes et al. developed a pharmacokinetic model for intravaginal delivery via a DPV gel, based on the TFV gel model developed by Gao and Katz. (Gao and Katz, 2013; Halwes et al., 2016) Long-Acting Injectable Formulations: Rilpivirine and Cabotegravir Long-acting injectable (LAI) formulations have recently been the subject of research interest for PrEP therapy. This administration route avoids the problems with topical or enteral absorption, while allowing for long-term sustained release of drug into systemic circulation. The drugs rilpivirine (RPV) and cabotegravir (CAB) have recently shown promise as a combination LAI treatment for HIV-1 infected adults. (Spreen et al., 2013) Like dapivirine, rilpivirine is a second-generation NNRTI currently being investigated for the treatment of HIV variants resistant to common NNRTIs such as efavirenz (EFV) and nevirapine (NVP). (Ripamonti et al., 2014) Currently prescribed as an oral formulation for treatment-naïve HIV-1 patients, RPV is being investigated as a long-acting intramuscular injectable for HIV PrEP. Cabotegravir is an integrase strand transfer inhibitor (INSTI) being investigated for use in both HIV treatment and prophylaxis. Although an oral formulation is being tested, the low solubility and slow metabolism of CAB make it suitable for use as a long-acting injectable. (Cattaneo and Gervasoni, 2018) Rajoli et al. developed a physiologically-based pharmacokinetic model of long-acting injectable antiretroviral formulations. (Rajoli et al., 2015) Although the model was initially validated using oral drug formulations, it was able to simulate the pharmacokinetics of LA RPV administered via intramuscular injection. Unfortunately, the model does not include tissue compartments that are relevant to PrEP, such as rectal and female genital tract tissues. Despite this, it may serve as a useful starting point for future physiologically-based models of LAI formulations. VIRAL DYNAMICS AND PHARMACOLOGY Ultimately the goal of PK/PD modeling is to connect drug exposure to clinical response. In the case of modeling antiretroviral therapies for HIV, this requires some description of HIV viral dynamics. Pharmacodynamic parameters can be derived from in vitro and ex vivo assays, but caution must be exercised when attempting to translate these results to in vivo efficacy.
Tissue explant models, for example, can demonstrate high levels of inter-patient variability in infectivity. (Kay et al., 2018a) Furthermore, a large viral inoculum is required to establish an infection in ex vivo systems, far in excess of what would be required in vivo. While HIV dynamics in an active infection can generally be modeled as a deterministic process, the underlying behavior of individual virions is inherently stochastic. A very small number of initial virions serve as progenitors during the initial infection, which is best described as a stochastic process. (Carlson et al., 2014) Duwal et al. have described a multiscale modeling approach for predicting the efficacy of HIV PrEP candidates. (Duwal et al., 2016) Their modular framework incorporates models for pharmacokinetics, viral transmission, and long-term efficacy, but key to the estimation of efficacy are the viral replication and molecular mechanism of action (MMOA) models. The MMOA model was developed by von Kleist et al. and attempts to mechanistically describe the mechanism of action of NRTIs. Briefly, the model depicts the process of DNA polymerization using a Markov jump process, where each state in the model represents the incorporation of an additional nucleoside. From each state, the chain can either shorten through pyrophosphorolysis, extend by incorporation of a nucleoside through polymerization, or be terminated via incorporation of a nucleoside analogue. The reaction rates for each of these processes are specific to each nucleoside and nucleoside analogue. Nucleoside analogues achieve inhibition of viral replication by increasing the amount of time required to complete polymerization of viral DNA, as sequences incorporating a nucleoside analogue cannot continue the polymerization process until the analogue has been removed. If the virus cannot replicate its DNA quickly enough, it is cleared intracellularly. By computing the mean time to complete the polymerization of the full viral DNA sequence and comparing it to the mean time required for intracellular clearance, it is possible to estimate the probability of a virus successfully replicating itself. Based on the binding affinity and polymerization rate constant of both endogenous nucleosides and their analogues, it is then possible to estimate the effect of a given concentration of nucleoside analogue on viral proliferation. The effects of NRTIs on viral replication are then incorporated into a model of HIV viral dynamics. This model represents the process of infection by describing the viral replication cycle as a Markov jump process with five possible states: free virus, early infected T-cell, late infected T-cell, infected T-cell producing viral progeny, and virus cleared from the system before reaching the productive infection state. The effects of NRTIs are incorporated into the model in two ways. First, they reduce the rate of transition from the free-virus state to the early infected T-cell state, by increasing the time required for the virus to enter the cell and successfully reverse transcribe its genome. Second, they increase the rate of clearance of the virus due to failed attempts to infect a cell. Though the study focused on the effects of NRTIs, the viral dynamics model can easily incorporate the mechanisms of other classes of antiretroviral compounds. (Duwal et al., 2019)
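A minimal sketch of this five-state replication cycle: if each stage transition is treated as a race between a forward rate and a clearance rate, the per-virion probability of reaching the productive state is the product of the stage-wise success probabilities. The rates below are hypothetical placeholders, and the drug effect is reduced to a single scaling of the reverse-transcription step (the published model also raises the clearance rate, omitted here for brevity).

```python
# Stages: free virus -> early infected cell -> late infected cell -> productive.
# At each stage, clearance competes with progression (rates in 1/day, hypothetical).
forward = {"entry+RT": 0.5, "integration": 0.4, "production": 0.3}
clear = {"entry+RT": 2.0, "integration": 0.1, "production": 0.05}

def infection_prob(rt_inhibition=0.0):
    """Per-virion probability of productive infection; rt_inhibition in [0, 1)
    scales down the reverse-transcription step, mimicking an NRTI."""
    p = 1.0
    for stage in forward:
        f = forward[stage]
        if stage == "entry+RT":
            f *= 1.0 - rt_inhibition
        p *= f / (f + clear[stage])   # probability the forward jump wins the race
    return p

baseline, treated = infection_prob(0.0), infection_prob(0.9)
print(f"prophylactic efficacy = {1 - treated / baseline:.1%}")
```

Other drug classes slot into the same scheme by scaling different transitions: co-receptor antagonists act on the entry step, integrase inhibitors on the early-to-late transition, and protease inhibitors on the production step, exactly as described next.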
The effects of co-receptor antagonists can be modeled as inhibition of the transition from free virus to early infection, together with an increase in clearance due to failed attempted infections; integrase inhibitors can be described by inhibiting the rate of transition from early- to late-stage infected T-cells; and protease inhibitors can impede the transition from productively infected cells to free virus. The primary goal of this level of mechanistic detail is the prediction and identification of compounds likely to be well-suited to PrEP. The widespread use of pharmacokinetic modeling has significantly reduced the rates of drug failures due to pharmacokinetics in the later stages of development, as compounds with poor PK properties are relatively easy to screen for. Screening compounds based on their pharmacodynamics is significantly more involved, particularly in a paradigm like PrEP, where adherence and transmission rates can have a significant impact on efficacy. In their analyses, Duwal et al. identified several antiretrovirals that appear to have favorable pharmacodynamic properties. Efavirenz, nevirapine, etravirine, and rilpivirine were all found to be highly potent PrEP agents, with prophylactic efficacy maintained even after a three-day gap in administration. The group also found that maraviroc and rilpivirine maintain 50% and 72% efficacy, respectively, at low concentrations, and noted that simulations suggested that after three days of missed doses, the efficacies of raltegravir and maraviroc dropped to 8% and 50%, respectively, while rilpivirine maintained 100% prophylactic efficacy. ADHERENCE AND TRANSMISSION Given that the efficacy of PrEP is highly dependent on patient adherence, it may be important to incorporate models of adherence when modeling PrEP at a population level. (Haberer et al., 2015; Fonner et al., 2016) To date, there are few published models describing HIV transmission in a population utilizing PrEP, and of those very few incorporate PK/PD. One exception is the previously described PK/PD model of FTC, TDF, and TAF created by Garrett et al., based on earlier studies by Cottrell et al. (Cottrell et al., 2016; Cottrell et al., 2017; Garrett et al., 2018) Using Monte Carlo simulations of 1000 patients each, a variety of treatment scenarios were investigated. In addition to the standard treatment doses (300 mg TDF, 200 mg FTC, or 25 mg TAF), dosing regimens included double the standard dose, steady-state dosing with one to seven doses per week, and on-demand dosing involving a double dose either 2 or 24 h pre-exposure, followed by standard treatment doses at 24 and 48 h post-exposure. All three monotherapies and both TDF + FTC and TAF + FTC combination therapies were simulated for all dosing scenarios, with protective effect estimated based on the ratio of endogenous nucleosides to nucleoside analogues. However, this assumption has been criticized for failing to account for the nonlinear, saturable nature of the polymerization process. (Duwal et al., 2016) A second notable example of a model incorporating transmission, adherence, and PK/PD is the previously mentioned multiscale modeling framework described by Duwal et al. (Duwal et al., 2016) The group incorporated a model of viral exposure to quantify the relationship between donor viral load and the number of transmitted viral particles.
Briefly, they assumed a linear relationship between the log of the viral load in the donor and the log of the probability of infection, which led to the derivation of a power function relating viral load to the number of transmitted viral particles per sexual encounter. The viral content of infected individuals was assumed to be lognormally distributed, based on observed data from individuals shortly following seroconversion. By combining estimates of viral load, the corresponding estimate of the number of transmitted viral particles, and the estimates from the viral dynamics model described in the previous section, an overall per-encounter probability of infection can be calculated. Finally, a population-level model incorporating the number of infected individuals and the probability of unprotected sex acts can be used with the outputs of the viral exposure model to simulate clinical trials and estimate an overall trial efficacy. The majority of models of adherence in HIV PrEP therapy are epidemiological models of HIV transmission in a population. Although these models generally do not incorporate pharmacokinetics or pharmacodynamics, they may be informative to population PK modelers looking to capture the effects of non-adherence. One major caveat to the use of these models is their potential lack of generalizability, as the behavioral and societal factors influencing adherence rates vary with geography and culture. Even within the same geographic region, different subpopulations may exhibit different rates of adherence to PrEP, which may make it difficult to develop a generalized model of adherence. A 2008 paper by Vissers et al. details a simulation study of various PrEP therapy scenarios in Botswana, Nyanza Province in Kenya, and Southern India. (Vissers et al., 2008) This study focuses on HIV transmission in the sex industry, with sex workers and their clients considered high risk relative to the rest of the population. The group used a compartmental model adapted from earlier models of antiretroviral therapy and male circumcision interventions. (Nagelkerke et al., 2002; Nagelkerke et al., 2007) Briefly, the model population is stratified into high- and low-risk groups, with compartments for uninfected, uninfected on PrEP, early HIV infection, early infection on PrEP, and late-stage infection. Male and female populations are modeled separately within each compartment. Only heterosexual transmission is modeled, with three distinct types of sexual relationships able to spread HIV: client and sex worker, marriage-like relationships, and nonpaid casual relationships. It is assumed that HIV transmission through the latter two relationships only occurs in the low-risk population. In other words, it is assumed that the only relationships engaged in by the high-risk population are client-sex worker relationships. Additionally, the model assumes that condoms are only used during client-sex worker relationships. A certain percentage of each risk group is assumed to move to the other group annually, at which point it is assumed they will discontinue PrEP, should they be in the PrEP group. In the event that a member of the PrEP group becomes infected, be it through failure of the treatment or lack of adherence, it is assumed that individuals will continue to take PrEP for an average of one year.
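The compartmental logic of such transmission models can be sketched in radically simplified form. The ODE system below collapses the structure to three compartments (susceptible, susceptible on PrEP, infected) with a single transmission rate; the actual Vissers et al. model stratifies by sex, risk group, relationship type, and infection stage, and every parameter here is a hypothetical placeholder.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta = 0.3     # 1/year, transmission rate (hypothetical)
eff = 0.9      # PrEP efficacy as a reduction in susceptibility (hypothetical)
uptake = 0.1   # 1/year, rate of PrEP initiation among susceptibles (hypothetical)

def rhs(t, y):
    s, p, i = y                    # susceptible, on PrEP, infected (fractions)
    foi = beta * i / (s + p + i)   # force of infection
    return [
        -foi * s - uptake * s,
        uptake * s - (1 - eff) * foi * p,
        foi * s + (1 - eff) * foi * p,
    ]

sol = solve_ivp(rhs, (0.0, 20.0), [0.94, 0.01, 0.05], dense_output=True)
print(f"Infected prevalence after 20 years: {sol.sol(20.0)[2]:.3f}")
```

Behavioral feedbacks such as risk compensation, which drive the condom-use result discussed next, would enter such a sketch by making beta itself depend on PrEP coverage.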
Simulations suggested that PrEP would lead to a significant decrease in HIV infections in Africa. However, the study found that under certain circumstances PrEP could actually lead to an increase in HIV cases in southern India, primarily due to high rates of condom use in the sex industry. If the adoption of PrEP were to lead to a fairly small decrease in condom use, roughly 15%, the model predicts that the number of new HIV cases would increase. The authors assert that any implementation strategy must emphasize that PrEP is a supplement to condom use, not a substitute. One concern raised with the introduction of PrEP therapy was its potential impact on the prevalence of drug resistance. Van de Vijver et al. performed a model comparison study in order to investigate this issue. (Van De Vijver et al., 2013) Three models of HIV transmission and disease progression were investigated. The first was the Synthesis Transmission Model, a stochastic model for heterosexual HIV transmission in sub-Saharan Africa beginning in the 1980s, with demographic information primarily incorporated from the HIV epidemic in South Africa. (Phillips et al., 2011) This model simulates individual-level HIV transmission based on age, gender, viral load, sexual risk behavior, presence of antiretroviral drugs, presence of specific drug-resistance mutations, and adherence to drug regimens. Sexual risk behavior was based on the number of short-term unprotected sex partners and the presence of a long-term unprotected sex partner within a given three-month period. In the adaptation by Van de Vijver et al., PrEP was introduced via a campaign targeting serodiscordant couples in long-term partnerships. Adherence was incorporated with both a fixed inherent tendency to adhere and a period-to-period variability in adherence. Adherence is further modified by drug toxicity, the probability of a patient voluntarily interrupting clinic visits, and the probability of interruptions in the drug supply. All of these parameters are assumed to vary by geographic region. The second model in the comparison by Van de Vijver et al. is the South African Transmission Model, initially developed by Abbas et al. and based on PrEP trials in South and Sub-Saharan Africa. Like the previous model, it was calibrated based on the progression of the South African HIV-1 epidemic, and exclusively models heterosexual transmission. While the first model was entirely stochastic, this model provides a more deterministic framework by incorporating disease progression and viral dynamics. Briefly, the model stratifies the population based on gender, PrEP/ARV treatment status, infection status, stage of disease, and HIV-1 drug susceptibility, with susceptibility classified as either drug-sensitive or drug-resistant, and drug resistance further classified as acquired or transmitted resistance. Inappropriate PrEP use, which is described as PrEP use subsequent to acute HIV infection, is modeled based on whether the individual taking PrEP is in a pre- or post-seroconversion stage of the infection. After seroconversion it is assumed that PrEP use continues for a length of time corresponding to the HIV testing interval, the default being six months. In order to model sexual transmission, individuals of both genders are stratified into four sexual activity levels. These levels are used to construct a sexual activity matrix that describes, for any individual of gender g and activity level k, denoted g_k, and a prospective partner of the opposite gender g' and activity level l, denoted g_l', the probability of forming a sexual partnership, denoted g_kl.
(Garnett and Anderson, 1993) The probability is derived from the total population of g_l', the tendency of g_k to engage in assortative versus random mixing, and the rate at which g_k individuals change partners when in a partnership with g_l' individuals. The probability of HIV transmission for a single sex act within a sexual partnership is represented as a function of the partner's ARV treatment status, disease stage, and HIV-1 variant. The total probability of HIV transmission for a partnership is then the per-sex-act probability multiplied by the total number of sex acts for a partnership between two individuals g_k and g_l'. The protective effect of PrEP on an individual is modeled as a reduction in the susceptibility of that individual to the transmission of a given HIV variant, multiplied by the average adherence of the individual, which is itself determined by the individual's adherence stratum. The third and final model included in the comparison was the Macha Transmission Model. While the other studies included in the comparison focused on the South African HIV epidemic, the model's namesake is a rural hospital in southern Zambia, roughly 80 kilometers from the nearest town, which serves as the only major HIV clinic for roughly 90,000 people. Despite being calibrated to a different population, the Macha model shares a number of features with the South African model. The Macha model is a deterministic, compartmental model incorporating HIV disease progression. Once again the population is stratified based on sexual activity level, with higher activity levels corresponding to a greater number of sexual partners per year. The disease progression model depicts the stages of infection as acute HIV, chronic HIV, early AIDS, and late AIDS, with the AIDS compartment subdivided primarily to reflect changes in sexual activity associated with progression to AIDS: early-stage AIDS is assumed to be characterized by a reduction in sexual activity, and therefore transmission, while sexual activity halts entirely in the late stage of the disease. Just as in the South African model, the Macha model adapts the mixing matrix described by Garnett and Anderson in order to model transmission in a heterosexual population stratified by sexual activity level. (Garnett and Anderson, 1993) However, the Macha model differs from the South African model in that it stratifies the infected population into individuals who have undergone HIV testing and are aware of their infection, and those who are unaware. The model assumes that individuals who are aware of their seropositive status may make some effort to reduce their acquisition rate of new sexual partners. It assumes this effect is not uniform across all sexual activity levels, with the two lowest levels reducing acquisition rates by up to 40% while the two highest levels show no change in behavior. This stratification leads to two mixing matrices; one is identical to the previously described matrix and applies to individuals who are unaware of their HIV infections, while a second matrix incorporates the reduction in the rate of partner acquisition for individuals who are aware of their infection. CONCLUSION PrEP therapy for HIV remains an active and growing field of research. In addition to the currently approved PrEP therapies, several alternatives are in the mid to late stages of development. Many of these therapies are long-acting or on-demand approaches that aim to address problems of adherence and availability.
The primary aim of this review was to provide an overview of the available pharmacokinetic models of both current PrEP regimens and antiretrovirals currently under investigation as PrEP agents, while highlighting some of the challenges associated with modeling more complex formulations and delivery systems. In addition, it is important to note the challenges involved in translating in vitro and ex vivo estimates of antiretroviral efficacy into estimates of clinical outcomes. Finally, an overview of some of the disease progression and viral transmission models that have been used to investigate HIV PrEP has been included, as population-level variables such as the frequency and routes of HIV exposure, propensity to modify high-risk behavior, and crucially, patient adherence to PrEP regimens, must be taken into account when modeling HIV PrEP at the population level. The diverse array of administration routes, compounds and dosing regimens presents novel challenges to drug development. In silico modeling and simulation approaches offer powerful tools to inform clinical trials, and allow for rapid investigation of pharmacokinetic and pharmacodynamic questions that arise during the drug development process. Moreover, modeling and simulation approaches provide investigators with the ability to examine scenarios related to changes in transmission, treatment adherence, and sexual behavior that might otherwise be precluded from clinical studies due to practical or ethical concerns. Ultimately, there are still many aspects of the HIV PrEP problem space that have yet to be explored through computational modeling. AUTHOR CONTRIBUTIONS TS drafted the review article. TS, RB, and KK revised and edited the article for clarity and content. FUNDING RB is the recipient of grant funding from the US National Institutes of Health grant 1U19AI120249.
2020-01-31T14:13:56.981Z
2020-01-31T00:00:00.000
{ "year": 2019, "sha1": "3ff64a7077fd6dfb20a85cc9ab6212f2630b0a45", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3389/fphar.2019.01514", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3ff64a7077fd6dfb20a85cc9ab6212f2630b0a45", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2378415
pes2o/s2orc
v3-fos-license
Microbial Colonization of Laparoscopic Gas Delivery Systems: A Qualitative Analysis Objective: Laparoscopic procedures utilize a pneumoperitoneum to distend and separate the abdominal wall from the intra-abdominal structures. Carbon dioxide is commonly used for this purpose, although this study is inclusive of any gas used for abdominal distention. The gas is delivered from cylinders through a gas insufflation delivery system. The purpose of this study is to determine whether laparoscopic gas delivery systems, composed of gas cylinders and insufflators, harbor microbes. Methods: Gas delivery systems were evaluated for the presence of microbial growth using standard techniques. External connection sites, gas cylinders and the internal conduit tubing of insufflators were cultured. Fifty-two (52) insufflators and sixty (60) gas cylinders were evaluated. Results: Twelve (12) of the sixty cylinders (20%) and forty-eight (48) of the fifty-two insufflators (92.3%) were culture positive. The organisms identified are clinically significant and span a varied spectrum. Conclusions: Gas cylinders, insufflation attachments and internal components of insufflators are demonstrated to contain microbes. Reduction of microbial exposure from insufflation apparatus is achieved by cleansing external ports and use of a 0.3 micron filter for abdominal pneumoperitoneum. INTRODUCTION Inorganic particulate contamination by carbon dioxide (CO2) insufflation apparatus and gas delivery systems for laparoscopy was first identified in 1989. 1 Creation and maintenance of a pressurized flow of gas to establish and preserve abdominal wall separation for safe endoscopic observation and manipulation is routinely accomplished with CO2 through a gas delivery system. The gas is produced by vapor pressure changes of liquid CO2 contained in chromium-molybdenum-steel alloy cylinders. A pressure-reducing insufflation (throttling) system delivers the gas to the abdomen. This study examines the CO2 laparoscopic insufflation gas delivery system and qualitatively evaluates it for the presence of bacteria and fungi. MATERIALS AND METHODS Carbon dioxide approved for medical procedures was obtained from commercial medical sources. The gas met US Pharmacopoeia standards 2 and FDA criteria for commercial production and intra-abdominal medical use. The cylinders are made of materials meeting the Department of Transportation standards for hydrostatic pressure and safe intrastate transport. Gas flow from cylinders was directed into sterilized and non-sterile insufflators, with and without 0.1 or 0.3 micron filters. Filter evaluation was performed using a sterile ten-foot section of polyvinyl chloride tubing and a 0.1 or 0.3 micron filter connected to the exit port of the pressure regulator device. Under a laminar flow hood, gas flow was directed into a sterile flask of thioglycolate broth media at flow rates of 500 cc, one liter and three liters per minute, for a total volume of 50 liters. The media was cultured, plated, and evaluated for colonization and species identification by MicroScan panels (Walkaway-40 system with manual backup). Culture media samples were simultaneously plated, incubated and assessed as a control. Various laparoscopy CO2 insufflators (EDER, Olympus, Solos, Storz, Weiss, Wisap, Wolf) were used to deliver CO2 gas. Insufflator history, including extent, type or volume of use, was unknown. Fifty-two individual insufflators and sixty gas cylinders were evaluated.
Different delivery circumstances and equipment sites were evaluated. The following numbers correspond to culture sites and results listed in Table 1: 1) Cylinders connected to insufflators using a sterile high-pressure hose and a sterile ten-foot polyvinyl chloride tube at the outflow port delivering gas into sterile growth media: fifty-two insufflators and 60 gas cylinders, a total of 60 circumstances. 2) Gas delivery as in 1, using a 0.3 micron filter between the growth media and the insufflator, for 60 samples. 3) Gas delivery as in 1, using a 0.1 micron filter, for 60 evaluations. 4) Gas delivery as in 1 through insufflators sterilized by ethylene oxide, for 60 samples. 5) Internal tubing and conduits of the 52 insufflators at two separate sites, for 104 samples. 6) External ports of the insufflator, front and rear, making 104 samples, plus the pin index portion of the 60 gas cylinders, for a total of 164 samples. Therefore, each gas insufflation system had cultures taken at the pin index site, intake port, two separate internal conduit sites and exit ports, across 52 separate insufflators and 60 CO2 cylinders (Figure 1). RESULTS The various circumstances of evaluation showed that fourteen of the sixty (14/60, 23.3%) gas tanks had microbial colonization. External connection sites (52 insufflators and 60 gas cylinders, or 112 sites) showed microbes in 31 instances (27.6%). Gas cylinders had microbial growth in twelve of the sixty cylinders (12/60, 20%). No growth occurred in the sixty evaluations for each of the 0.1 and 0.3 micron filter groups. 1) Gas delivered through standard laparoscopic insufflators using sterile connectors and sterile tubing grew organisms as noted in Table 1, column number 1. This represents growth from both the insufflator delivery system and the cylinder gas supply (14/60, 23.3% growth). 2) Gas delivered through standard laparoscopic insufflators with sterile connectors, sterile tubing and a 0.3 micron filter showed no growth. This represents gas delivered with a 0.3 micron filter before the culture media (0/60, 0% colonization). 3) Gas delivered through standard laparoscopic insufflators with sterile connectors, sterile tubing and a 0.1 micron filter showed no growth. This represents gas delivered with a 0.1 micron filter before the culture media (0/60, 0% colonization). 5) Cultures of the internal mechanisms, tubing and pressure-reducing apparatus grew organisms as noted in Table 1, column number 5. This represents growth of the insufflator internal apparatus only (26/164, 15.9% growth). 6) Cultures of the external fittings for inflow and egress grew organisms as noted in Table 1, column number 6. This represents external surfaces of the insufflator apparatus. Microbial colonization was shown for the inside of gas cylinders, the external connection sites and inside the insufflation apparatus. Microbial contamination at the gas delivery site was eliminated by either a 0.1 or 0.3 micron sterile filter. DISCUSSION It is not surprising that the external surfaces of the insufflation apparatus, pin index system, intake and exit ports showed the presence of microbes (column 6). These surfaces are contacted by many people, inside and outside the operating room, of varying levels of skill and understanding regarding clean and aseptic technique. Using a protective sheath over the pin index portion of the cylinder during handling and transport to the operating room would reduce contamination from handling.
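As an aside on the quantitative results above, the reported proportions can be double-checked with a short script. This is a side computation, not part of the original study; note that the abstract's 92.3% insufflator figure corresponds to 48 of 52 units, and the Wilson intervals below simply quantify the sampling uncertainty around the reported rates.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return p, centre - half, centre + half

for label, k, n in [("culture-positive cylinders", 12, 60),
                    ("culture-positive insufflators", 48, 52),
                    ("positive external connection sites", 31, 112)]:
    p, lo, hi = wilson_ci(k, n)
    print(f"{label}: {k}/{n} = {100 * p:.1f}% (95% CI {100 * lo:.1f}-{100 * hi:.1f}%)")
```

Even the lower confidence bounds stay well above zero, consistent with the paper's conclusion that colonization of the delivery system is routine rather than exceptional.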
Wiping the pin index connecting portion of the stem of the gas cylinder with a germicidal cloth before attachment to the insufflator intake port is recommended to quantitatively reduce contaminants. Microbes can enter the insufflation apparatus, internal tubing and pressure-regulation mechanisms by one of three routes, or combinations of routes: 1) from connecting points (inflow and outflow); 2) from the gas cylinder; or 3) growth of organisms within the insufflator pressure-regulation equipment, arising either (a) from ambient operating room contamination entering during periods of reduced or no pressure in the system, or when the unit is turned off and a negative pressure generated in the insufflator allows backflow intake, or (b) from contamination via backflow of irrigation or body fluids from previous laparoscopic procedures, recognized or unrecognized. 3 The growth requirements of organisms that are human contaminants and pathogens are varied and extensive. They range from narrow, precise and extremely favorable conditions to the ability to grow, or to maintain the capability for growth, at low temperature, at reduced oxygen tension and with few nutrient requirements. Cylinders containing gas for clinical applications are not tested or regulated for any microbe or particulate standard. These unclean, oxidized metal containers are filled under a wide range of varying circumstances and conditions. No regulation addresses inorganic or organic contaminants in gases used for surgery (Table 2). No minimum tolerance level for particulates or microbes exists. These findings demonstrate that microbes are contained within the gas cylinders and apparatus used for laparoscopic gas delivery. This allows transmission of organisms into the abdomen of laparoscopy patients during gas delivery and throughout the procedure. Despite the fact that few "infections" appear to occur due to this circumstance, it is wise and prudent to reduce foreign body and microbial contamination as much as possible during intra-abdominal surgery. Longer, more complex procedures in patients who are stressed or compromised, very young or old, or burdened by concomitant medical complications require caution and prudence to reduce contamination exposure. The handling of gas cylinders and insufflation apparatus by personnel with little or no training and instruction contributes to the contaminated state of the apparatus, especially at points of intake and outflow attachment. Proper training regarding methods of cylinder handling and attachment to insufflation equipment needs to be addressed. Inadequate knowledge and understanding of the hazards of contamination, the consequences of incorrect equipment attachment and improper handling of gas tank connections contribute to microbial contamination within the cylinders and associated laparoscopic gas delivery systems. This is shown by growth of microbes on the intake and egress portions of the insufflators and on the pin index systems of the gas cylinders. These areas should be cleaned by surface decontamination methods when handled and attached to any portion of the gas delivery apparatus. The insufflator's pressure-reducing (throttling) devices are exposed to microbes from multiple gas cylinders, from intake and outflow portions of the insufflator contaminated by improper handling, and through intermittent patency to the ambient operating room environment when not in use and when gas pressures are reduced incorrectly during surgery.
When not in use, the insufflator becomes a growth and culture chamber for the organisms contained within it. The insufflator then becomes a pressurized delivery vehicle of gas contaminants, inorganic debris and microbes from all portions of the gas delivery system into the patient's abdomen. Even clean wounds from carefully performed surgery, when meticulously sampled, are contaminated. Therefore, the control of infection is more a quantitative than a qualitative issue. 4 This concept also holds for laparoscopy. It is difficult to isolate the role of a single pathogenic factor leading to surgical infection. However, the initiating phase of bacterial infection of the surgical wound starts with microbial contamination. Contamination is not preventable even under aseptic conditions. It has been shown that 68% of 350 wounds after clean operations had bacterial growth. 5 The risk of infection primarily depends on the contamination of the wound during the procedure. 6 The presence of foreign material is a major pathogenic factor leading to infection. 7,11 Laparoscopic gas delivery systems contain inorganic debris and foreign bodies. 1 During the latent period, pathogenic factors can be modified to affect the course of tissue events. 12 This is the time when bacteria adhere, propagate and are protected from host defenses and antibiotics. Less than six hours are available between bacterial insult and antibiotic prophylaxis for therapy to be effective. Efficient prophylaxis requires establishing a level of antibiotic concentration exceeding bactericidal resistance in the wound prior to insult, or during the first six hours of surgery, and maintaining these levels for an adequate period of time. Microbial multiplication in a surgical wound containing foreign material, even with a small inoculum, has a latency period followed by active microbial multiplication. 11 The importance of the latency period is that microbial pathogenic effects are potentially reversible during this interval. Antibiotic prophylaxis is expensive and has potential consequences. Quantitative reduction of microbes, particulates and foreign bodies in laparoscopic gas is accomplished by a 0.3 micron filter. Its use also avoids the consequences and side-effects of antibiotics and the development of resistant organisms. Microbial adherence is required for the multiplication and invasion of bacteria that precede wound infection. Antibiotics modify the interaction of microbes with natural and foreign surfaces. Foreign surfaces rapidly become coated with host proteins that facilitate bacterial adhesion. 13 Kinetic studies show that interaction of the organisms with host proteins is rapid and irreversible. There is intense binding of microbes and ligands to fibronectin, fibrinogen, collagen, laminin and other extracellular matrix proteins. 14 Reducing foreign body exposure of the peritoneal cavity from gas cylinder debris by filtration diminishes host protein reaction, with significant impairment of bacterial adhesion, and reduces progression of infection. No attempt was made in this study to relate clinical infection to the observed colonization by the bacterial and yeast species found within the gas delivery system apparatus and components. Use of sterile or filtered materials (i.e., gases, solids, or liquids) when placed in the human body is an accepted standard of medical care.
This study shows that CO2 insufflated into the peritoneal cavity without passing through a 0.3 micron filter contains microbial organisms from the cylinder, the insufflator, or both. The insufflator device and apparatus showed microbial colonization both internally and externally. Reduction of microbial exposure from gas insufflation apparatus is accomplished by a 0.3 micron bacterial filter placed before the gas is delivered into the patient. The efficiency of 0.3 micron gas filtration in providing microbe-free gas is also demonstrated. Further studies are necessary to determine the contribution of bacterial virulence factors at the surgical site, in the presence of foreign bodies (inorganic particulates), to the occurrence of infection, tissue healing and/or adhesion formation after laparoscopic surgeries. CONCLUSIONS Due to longer and more complex laparoscopic procedures being performed on patients compromised by age and surgical disease processes, compounded by pre-existing medical conditions, surgical margins of safety can be reduced. These circumstances, coupled with the qualitative findings of bacterial and fungal colonization within the laparoscopic gas delivery system, are fair warning; they should raise awareness of this issue and warrant methods to decrease or eliminate this exposure. Germicidal cleansing of external port connections and gas filtration to pre-condition all gases prior to intra-abdominal instillation are the preferred methods to reduce microbe exposure from a laparoscopic gas insufflation system.
2014-10-01T00:00:00.000Z
1997-10-01T00:00:00.000
{ "year": 1997, "sha1": "2b5341895aead7917a159a97a00e0e864b07f5fa", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "2b5341895aead7917a159a97a00e0e864b07f5fa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
268273419
pes2o/s2orc
v3-fos-license
Brain tumour microstructure is associated with post-surgical cognition Brain tumour microstructure is potentially predictive of post-treatment changes in the cognitive functions subserved by the functional networks in which tumours are embedded. To test this hypothesis, intra-tumoural microstructure was quantified from diffusion-weighted MRI to identify which tumour subregions (if any) had a greater impact on participants’ cognitive recovery after surgical resection. Additionally, we studied the role of tumour microstructure in the functional interaction between the tumour and the rest of the brain. Sixteen patients (22–56 years, 7 females) with brain tumours located in or near speech-eloquent areas of the brain were included in the analyses. Two different approaches were adopted for tumour segmentation from a multishell diffusion MRI acquisition: the first used a two-dimensional four-group partition of feature space, whilst the second used data-driven clustering with Gaussian mixture modelling. For each approach, we assessed the capability of tumour microstructure to predict participants’ cognitive outcomes after surgery and the strength of association between the BOLD signal of individual tumour subregions and the global BOLD signal. With both methodologies, the volumes of partially overlapping subregions within the tumour significantly predicted cognitive decline in verbal skills after surgery. We also found that these particular subregions were among those that showed greater functional interaction with the unaffected cortex. Our results indicate that tumour microstructure measured by multishell diffusion MRI is associated with cognitive recovery after surgery. Magnetic resonance imaging (MRI) can provide insightful in vivo information on cancer biology relevant to disease diagnosis, prognosis, and response to treatment 2. Regional microstructural features extracted from MRI have been found to be related to clinical outcomes. Within the published literature, intra-tumoural microstructural patterns have been shown to predict survival of glioblastoma patients, with approaches for determining tumour microstructure quantified by histogram-based measures, shape- and volume-based measures and texture analysis 3. Specifically, a higher proportion of decreased isotropic diffusion and increased anisotropic diffusion regions within the tumour, obtained from 2D histogram-based analyses, has been associated with lower rates of survival 4. Similarly, spatial features obtained from specific tumoural subregions are predictive of survival times and survival rates 5,6. In these approaches, spatial features were obtained either from contrast-enhanced T1-weighted, FLAIR, or T2-weighted images, or from diffusion-weighted MRI. Additionally, texture analysis based on MRI discriminates between gliomas and metastases 7,8, between high- and low-grade gliomas 9, and between oedema, tumoural and normal brain areas 10.
Among the variety of MR imaging techniques, diffusion imaging is capable of assessing changes in brain microstructural anisotropy from water diffusion by applying several directional gradients during acquisition. For instance, the relationship between the apparent diffusion coefficient (ADC) and cellularity can be used to detect dense tumours and even probe the efficacy of treatment over time 1. In addition, diffusion tensor imaging (DTI) can predict tumour infiltration and the potential invasive migration of malignant cells 2,11. The neurite orientation dispersion and density imaging (NODDI) technique uses multiple varying diffusion gradient strengths to provide more specific measurements of neurite morphology, such as neurite density and orientation dispersion 12-14. Our previous work indicates that the degree of functional coupling between the residual tumour (i.e., after surgical resection) and healthy functional brain networks is related to cognitive recovery at 12 months follow-up 15. Specifically, patients that showed a greater reduction in glioma-global signal coupling after surgery were more likely to suffer greater cognitive difficulties in the long term. High preoperative neurite density in the margins of the tumour, and also within the default mode network, has also been associated with better memory recovery 16. Together, these findings indicate that tumours may play an active role in brain function, and thus their resection has a direct bearing on cognitive outcomes. However, the role of tumours in the functional brain networks in which they become embedded remains enigmatic.

The rationale of this study is to integrate structural and functional pre-operative imaging of brain tumours to investigate the hypothesis that tumour microstructure impacts cognitive recovery following surgery through the functional embedding of tumours within brain functional networks. To test this hypothesis, we first characterized tumour microstructure by dividing the tumour into different subregions. To assess the robustness of the results, we implemented two different methodological approaches. The first used DTI p- and q-images and a two-dimensional histogram-based method for image segmentation 17 that has previously been successful in partitioning and quantifying heterogeneous tumours 4,18,19. The second was a data-driven approach for intra-tumour segmentation using information from NODDI images in a two-dimensional feature space. Specifically, we used a Gaussian mixture model 20 to characterize feature space density 21-23.

Once the tumour subregions were identified in the pre-operative state, we evaluated which had a greater impact on participants' cognitive recovery after surgery. Finally, we examined the role of tumour microstructure in the functional interactions between the tumour and the unaffected brain in the pre-operative state. Tumour subregions demonstrating the highest functional coupling with the unaffected brain were expected to play a leading role in cognitive recovery after surgery.

The results of the present work are designed to extend our understanding of the factors influencing cognitive outcomes for patients with brain tumours, which could positively impact the onco-functional balance of treatment.

Methodological strategy

Methodological steps are summarised in Fig. 1.
First of all, tumour subregions were identified by analysing the intra-tumoural microstructure, quantified from diffusion-weighted pre-operative MRI scans. We initially pre-processed MRI images and masked the tumour regions (see details in Sects. "MRI data acquisition and pre-processing" and "Tumour masking and image co-registration"). Two different strategies were used for characterizing tumour microstructure, based on: (i) histogram analyses of DTI images (P and Q maps; see Sect. "Segmentation from DTI"); and (ii) general mixture modelling of NODDI neurite density and NODDI orientation dispersion index images (see Sect. "Segmentation from NODDI"). Subsequently, tumour subregions that had a greater impact on participants' cognitive recovery after surgery were identified. To reduce the impact of variability across participants prior to the intervention, cognitive recovery was calculated as the change in cognitive function from the pre-operative to the post-operative period. Three participants lacked neuropsychological assessments in the post-operative period and therefore were not included in these analyses. These analyses are described in Sect. "Associations of cognitive change with DTI-identified tumour subregions" (DTI approach) and in Sect. "Associations of cognitive change with NODDI-identified tumour subregions" (NODDI approach). The level of similarity between the tumour subregions identified by the two methodological approaches was subsequently assessed.

Finally, we studied the role of tumour microstructure in the functional interaction with the unaffected brain during the pre-operative state. We expected that tumour subregions demonstrating the highest functional coupling with the unaffected brain in the pre-operative state would play a key role in cognitive recovery after surgery.

Sample

This prospective cohort and all experiments were approved by the Cambridge Central Research Ethics Committee (reference number 16/EE/0151), in accordance with relevant guidelines and regulations. All patients gave written informed consent before participating in the study.

Patients with a typical appearance of a diffuse glioma were identified at adult neuro-oncology multidisciplinary team (MDT) meetings at Addenbrooke's Hospital (Cambridge, UK). A consultant neurosurgeon directly involved in the study identified potential participants based on the outcome of the MDT discussion. Eighteen patients aged 22-56 years (seven females) were finally approached to take part in the study. Inclusion criteria were: (i) participant was willing and able to give informed consent; (ii) imaging was evaluated and judged to have typical appearances of a diffuse non-enhancing glioma; (iii) Stealth MRI was obtained (a routine neuronavigation MRI scan performed prior to surgery); (iv) World Health Organisation (WHO) performance status was 0 or 1; (v) age was between 18 and 80 years; (vi) tumour was located in or near speech-eloquent areas of the brain, i.e., regions that might be critical for speech comprehension and articulation; and (vii) patients could undergo awake surgical resection of a diffuse glioma. Participants were excluded if any of the following applied: (i) concomitant anti-cancer therapy; (ii) history of previous malignancy (except for adequately treated basal and squamous cell carcinoma or carcinoma in situ of the skin) within 5 years; and (iii) previous severe head injury. Contrast enhancement and oedema (defined as vasogenic oedema/hypointensity on FLAIR) precluded enrolment.
One participant withdrew due to not being able to tolerate the MRI environment. Since one of the main objectives of the present work was to measure intra-tumoural microstructure, data from a participant with a recurrent tumour were also excluded. Thus, data from sixteen participants were included in the analyses (see Table 1 for demographics, tumour/treatment information and histological assessment of the tumours).

Each participant was scanned up to four times: before surgery (pre-operative), within 72 h after surgery, and at 3 and 12 months after surgery. Only pre-operative scans were analysed in this study. The fMRI images from these participants were previously analysed 15,24.

Neuropsychological assessment

Participants were given a neuropsychological assessment two weeks before surgery (pre-operative) and between two and five weeks after surgery (post-operative). In two cases, the post-operative neuropsychological assessment was performed around six months after surgery (see Table S2). Testing was administered by a neuropsychologist in a clinical setting and took approximately 2-3 h to complete (more information about the tests can be found in the SI). The cognitive domains assessed in this study were: memory (verbal and nonverbal), verbal skills, nonverbal skills, attention, and executive function. Item-level details can be found in Table S1.

MRI data acquisition and pre-processing

Multishell diffusion-sensitive data were acquired with the following parameters: TR = 8200 ms, TE = 95 ms, 2.5 mm³ resolution, 60 slices, FOV = 240 × 240 mm², acquisition of 30 directions with b = 800 s/mm², 60 directions with b = 2000 s/mm², and 10 unweighted B0 images. DTI maps were obtained using the diffusion toolbox of FSL (fsl.ox.ac.uk). Corrections for B0 field inhomogeneity, Gibbs artifacts, and eddy-current distortions were applied using MRtrix v3 (https://www.mrtrix.org/).

The NODDI Matlab Toolbox (mig.cs.ucl.ac.uk/index.php?n=Tutorial.NODDImatlab) used the diffusion imaging data for quantification of the in vivo microstructural complexity of dendrites and axons 12. The NODDI multi-compartment tissue model extracts two key contributing factors to fractional anisotropy: the Gaussian contribution from water molecules located in the extracellular space, and the restricted non-Gaussian diffusion that takes place in the intracellular space. The apparent intracellular volume fraction was used in this work as a measure of neurite density (ND). The orientation dispersion index (ODI), a measure of the orientation coherence of neurites, was also calculated.

Resting-state (eyes closed) fMRI was acquired with a BOLD-sensitive sequence: TR = 1060 ms, TE = 30 ms, acceleration factor = 4, FA = 74°, 2 mm³ resolution, FOV = 192 × 192 mm². Pre-processing of fMRI images included slice timing correction, bias field correction, rigid body motion correction, normalisation by a single scaling factor, and smoothing to 5 mm full-width half-maximum. Independent component analysis (ICA) was performed with FSL MELODIC, after which noise components were identified and removed using ICA-FIX 25 with training specific to this dataset 26. Wavelet filtering was used to retain the BOLD oscillations in the physiologically relevant frequency range 0.03-0.12 Hz 27.
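As a rough illustration of this last step, the sketch below retains the 0.03-0.12 Hz band at TR = 1060 ms with a zero-phase Butterworth band-pass. This is a simplified stand-in for the wavelet filtering actually used in the study, and all variable names are illustrative rather than taken from the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

TR = 1.060              # repetition time in seconds (from the acquisition above)
FS = 1.0 / TR           # sampling frequency, ~0.94 Hz
LOW, HIGH = 0.03, 0.12  # physiologically relevant BOLD band (Hz)

def bandpass_bold(ts, fs=FS, low=LOW, high=HIGH, order=3):
    """Zero-phase Butterworth band-pass along the time axis;
    a simple stand-in for the wavelet filtering used in the paper."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, ts, axis=-1)

# Example with placeholder data: (voxels x timepoints)
bold = np.random.randn(100, 300)
bold_filtered = bandpass_bold(bold)
```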
Tumour masking and image co-registration

Probabilistic masks of each pre-operative tumour were generated using a semi-automated procedure and the anatomical MRI (a detailed description can be found in the SI). All tumour masks were binarized by applying a threshold of 0.5 to the probabilistic mask.

Intra-tumour segmentation

We quantified tumour microstructure to capture intra-tumoural characteristics. Two different approaches were carried out independently using the diffusion images: one based on DTI images, and the other based on NODDI neurite density and NODDI orientation dispersion index images. Since only pre-operative scans were used in the analyses and vasogenic oedema was an exclusion criterion, the presence of oedema was limited.

Segmentation from DTI

For each participant, DTI-p (isotropic) and DTI-q (anisotropic) components were initially calculated 28 at each voxel within the tumour mask, and a feature space using the p- and q-values was generated. Each value was normalized by dividing it by the mean value in the contralateral tumour region, such that values approaching 1 represent diffusion patterns similar to unaffected tissue. The joint histogram of p and q values was created, with each quadrant around the [1, 1] origin denoting a partition of the feature space, as previously described 4 (see Fig. S1A for a flowchart of the analysis):

Group I: decreased DTI-p/decreased DTI-q; p < 1, q < 1.
Group II: decreased DTI-p/increased DTI-q; p < 1, q > 1.
Group III: increased DTI-p/increased DTI-q; p > 1, q > 1.
Group IV: increased DTI-p/decreased DTI-q; p > 1, q < 1.

Finally, the partition labels of each point in the feature space were translated back to the voxels of the DTI space, generating four tumour subregions. A binary mask was created for each tumour subregion.
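A minimal sketch of this quadrant labelling follows, assuming `p_map` and `q_map` are voxelwise DTI-p/DTI-q arrays and `tumour_mask`/`contra_mask` are boolean masks for the tumour and its mirrored contralateral region (all names are illustrative, not the authors' code).

```python
import numpy as np

def pq_quadrant_labels(p_map, q_map, tumour_mask, contra_mask):
    """Label tumour voxels into the four p-q quadrants around [1, 1].

    Returns an integer map: 1..4 for groups I..IV, 0 outside the tumour.
    The paper states strict inequalities; ties at exactly 1 are
    negligible in floating-point data and are folded into the
    'increased' side here.
    """
    # Normalize so that values near 1 resemble unaffected tissue
    p = p_map / p_map[contra_mask].mean()
    q = q_map / q_map[contra_mask].mean()

    labels = np.zeros(p_map.shape, dtype=np.int8)
    labels[tumour_mask & (p < 1) & (q < 1)] = 1    # Group I
    labels[tumour_mask & (p < 1) & (q >= 1)] = 2   # Group II
    labels[tumour_mask & (p >= 1) & (q >= 1)] = 3  # Group III
    labels[tumour_mask & (p >= 1) & (q < 1)] = 4   # Group IV
    return labels
```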
Segmentation from NODDI

Neurite density and orientation dispersion index images derived from the NODDI modelling were used to perform a feature space analysis (see Fig. S1B for a flowchart of the analysis). In the same way as in the DTI analysis, for both the ND and ODI images, voxels within the tumour were selected by applying the binarized mask and normalised by the mean values in the contralateral tumour region. The feature space of ND and ODI values was then created from the normalised values.

Only 4 participants had values of ND > 1 within the tumoural region. Therefore, a quadrant division around the [1, 1] origin to define tumour partitions was inappropriate. Instead, an unsupervised data-driven approach for intra-tumour partitioning was adopted. Specifically, Gaussian mixture modelling (GMM) (covariance type: full; shared covariance: true; 5 replicates) was applied to the joint 2D ND-ODI histogram to generate a data-driven partition of the feature space. The parameters of the model were estimated by the expectation-maximization (EM) algorithm. Outliers were removed before applying the GMM 29 using Matlab. The number of partitions in the GMM (range: 1-7) was set to that maximizing the silhouette index. One half of the participants had two partitions, with the remainder having three partitions. Unlike the p-q approach, which forces a 4-class solution, tumours did not present with the same number of partitions, nor were the partitions in similar locations of the feature space. To identify common partitions across participants, a matching procedure was required to find those partitions among all the tumours that had the minimum Euclidean distance between the centroids of their distributions. To ensure a good match, an additional requirement of a Euclidean distance between centroids < 0.3 was imposed. This step was performed in an iterative manner, matching all the partitions across all tumours. No satisfactory match was found for two partitions of two different tumours, and these two partitions were therefore excluded from further analysis. In total, five partitions were identified across tumours, denoted C1, C2, C3, C4 and C5.

Finally, partition information from the feature space was translated into the NODDI space, generating tumour subregions for each participant. A binary mask was created for each tumour subregion.
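The study fitted the GMM in Matlab; a rough Python analogue of the model-order selection by silhouette index is sketched below (illustrative only; `features` is a hypothetical (n_voxels, 2) array of normalised [ND, ODI] values for one tumour). Note that the silhouette index requires at least two clusters, so the k = 1 case of the paper's 1-7 range is handled as a fallback.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

def fit_feature_space_gmm(features, k_max=7, seed=0):
    """Select the number of GMM components by silhouette index.

    'tied' covariance is sklearn's analogue of Matlab's
    full + shared covariance setting; n_init=5 mirrors the 5 replicates.
    """
    best = (1, -np.inf, None)  # fallback: a single component
    for k in range(2, k_max + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="tied",
                              n_init=5, random_state=seed).fit(features)
        labels = gmm.predict(features)
        if len(np.unique(labels)) < 2:
            continue  # degenerate fit; silhouette undefined
        score = silhouette_score(features, labels)
        if score > best[1]:
            best = (k, score, gmm)
    return best  # (n_components, silhouette score, fitted model)
```

Centroid matching across participants could then proceed by computing pairwise Euclidean distances between the fitted `gmm.means_`, accepting matches below the 0.3 threshold described above.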
Tumour microstructure and patients' cognition

To mitigate the potential bias arising from relying solely on post-operative cognitive state scores, and to account for baseline variations, we addressed individual differences in cognitive performance by subtracting the post-operative z-score from the pre-operative z-score for each cognitive domain (attention, non-verbal skills, memory, verbal skills and executive function). In this way, the longitudinal trajectory of the assessments (Δ) was tracked, where positive scores represent a deterioration in performance relative to the pre-operative state (i.e., cognitive decline). Mean overall cognitive change was calculated for each participant as the average of the z-scores obtained across all cognitive domains.

Associations of cognitive change with DTI-identified tumour subregions

Linear regression analyses were carried out to test the association between mean overall cognitive decline and the percentage occupancy of the four tumour subregions identified within the tumour. Separate analyses were performed for each of the p-q subregions (I, II, III, IV), with the mean overall cognitive decline as the dependent variable and the percentage occupancy of the tumour by the evaluated subregion as the independent variable. Tumour volume was included as a covariate.

If the percentage occupancy of one tumour subregion was a significant predictor of mean overall cognitive decline, we then conducted univariate regression analyses for each cognitive domain. This allowed identification of the cognitive domain with the greatest effect size associated with that specific subregion.

We also investigated whether the percentage occupancy of a specific subregion changed significantly with the presence (or not) of an IDH mutation, using a Mann-Whitney test.

Associations of cognitive change with NODDI-identified tumour subregions

As tumours did not have the same number or type of partitions, we determined what effect the presence of a certain subregion had on cognitive decline. Regression analysis including the percentage occupancy of a specific subregion was not appropriate in this case, since some of the partitions were present in only a reduced number of tumours (i.e., C5 in 2 tumours, and C3 and C4 in 5 tumours). For this reason, the association between cognitive change and tumour microstructure was tested, for each of the subregions (C1, C2, C3, C4 and C5), by comparing the cognitive change in the group of tumours that presented with a certain subregion against the cognitive change of those participants in which it was not present. Non-parametric Mann-Whitney tests were used for these analyses. We also investigated whether the presence of a specific subregion was significantly associated with the presence (or not) of an IDH mutation (Pearson's chi-squared test). In both the DTI and NODDI approaches, all p-values were corrected with the Benjamini-Hochberg false discovery rate (FDR < 0.05) to reduce the likelihood of false positives.

Given our reduced sample size, we avoided including too many covariates in our models to prevent overfitting. However, variables such as years of education, age, pre- and post-surgical tumour/lesion volume, tumour grade, pre-surgical hippocampal volume or pre-surgical hippocampal activity may be associated with cognitive decline. To assess whether any of these variables were potential predictors of cognitive decline, we calculated the Pearson correlation coefficient between these variables and mean overall cognitive decline scores.

Spatial similarity between DTI- and NODDI-identified subregions

To assess the spatial similarity of the NODDI and p-q subregions that were significantly associated with cognitive change, the Sørensen-Dice similarity coefficient and the percentage occupancy (%Occ) were calculated for each NODDI-identified subregion within the specific p-q subregion (and vice versa). %Occ quantifies the extent to which one specific subregion occupies or overlaps with another: it was computed by taking the number of overlapping voxels between two specified subregions and expressing it as a percentage of the total voxels in the subregion of interest. This measurement provides insight into the spatial relationship and extension of one subregion in relation to another, and was calculated as a complementary measure to the Dice coefficient to provide a more comprehensive evaluation of the spatial relationship between two specified subregions; a minimal sketch of both metrics follows below.
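A minimal sketch of the two overlap metrics, assuming `mask_a` and `mask_b` are boolean subregion masks of equal shape (illustrative names):

```python
import numpy as np

def dice_and_occupancy(mask_a, mask_b):
    """Sørensen-Dice coefficient between two binary masks, and the
    percentage of mask_a's voxels (the subregion of interest) that
    are shared with mask_b (%Occ as defined above)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    overlap = np.logical_and(a, b).sum()
    dice = 2.0 * overlap / (a.sum() + b.sum())
    pct_occ = 100.0 * overlap / a.sum()
    return dice, pct_occ
```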
Time-series analyses

To understand the relationship between tumour microstructure and functional interactions, the functional coupling between the BOLD time-series of each tumour subregion and the time-series from the rest of the brain was calculated. First, the binary masks corresponding to each tumour subregion were linearly transformed from DTI space to T1-weighted anatomical space, and then from this space to fMRI space, using Advanced Normalization Tools (ANTs). BOLD signals were then extracted and averaged across voxels for: (1) each of the tumour subregions; (2) the tissue contralateral to the tumour; and (3) all grey matter (GM) excluding the whole tumour. The average BOLD signal extracted from healthy GM constituted the global signal (GS).

Time-series analysis was performed separately for DTI- and NODDI-identified tumour subregions. Outlying images based on framewise displacement were removed (see detailed description in the SI). The functional coupling between the time-series of the subregions and the GS, and the association between the time-series of the contralateral tumour region and the GS, were measured. This association, β, was calculated as the slope relating two BOLD time-series in a linear regression analysis, using two different approaches: (i) a per-voxel functional analysis that calculated β between the GS and each voxel within the tumour; and (ii) a subregion-wise analysis that averaged the time-series within each tumour subregion and then calculated β between the GS and the averaged BOLD signal. Subsequent analyses were performed with the (ii) subregion-wise approach. Non-significant β values (p-FDR corrected > 0.05) were found in eight cases for DTI-derived tumour subregions and in three cases for NODDI-derived tumour subregions. In these cases, β association values were set to 0, since there was no evidence that the β value was significantly different from 0. The β values corresponding to the association between the time-series of the contralateral tumour region and the GS were considered an average measure of functional coupling between healthy tissue and the GS. In this case, the GS was calculated excluding the voxels of the tumour as well as those of the contralateral tumour region. Non-parametric Friedman tests were used to compare β values across the different p-q subregions and the contralateral tumour region. Post-hoc analyses were performed with non-parametric Wilcoxon signed-rank tests, and FDR correction was applied to the post-hoc comparisons. A Friedman comparison was not carried out using β values of NODDI-derived subregions because not all subregions were present in all tumours, and because of the low sample size of subregions C3 (n = 5), C4 (n = 4) and C5 (n = 2).
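A sketch of the subregion-wise coupling measure, assuming `region_ts` and `global_ts` are 1D arrays over the same timepoints and treating β as the slope of the subregion signal regressed on the GS (the regression direction is an assumption; the text does not state it explicitly). For brevity, a single-test threshold stands in for the FDR-corrected significance screen:

```python
import numpy as np
from scipy import stats

def coupling_beta(region_ts, global_ts, alpha=0.05):
    """Slope (beta) of a linear regression between a subregion's mean
    BOLD time-series and the global signal; a beta whose regression is
    non-significant is set to 0, as described above."""
    res = stats.linregress(global_ts, region_ts)
    return res.slope if res.pvalue < alpha else 0.0
```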
Results

Intra-tumour segmentation

Segmentation from DTI

The percentage of each subregion's occupancy within the tumour (i.e., the percentage that a subregion occupies in relation to the total number of voxels of the tumour) was averaged across participants (see Table 2, DTI approach). Replicating the results of Li et al. 4, group IV (p > 1, q < 1) occupied the largest proportion of the tumour, followed by group III (p > 1, q > 1). An illustration of the spatial distribution of tumour subregions in two different participants is shown in Fig. 2.

Segmentation from NODDI

As a result of the data-driven segmentation procedure, not all subregions derived from NODDI modelling were present in all tumours. Table 2 (NODDI approach) presents the percentage of tumours that had specific subregions identified from NODDI modelling: C1, C2, C3, C4 or C5. Less than 13% of the tumours had subregion C5, C3 and C4 were present in 31% of the tumours, and most of the tumours had subregions C1 (87.5%) and C2 (75%). Where present, the mean occupancy of each subregion within the tumour was also calculated; C4 and C5 were associated with the highest occupancy rates. An illustration of the different spatial distributions of subregions within the tumour in two different participants is displayed in Fig. 2.

Table 2. DTI approach: mean percentage occupancy of each DTI-derived subregion within the tumour (DTI: diffusion tensor imaging; p: isotropic component; q: anisotropic component; SD: standard deviation). NODDI approach: characteristics of each subregion (C1, C2, C3, C4 and C5) defined from the NODDI joint histogram analysis: mean ND and ODI values of each group; percentage of patients that had each group within the tumour; and mean percentage occupancy of each subregion within the tumour, including only the participants that had that subregion (NODDI: neurite orientation dispersion and density imaging; ND: neurite density; ODI: orientation dispersion index; SD: standard deviation).

DTI joint histogram subregion | Mean (%) ± SD
Group I (p < 1, q < 1)  | 11.2 ± 10.9
Group II (p < 1, q > 1) | 4.3 ± 3.3
Group III (p > 1, q > 1) | 27.9 ± 11.5
Group IV (p > 1, q < 1) | 56.6

Associations of cognitive change with DTI

To assess the relationship between tumour microstructure and cognition, we first evaluated whether the percentage occupancy of a specific subregion within the tumour was associated with the participant's mean overall cognitive change after surgery (Fig. 3A). We found that mean overall cognitive change was significantly predicted by the proportion of group III voxels present within the tumoural region. The results of the regression indicated that the model explained 60% of the variance (R² = 0.6041, F(2,10) = 7.63, p-FDR = 0.039) and that higher proportions of group III within the tumour contributed significantly to greater overall cognitive decline (B = 0.022, p = 0.004). Tumour volume was not a significant predictor (p = 0.970) of overall cognitive change. Therefore, the proportion of group III within the tumour predicted pre- to post-surgical change in cognitive status. The models that included the percentage occupancy of groups I, II and IV did not contribute significantly to overall cognitive change (group I: F(2,10) = 1.02, p-FDR = 0.526; group II: F(2,10) = 0.54, p-FDR = 0.600; group IV: F(2,10) = 3.15, p-FDR = 0.173).

Since the extent of group III was associated with mean cognitive change, we investigated whether this predictive capability was related to a specific cognitive domain; namely, attention, non-verbal skills, memory, verbal skills or executive function. We found that the proportion of group III within the tumour was a significant predictor of a change in verbal skills. The results of the regression indicated that the model explained 62% of the variance (R² = 0.620, η² = 0.619, F(2,10) = 8.15, p-FDR = 0.04) and that higher proportions of group III within the tumour were associated with greater verbal skills decline (B = 0.037, p = 0.002). Decline in other cognitive domains was not predicted by the percentage occupancy of the group III subregion (Fig. 3B).
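A sketch of the reported regression model, using placeholder data (the study's values are not reproduced here); with 13 participants and two predictors, the residual degrees of freedom match the reported F(2,10). `statsmodels` is one convenient tool, and all column names are illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 13  # participants with post-operative assessments

# Placeholder data standing in for the measured values
df = pd.DataFrame({
    "occ_group3": rng.uniform(5, 50, n),    # % occupancy of group III
    "tumour_vol": rng.uniform(10, 100, n),  # covariate
})
df["cog_decline"] = 0.02 * df["occ_group3"] + rng.normal(0.0, 0.3, n)

# Mean overall cognitive decline ~ group III occupancy + tumour volume
model = smf.ols("cog_decline ~ occ_group3 + tumour_vol", data=df).fit()
print(model.rsquared, model.fvalue)            # R^2 and F(2, 10)
print(model.params["occ_group3"], model.pvalues["occ_group3"])
```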
The percentage occupancy of group III did not change significantly with the presence (or not) of an IDH mutation (Mann-Whitney, z = −1.64, p = 0.100).

Associations of cognitive change with NODDI

Since not all subregions derived from NODDI modelling were present in all tumours, we assessed whether cognitive change in each domain was significantly different for patients that did or did not have a specific subregion (Fig. 4A). Mean change in overall cognition was not associated with the presence of any subregion. Only subregion C4 had an uncorrected p-value of 0.03, which did not survive FDR correction.

Focusing on the cognitive domain where the extent of the p-q groups had previously shown significant associations, we next assessed whether verbal skills changed significantly in the presence of specific NODDI subregions (Fig. 4B). In this case, participants that had the subregion C4 within the tumour showed a significant decline in verbal skills compared to those participants without that subregion (η² = 0.529, z = −2.62, p-FDR = 0.044). Tumour volume did not vary significantly between participants who had or did not have the subregion C4 (z = 0.309, p = 0.758). We also determined that the presence of the C4 subregion was not significantly associated with the presence of an IDH mutation (Pearson's chi-squared, χ² = 0.428, p = 0.513).

To assess whether other demographic or clinical characteristics were also potential predictors of cognitive decline after surgery, the Pearson correlation coefficient between these characteristics and mean overall cognitive decline was calculated. No significant correlations were found between potential predictors and mean overall cognitive decline (see Table 3), indicating that these variables did not exhibit the predictive power observed for tumour microstructure.

Spatial similarity between p-q and NODDI tumoural subregions

The spatial similarity between the p-q- and NODDI-derived subregions that had shown significant associations with cognitive change was tested (Table 4). For this purpose: (1) NODDI-derived subregions were spatially compared to the group III subregion from the p-q analysis; the subregions that had the greatest spatial similarity with group III were C4 and C1 (based on Dice and percentage occupancy); (2) p-q subregions were spatially compared to subregion C4; the subregions that showed the greatest spatial similarity with C4 were group IV and group III.

Time-series analyses of BOLD signal

The degree of functional coupling between the p-q tumour subregions and the GS, and between the region contralateral to the tumour and the GS, is represented in Fig. 5A. Statistical analyses revealed that the region contralateral to the tumour had a higher functional coupling with the GS in comparison with the tumour subregions (Friedman test: χ²(4) = 39.433, p < 0.001; post-hoc comparisons based on non-parametric Wilcoxon signed-rank tests after FDR correction: p(cont-GI) = 0.0033, p(cont-GII) = 0.0033, p(cont-GIII) = 0.0050, p(cont-GIV) = 0.0033). Among the p-q tumour subregions, BOLD signals derived from the group III and group IV subregions had significantly higher β values with the GS than those from groups I and II (p(GI-GIII) = 0.0063, p(GI-GIV) = 0.0063, p(GII-GIII) = 0.0063, p(GII-GIV) = 0.0063). The remaining comparisons were non-significant (p(GI-GII) = 0.530, p(GIII-GIV) = 0.2756).
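The omnibus-plus-post-hoc procedure just described can be sketched as follows, assuming `betas` is a hypothetical (participants × conditions) array whose columns are, e.g., [contralateral, group I, II, III, IV]:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

def compare_beta_values(betas):
    """Friedman test across conditions followed by FDR-corrected
    Wilcoxon signed-rank post-hocs, as in the p-q subregion analysis.
    """
    stat, p = friedmanchisquare(*betas.T)   # one sample per condition
    pairs, raw_p = [], []
    n_cond = betas.shape[1]
    for i in range(n_cond):
        for j in range(i + 1, n_cond):
            pairs.append((i, j))
            raw_p.append(wilcoxon(betas[:, i], betas[:, j]).pvalue)
    # Benjamini-Hochberg correction of the post-hoc p-values
    reject, p_fdr, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
    return stat, p, dict(zip(pairs, p_fdr))
```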
The degree of functional coupling between the NODDI-derived subregions and the GS, and between the region contralateral to the tumour and the GS, is represented in Fig. 5B. The region contralateral to the tumour showed a higher functional coupling with the GS. Among the subregions, C5 had the greatest functional coupling with the GS (median = 0.587), followed by the C4 and C1 subregions (median = 0.490 and median = 0.473, respectively). No statistical comparison of functional coupling was performed across the subregions due to the low number of tumours with subregions C3 (n = 5), C4 (n = 4) and C5 (n = 2).

Overall, these results reveal that group III subregions showed an association with cognitive decline and a correlation with the GS. The spatial representation of these subregions (Fig. 6, left brains) suggests that they are primarily located in tumour margins where the voxel-wise coupling with the GS was high (Fig. 6, right brains).

Discussion

The present study combines structural and functional imaging of tumours in the pre-operative state and hypothesizes that microstructure impacts cognitive recovery after surgery through the functional embedding of tumours within brain functional networks. To test this hypothesis, we first characterized tumour microstructure by dividing the tumour into different subregions using two different methodological approaches. Once the tumour subregions were identified in the pre-operative state, we evaluated which had a greater impact on participants' cognitive recovery after surgery. Finally, we examined the role of tumour microstructure in the functional interaction between the tumour and the unaffected brain in the pre-operative state. Our results support the notion that tumour subregions demonstrating the highest functional coupling with the GS play a crucial role in cognitive recovery after surgery. Specifically, higher proportions of group III (p > 1, q > 1) within the tumour were associated with worse cognitive outcome after surgical intervention. The extent of group III subregions predicted mean overall cognitive change as well as deficits in verbal skills after surgery. It is worth noting that one of the inclusion criteria in the study was that the "tumour was located in or near speech-eloquent areas of the brain, i.e., regions that might be critical for speech comprehension and articulation". Therefore, in terms of clinical treatment, the extent of the group III subregion is potentially capable of predicting recovery in this cognitive domain (i.e., verbal skills), which was compromised to some extent in all the participants. In addition, sensitivity analyses showed that other potential predictors of cognitive decline, such as years of education, age, tumour volume (pre- and post-surgery), tumour grade, pre-operative hippocampal activity and pre-operative hippocampal volume, were not significantly associated with post-operative deficits in mean overall cognition.
Similar to what has been reported for glioblastomas 4, we found that joint histogram analysis using DTI-p and -q values provided meaningful insights into tumour microstructure in both low- and high-grade gliomas. Following the nomenclature of the original work, the group III subregion was defined as the region that presented increased DTI-p and increased DTI-q values when compared to the mean values of contralateral healthy tissue. Increased DTI-p values are thought to reflect lower cell density (i.e., more intercellular space), resulting in more isotropic diffusion. Alternatively, increased DTI-q reflects high anisotropy that might be due to the presence of intact fibres that facilitate tumour migration. An alternative explanation would be that an increased q represents compressed white matter tracts, increasing anisotropic diffusion by reducing water diffusion perpendicular to the fibres. As a result of the increased fibre density, these white matter regions are particularly at risk of damage from surgery 4. We also found that group III was one of the tumour subregions showing the greatest functional coupling with the GS, indicative of an increased interaction between this tumour subregion and the unaffected brain. This suggests that higher proportions of group III within the tumour have a greater negative impact on participants' cognitive recovery after surgery, as tumour resection may remove a larger proportion of tissue that is viable (in terms of functional coupling) from the tumour. In addition, it is worth noting that the group III subregion was primarily located within the margins of the tumour in all participants (see Fig. 6, the group III partition displayed in dark blue). Altogether, decline in verbal skills might be attributed to surgical resection removing functional neurites in the tumour periphery.

Our results align with our prior research in overlapping cohorts 15,24, indicating that participants' cognitive decline after surgery was associated with a greater reduction (pre- vs post-surgery) in functional coupling between the BOLD signal within the tumour cavity and the GS 15,24. Expanding on these findings, we initially identified that tumour tissue with distinct microstructural features exhibits varied GS coupling in the pre-operative state. Furthermore, our investigation revealed a specific tumour tissue, group III, that is uniquely associated with cognitive decline. This sheds new light on the challenges of understanding and treating eloquent areas of tumours, offering insights into how tumours interact with the neuronal microenvironment, a research question with major clinical relevance 30. The group III partition may either be more eloquent itself or have a role in the plasticity of the functional network in which it is embedded.
The data-driven approach using measures derived from NODDI images showed that the presence of the C4 subregion was significantly associated with verbal skills decline after surgery. In this case, the C4 partition presents with low neurite density and high orientation coherence of neurites, resembling the characteristics of the group III subregion. However, the two subregions showed dissimilarities in their extension within the tumour: the mean percentage occupancy of the C4 partition (when present) was 74.6% (SD = 7.2), whereas the mean percentage occupancy of group III was 27.9% (SD = 11.5). These differences were also reflected in the Dice similarity index, which had a mean of 0.49 (SD = 0.18) across participants. Interestingly, the effect sizes in the prediction of participants' cognitive recovery in verbal skills were similar for both methodological approaches (p-q: η² = 0.619; NODDI: η² = 0.529). Therefore, the two approaches lead to the general conclusion that tumour microstructure has an impact on cognitive recovery after surgery.

Summarizing, our results indicate that tumour microstructure is associated with cognitive recovery after surgery as well as with the extent of functional coupling with the unaffected brain. This work extends our understanding of the factors that determine cognitive outcomes for patients with brain tumours, which could positively impact the onco-functional balance when considering treatment options.

Limitations

Accurately estimating the sample size for a study marked by heterogeneous conditions and complex interventions poses a significant challenge. The absence of prior (NODDI) MRI studies utilizing a similar design hinders the estimation of effect sizes required for a precise sample size calculation. Contrary to conventional methodologies, our investigation is structured to study the association between pre-surgical brain alterations, identified through MRI, and the cognitive changes induced by surgery. We anticipated that our patient-focused interventional approach would yield substantial effect sizes, enabling the detection of significant associations even within our limited sample size, and perhaps even on an individual basis. We found that some of the brain-cognition associations were indeed of considerable strength (F-value > 7). It is also worth noting that we are unable to provide histological validation as a ground truth of tumour microstructure. The differences found between the two approaches may be due to methodological artifacts (e.g., MRI sequence, modelling error, definition or number of partitions) or to the fact that boundaries between tumour subregions are not distinct. In addition, the tumour subregions found in this work may represent either different tissue types or different progression stages of the subregions towards higher grades. On the other hand, we anticipated that different tumour subregions would show different coupling with the GS. In the case of necrotic tissue with no vascular circulation, we expected a reduced BOLD signal dominated by noise, which would lead to low functional coupling with the GS. However, this could not be formally assessed due to the lack of histological data. Future studies with histological validation and larger samples of patients should help clarify this issue.
It should also be noted that the post-operative period (2-5 weeks), in which most of the participants underwent their neuropsychological assessment, might comprise transient surgical effects. Therefore, the present results should be validated over an extended period of cognitive follow-up.

Regarding the NODDI methodological approach, the implemented matching procedure set a threshold of Euclidean distance between centroids below 0.3 for matching tumour subregions across participants. This threshold was chosen as a trade-off between the number of distinct tumour subregions and their spatial overlap. However, the effect of changing this threshold should be studied in future work.

Figure 1. Flowchart of data processing and analysis. Two methodological approaches (DTI, top; NODDI, bottom) were used to identify tumour subregions according to their microstructure. Segmented tumours were used to separately analyse the correlation between the BOLD signal within each tumour subregion and the global signal from the healthy brain. Additionally, the presence of tumour subregions was also correlated with cognitive decline after surgery.

Figure 2. Illustration of the spatial distribution of the different voxel groups within the tumour in two different participants (first row: participant 1 (S1); second row: participant 2 (S2)). (Left) T1-weighted image. (Middle) p-q-derived tumour subregions: voxel group I is displayed in red; group II: dark blue; group III: light blue; group IV: yellow. (Right) NODDI-derived tumour subregions: C1 is displayed in light blue; C2: yellow; C3: red; C4: green.

Figure 3. (A) Association of mean cognitive decline after surgery with the percentage occupancy of each p-q-derived subregion (groups I, II, III and IV). (B) Association of mean cognitive change after surgery with the percentage occupancy of the p-q-derived group III subregion for each of the 5 cognitive domains (attention, non-verbal skills, memory, verbal skills and executive function).

Figure 4. Distribution of decline in mean cognition (A) and verbal skills (B) across participants (represented by individual dots) depending on the presence or not of each NODDI-derived subregion (C1, C2, C3, C4 and C5).

Figure 5. Distribution of β association values across participants (represented by individual dots) between: (1) the BOLD signal derived from grey matter outside the tumour (GS), and (2) the BOLD signal derived from the tumour subregions and from the region contralateral to the tumour (tumour (contra)). In the latter case, β values were calculated using a GS estimation that excluded the corresponding voxels contralateral to the tumour (to avoid overlap between the independent and dependent variables). White dots represent the median of the data in each case; the thick grey line indicates the interquartile range. (A) DTI approach with tumour subregions: groups I, II, III and IV. * indicates significant differences, p-FDR < 0.01. (B) NODDI approach with tumour subregions: C1, C2, C3, C4 and C5.

Figure 6. Distribution of group III subregions (left brains) and per-voxel functional coupling with the GS (right brains). Participant numbers in the figure match those presented in Table 1.

Table 1. Demographic and pathological information of participants included in the study.
Table 3. Pearson correlation coefficients (r) and their associated p-values (p) between participants' characteristics (years of education, age, pre- and post-surgical tumour volume, tumour grade, pre-surgery hippocampal volume and pre-surgery hippocampal activity) and mean overall cognitive decline values.

Table 4. (Upper rows) Sørensen-Dice similarity coefficient and percentage occupancy (%Occ) of each NODDI-derived subregion in the group III subregion. (Lower rows) Sørensen-Dice similarity coefficient and percentage occupancy (%Occ) of each p-q-derived subregion with the NODDI C4 subregion. Mean values and standard deviations (in parentheses) across the sample.
Quantum quench in 1D: Coherent inhomogeneity amplification and 'supersolitons'

We study a quantum quench in a 1D system possessing Luttinger liquid (LL) and Mott insulating ground states before and after the quench, respectively. We show that the quench induces power law amplification in time of any particle density inhomogeneity in the initial LL ground state. The scaling exponent is set by the fractionalization of the LL quasiparticle number relative to the insulator. As an illustration, we consider the traveling density waves launched from an initial localized density bump. While these waves exhibit a particular rigid shape, their amplitudes grow without bound.

Boris L. Altshuler, Physics Department, Columbia University, New York, NY 10027, USA (Dated: June 2, 2010)

The shattering of cold glass in hot water is but one of many spectacular effects that can be induced by a rapid thermal quench in classical media. What happens when an isolated quantum phase of matter is subject to a sudden, violent deformation of its system Hamiltonian (a 'quantum quench')? This question is now under vigorous investigation in cold atomic gases [1-4]. Long-time, out-of-equilibrium physics already observed in gases confined to one [2], two [3], and three [4] spatial dimensions includes oscillatory collapse and revival phenomena [2,4] and topological defect formation [3,5].

In this Letter, we study interaction quenches in one-dimensional (1D) quantum many-body systems. Prior theory assuming spatially uniform dynamics has considered the post-quench distribution of quasiparticles [6], correlation functions [7,8], thermalization [6,9], quantum critical scaling [10], etc. On the other hand, the stability of homogeneous solutions with respect to the spontaneous eruption of spatial non-uniformity is by no means guaranteed, due to the coupling between modes with different momenta and the extensive quantity of energy injected into the system by the quench. Indeed, homogeneous external perturbations are known to generate large spatial modulations in a variety of physical contexts [5,11].

We show here that quantum quenches can produce strongly inhomogeneous states via a mechanism that is ubiquitous in 1D. We consider quenches across a quantum critical point, with initial (pre-) and final (post-quench) Hamiltonians possessing Luttinger liquid (LL) and Mott insulator ground states, respectively. Specifically, we quench into the insulating phase of the quantum sine Gordon model at the "Luther-Emery" (LE) point [8,10,12-14], where we are able to determine the dynamics analytically. The pre-quench ground state has an inhomogeneous density profile ρ₀(x), which acts as a "seed" generating fluctuations in the space-time dynamics of local observables [15,16]. We find that an arbitrarily small deviation of ρ₀(x) from a constant is dynamically amplified by the time evolution; see e.g. Figs. 1 and 2.
We argue that the mechanism responsible for the amplification is quasiparticle fractionalization, a generic attribute of gapless interacting particles in 1D [12,17]. We further illustrate the amplification effect for a localized (Gaussian) initial density "bump." This bump gives rise to a pair of non-dispersive, non-interacting density waves that exhibit a rigid shape, with amplitudes that grow in time as a power law. We have dubbed these traveling density waves 'supersolitons'; an example is depicted in Fig. 2. Specifically, for the Fourier transform ρ̃(t, k) of the density operator expectation value ρ(t, x), we find an exact asymptotic result, Eq. (1), valid in the long time limit, where A_σ is a non-universal, k-independent constant and t′ ≡ t/K̄, with K̄ = 1/4 locating the LE point (see below); the quench is performed at t′ = 0.

FIG. 1: Space-time evolution of the right-moving number density ρ_R after the Luttinger liquid to Mott insulator quench, demonstrating the instability of spatially uniform dynamics; fainter (bolder) traces depict earlier (later) times. An infinitesimally small initial density inhomogeneity grows without bound. The figure is obtained from Eq. (1) with σ = 0.8, A_σ = 4.7, and an initial density profile ρ₀(x) given by a sum of 150 cosines with random amplitudes, phases, and wavenumbers. Amplification occurs for any non-zero σ, corresponding to a non-zero fractionalization of the initial LL quasiparticles with respect to the insulator.

The exponent σ in Eq. (1) is determined by the relative fractionalization of the LL quasiparticle number with respect to the Mott insulator, Eq. (2), where K is the Luttinger parameter characterizing the initial Hamiltonian. Eq. (1) implies that the density splits into non-dispersing left- and right-moving components. Interestingly, the long time response is linear in ρ̃₀ and enhanced at shorter wavelengths due to the fractional derivative (|k|^{σ/2}) factor. For σ > 0, the fluctuations of ρ_{R,L} are continuously amplified by the quench. The effect is demonstrated in Fig. 1.

In the rest of this Letter, we will explain the setup and calculations leading to Eq. (1). Before the quench, our cold atom system is assumed to reside in the ground state |0⟩_{ρ₀} of the LL Hamiltonian, Eq. (3), where v is the sound velocity, K is the Luttinger parameter, and ρ₀(x)/q is an external chemical potential, with q ≡ K/vπ. The Hamiltonian in Eq. (3) governs the low-energy, long-wavelength physics of many gapless 1D cold atomic and condensed matter quantum systems [12,18]; in this paper, we have in mind a 1D optical lattice gas of spin-polarized, neutral Fermi atoms, but other interpretations are possible. The short-ranged interatomic interactions determine v and K; repulsive (attractive) interactions correspond to K < 1 (K > 1), while the free Fermi gas has K = 1 and v equal to the bare Fermi velocity. The boson fields φ̂ and θ̂ encode fluctuations of the long-wavelength fermion number density :ρ̂: and current :Ĵ: on top of the filled Fermi sea via √π :ρ̂: = dθ̂/dx and √π :Ĵ: = dφ̂/dx, where :...: denotes normal-ordering with respect to the homogeneous ground state |0⟩_{ρ₀=0}. These fields satisfy the commutation relations given in Eq. (4). The static chemical potential in Eq. (3) allows us to "write" an arbitrary density profile into |0⟩_{ρ₀} via the axial anomaly [12,19]. With our system initially prepared in the LL ground state |0⟩_{ρ₀}, we perform the quench at time t = 0.
The dynamics for t > 0 are generated by the translationally invariant, "final state" Hamiltonian H_f, which favors a gapped, Mott insulating ground state. Specifically, H_f is the Hamiltonian of the quantum sine Gordon model, Eq. (5), in which H_f is expressed in terms of the canonically rescaled boson variables Φ̂ ≡ √K_f φ̂ and Θ̂ ≡ θ̂/√K_f. The Mott gap-inducing interparticle interactions set the parameters M and K_f. In the context of a Fermi lattice gas at commensurate filling, the "Luttinger parameter" K_f characterizes pure forward scattering, while M gives the strength of backward-scattering Umklapp interactions; α is a cutoff-dependent length scale. The ground state of H_f is gapped for arbitrarily small M over the regime 0 < K_f < 1/2, in which the quantum sine Gordon model is integrable [12]. The solitons and antisolitons of the classical sine Gordon equation appear as massive Dirac fermions in the quantum version [20]. Solitons repel antisolitons for 1/4 < K_f < 1/2 and attract them for 0 < K_f < 1/4; in the latter case, additional bosonic bound states (breathers) appear in the spectrum. We choose to quench to the boundary between these two regimes, where K_f = K̄ ≡ 1/4. At this special "Luther-Emery" point, the interactions between the quantum solitons switch off, and H_f can be refermionized [14] in terms of a massive non-interacting soliton field Ψ, Eq. (6). In this equation, Ψ is a two-component Dirac fermion that is related to the boson fields in Eq. (5) via the bosonization identity, Ψ_{(1,2)} ∝ exp[i√π(Φ̂ ± Θ̂)]; σ̂_{2,3} are Pauli matrices in the standard basis. The mass gap M̄ in Eq. (6) is a non-universal, cutoff-dependent quantity.

It is instructive to rewrite H_i [Eq. (3)] in terms of Ψ, giving Eq. (7) with ṽ ≡ Kv/K̄. Comparing Eqs. (6) and (7), we see that the quench with K = K̄ is special. For this case only (the "non-interacting" quench), the quasiparticles of the initial and final Hamiltonians are in one-to-one correspondence. At any other value, K ≠ K̄ (an "interacting quench"), an elementary excitation of the initial state carries a fraction of the final-state quasiparticle number; that is, the "quasiparticle" excitations of the initial LL phase carry K/K̄ = 4K of the global U(1) Ψ fermion number charge [17]. When viewed in terms of Ψ, the transition between H_i and H_f permits a dual interpretation as a LL to band insulator quench. Correlation functions in the homogeneous quench [ρ₀(x) = 0] have been previously studied in Refs. [8,13].

To characterize the post-quench dynamics, we compute the expectation values of the particle number (ρ), kinetic (K) and potential (U) energy densities, Eq. (8) (the latter two observables are defined with respect to H_f), where f ↔∂ g ≡ f(∂g) − (∂f)g. In these equations, Ψ(t, x) denotes the Heisenberg picture fermion operator whose dynamics are generated by H_f in Eq. (6). U gives the expectation of the cosine operator in the sine Gordon model [Eq. (5)], and can be interpreted as a (squared) order parameter for the Mott phase. We obtain ρ, K, and U by solving the Heisenberg equations of motion for Ψ(t, x) and exploiting the bosonization map. Given an arbitrary initial ρ₀(x), we have derived exact results for ρ, K, and U at any time t ≥ 0, which will appear elsewhere [21]. The exact post-quench observables in Eq. (8) depend upon ρ₀(x), M̄, and the dynamic exponent σ defined via Eq. (2). The non-interacting quench with K = K̄ has σ = 0, while the interacting quench (K ≠ K̄) has σ > 0.
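The display for Eq. (2) did not survive in the text above, but the surrounding statements constrain it tightly: σ must vanish for the non-interacting quench K = K̄, and setting K̄ = 1 must reproduce the Luttinger liquid tunneling exponent invoked in the closing discussion. One form consistent with both constraints (an inference from context, not a verbatim reproduction of the paper's Eq. (2)) is:

```latex
\sigma \;=\; \frac{1}{2}\left(\frac{K}{\bar K} + \frac{\bar K}{K}\right) - 1
       \;=\; \frac{1}{2}\left(\sqrt{K/\bar K} - \sqrt{\bar K/K}\right)^{2}.
```

As a check: σ = 0 if and only if K = K̄; with K̄ = 1 this reduces to the familiar tunneling exponent (K + 1/K)/2 − 1; and at the LE point K̄ = 1/4, the value σ = 0.8 used in Fig. 1 would correspond to K ≈ 0.82 (or its dual root K ≈ 0.076).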
We confine ourselves to the range 0 ≤ σ < 1, for which the ρ₀(x)-dependent contributions to ρ, K, and U are given by ultraviolet (UV) convergent integrals [21]. At σ = 1, these observables acquire logarithmic UV divergences, suggesting the onset of a sensitive dependence on lattice-scale details.

We now describe our main results, which concern the ρ₀(x)-dependent contributions to ρ, K, and U; the behavior of K and U for the homogeneous quench ρ₀ = ρ(t, x) = 0 will be discussed elsewhere [21]. The exact leading asymptotic expression for ρ(t, x) in the limit t → ∞ was already given by Eq. (1), above. Let us specialize this result to a localized initial density profile. The interacting (σ > 0) and non-interacting (σ = 0) quenches yield qualitatively different behaviors. For the interacting quench, Eq. (1) implies that ρ(t, x) develops a non-dispersive response to the initial condition for any 0 < σ < 1. For example, a Gaussian density bump, √π ∆ρ₀(x) = Q exp(−x²/∆²), induces the asymptotic space-time evolution of Eq. (9) for t ≫ 1/M̄, where D_ν(x) denotes the parabolic cylinder function, t′ = t/K̄, and we have written out the explicit form of the prefactor A_σ, which is non-universal for σ > 0 and depends upon M̄α. The naive continuum calculation gives M̄α = 15/16. The divergence of the prefactor at σ = 1 indicates the onset of sensitivity to the UV sector of the theory.

Eq. (9) implies that an antecedent Gaussian density bump splits into right- and left-moving non-dispersive waves, for generic Q, ∆, and K ≠ K̄ (σ > 0). In the long time limit, the leading response is strictly linear in Q, with an amplitude that grows as t′^{σ/2}. Two Gaussian bumps initially separated by a distance d ≫ ∆ can be used to create left- and right-moving waves which pass through each other without changing their form [21]. We dub these rigid, non-interacting density waves 'supersolitons' to distinguish them from the elementary quantum solitons annihilated by the fermion field Ψ. We have confirmed the asymptotic result in Eq. (9) by comparing to numerical integration of the exact bosonization expression for ρ. The supersoliton is exhibited in Fig. 2.

Although the precise shape of the supersoliton implied by Eq. (9) deforms continuously with σ, it exhibits the same positive-negative "dipolar" peak profile for any 0 < σ < 1 (see Fig. 2). The negative density dip represents a local evacuation of the filled Fermi sea, which is infinitely deep in the Luttinger model [12]. For any σ > 0, the integral of the second term in Eq. (9) over real x vanishes, consistent with particle number conservation. In the limit of the non-interacting quench σ → 0, the right-hand side of Eq. (9) vanishes; in this case, the response obtains entirely from subleading terms that do not grow with t (and conserve the particle number), but which we have not written here. The same is true in Eq. (1), because A_σ → 1 when σ → 0. For comparison, Fig. 3 depicts the number density ρ(t, x) for the case σ = 0, obtained by numerical integration of the exact result. The main message of this figure is that the non-interacting post-quench dynamics are "passive" and dispersive, depending sensitively upon the details of the initial inhomogeneity and showing no amplification phenomena. In the interacting quench, the supersoliton is also observed in the kinetic energy density measured relative to its homogeneous post-quench background; see Fig. 4.
By contrast, we find that the potential energy density U(t, x) does not exhibit the supersoliton on top of the homogeneous background it acquires after the quench. The amplification in Eq. (1) does not therefore appear related to a Kibble-Zurek process [5] in the order parameter. The physical mechanism underlying the power-law inhomogeneity growth in Eqs. (1) and (9) can be partially elucidated via an analogy to the equilibrium tunneling density of states (TDOS) ν(ω) in a LL [17]. Upon tunneling into a one-channel quantum wire at T = 0 characterized by the Luttinger parameter K, the conductance at a bias ω = eV diminishes as $\nu(\omega) \sim |\omega|^{\sigma}$, where σ is defined as in Eq. (2), but with $\bar{K} = 1$. The physics behind this result is as follows: the independent LL "quasiparticles" carry a fraction K of the electron charge e [17]. The TDOS ν(ω) vanishes as ω → 0 because a "whole" electron must fractionalize into a large number of pieces upon penetrating into the LL, and this process is prohibited by phase-space restrictions in the low-bias limit. Mathematically, the TDOS result obtains from the Fourier transform of the electron Green's function in the LL. The $t^{\sigma/2}$ amplification in Eq. (1) is rendered by a similar mechanism in the quench: an initial LL correlation function is convolved with an oscillatory kernel [a product of Green's functions resulting from the solution to the Heisenberg equations of motion for Ψ(t, x)]. The final state Hamiltonian H_f introduces a scale $\bar{M}$, by which the analog of the frequency ω in the TDOS is the evolution interval $\bar{M}^2 t$. We might therefore naively expect $\rho(t, x) \sim t^{\sigma}$, with σ defined by Eq. (2). That the leading power is σ/2 in Eqs. (1) and (9) obtains from a cancelation of $t^{\sigma}$ terms. This suggests that the immiscibility of quantum phases composed of quasiparticles carrying relatively fractional charges may underlie both the equilibrium TDOS and the quench amplification. In conclusion, we have shown that a quantum quench can beget a strongly inhomogeneous state, due to the interplay between quasiparticle fractionalization and the presence of a mass scale in the final state Hamiltonian. Fractionalization is a robust feature of 1D gapless phases, so we expect the inhomogeneity proliferation to occur in many 1D quantum quenches. It would be interesting to consider quenches to final states away from the free-fermion LE point, where (super?) soliton-soliton interactions can play a role in the dynamics. We would like to thank Leon Balents for helpful discussions of LL physics. This work was supported by the National Science Foundation under Award No. DMR-0547769 and by the David and Lucile Packard Foundation.
2010-05-31T20:35:19.000Z
2010-05-31T00:00:00.000
{ "year": 2010, "sha1": "704eb52dc67c484c2927be221abfb844698c9fd4", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1006.0012", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "704eb52dc67c484c2927be221abfb844698c9fd4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
249439848
pes2o/s2orc
v3-fos-license
Active Surveillance of Low-Risk Papillary Microcarcinoma of the Thyroid in Indian Scenario: Are we Ready for it? A Narrative Review

Papillary microcarcinoma (PMC) is defined as papillary thyroid carcinoma (PTC) measuring ≤1 cm, irrespective of the presence or absence of high-risk features. PMCs without any high-risk features, referred to as low-risk PMCs, are generally indolent, and most of them remain latent without progression or with very slow progression. Active surveillance (AS; observation without immediate surgery) can identify the small minority of PMCs that progress, and rescue surgery for these PMCs is effective, so that AS has no worse influence on patients' prognosis than performing immediate surgery, which might result in more harm than good due to the associated morbidity. Hence, with proper patient selection, organization, and patient counseling, AS has the potential to be a long-term management strategy for patients with PMC. The recent update of the American Thyroid Association guidelines (2015) incorporated AS as an option within the management protocol of PTC, making it a considerable rather than an experimental treatment option. The cost of immediate surgery is higher than the medical costs of AS for 10 years in most scenarios. Developing countries like India may have certain limitations such as lack of understanding, financial constraints, and lack of adequate radiology services; hence, we propose additional recommendations along with the standard surveillance strategy.

Case 1
A 51-year-old woman was diagnosed with right breast cancer (infiltrating ductal carcinoma; estrogen receptor/progesterone receptor [ER, PR] and c-erbB2 positive) in September 2015, for which she underwent right breast conservative surgery followed by adjuvant chemotherapy and hormonal therapy along with locoregional radiotherapy. During the pretreatment workup, positron emission tomography-computed tomography done outside our institution reported a multinodular goiter with increased uptake in the left thyroid lobe. While on active treatment, in December 2015, on further evaluation, the patient had clinically occult thyroid nodules in both lobes on ultrasound (US). A 9.2 mm × 9.4 mm solid, hypoechoic, taller-than-wide nodule with irregular margins, showing minimal vascularity and incomplete peripheral calcification, was seen in the left lobe (Thyroid Imaging Reporting and Data System [TIRADS] 5; Thyroid Multimodal-Imaging Comprehensive Risk Stratification Scoring [TMC-RSS] score 8) [Figure 1]. Another benign-appearing, well-defined, wider-than-tall, 8 mm × 6 mm cyst with an isoechoic solid component and no internal echogenic foci was seen in the right thyroid lobe (TIRADS 2 and TMC-RSS score 1). No suspicious neck nodes were seen. US-guided fine-needle aspiration cytology (FNAC) reported the left thyroid nodule as suspicious for PTC (Bethesda category V) and the right thyroid nodule as nodular colloid goiter (Bethesda category II). The multidisciplinary tumor board concluded that breast cancer takes precedence over thyroid carcinoma and that thyroid surgery could be contemplated after the breast cancer treatment was over. After the breast cancer treatment, the option of AS was discussed with the patient and agreed upon, based on the stability of the nodule demonstrated by ultrasonography (USG) while on treatment. The patient was counseled, and an AS strategy was implemented. The patient was followed up with 6-monthly neck US, which showed stability of the size and features of the thyroid microcarcinoma.
No suspicious neck nodes or new suspicious thyroid nodules were detected. As the coronavirus disease 2019 pandemic hit India in 2020, the follow-up was delayed by 1 year. The last follow-up US, done on August 27, 2021, revealed stability of the left thyroid nodule, measuring 8 mm × 9 mm, and no suspicious cervical adenopathy. The patient has completed 6 years of follow-up and continues to stay on AS.

Case 2
A 58-year-old woman was diagnosed with multicentric carcinoma of the left breast (infiltrating ductal carcinoma; ER and PR positive, c-erbB2 negative) in December 2015. The patient was found to have a thyroid swelling during workup. On examination, a 2 cm × 2 cm diffuse swelling, moving with deglutition, was noted in the midline of the neck. Thyroid function tests were normal, with thyroid-stimulating hormone (TSH) of 4.7 mIU/mL. On neck US, multiple benign-appearing nodules were seen in both lobes of the thyroid and the isthmus, while an irregular, solid, markedly hypoechoic, taller-than-wide, suspicious 7.5 mm × 8.3 mm nodule was seen in the left lobe. It showed an irregular halo and internal microcalcifications and showed no vascularity. Extrathyroidal extension was absent. The nodule was hard on elastography (Asteria ES 4) [Figure 2]. This was labeled TIRADS 5 and TMC-RSS score 9. On FNAC, this subcentimeter suspicious nodule was found consistent with PTC (Bethesda category VI), while the other nodules were benign (Bethesda category II). An indeterminate node at left level IV was seen on USG, which was reactive on FNAC. The patient underwent modified radical mastectomy followed by adjuvant chemotherapy and hormonal therapy along with locoregional radiotherapy. Based on the decisions of the multidisciplinary tumor board, while the patient was on active treatment for breast cancer, the thyroid microcarcinoma was kept on surveillance with 3-monthly neck US, and the patient was put on oral thyroxine with the aim of lowering TSH to 0.5-2 mIU/mL. The US findings were stable until January 2018, when a 20% increase in the size of the nodule (now measuring 10.1 mm × 8.5 mm) and the appearance of two left level IV and three right level VI suspicious nodes were noted [Figure 2]. After about 3 years of AS, in view of progression, the patient underwent total thyroidectomy with bilateral central compartment and left level II-IV clearance in February 2018. The histopathology was reported as differentiated PTC, classical type, with reactive regional lymph nodes. The tumor was multifocal, and extrathyroidal extension was present.

Active surveillance over surgery
Takebe et al. conducted a screening study for thyroid cancer in women who visited for breast cancer screening, using US examination and US-guided fine-needle aspiration; it showed an incidence of 3.5% of thyroid carcinoma in otherwise healthy Japanese women aged ≥30 years, with 85% of these thyroid carcinomas measuring ≤15 mm. This detected incidence was more than 1000 times the prevalence of clinical thyroid carcinoma in Japanese women reported at that time. [5] Based on this study, it can be suggested that small thyroid carcinomas are frequently present in the healthy adult population, may go unnoticed without manifesting in the individual's lifetime, and are therefore largely harmless. Based on the above observations, Ito et al. hypothesized that most low-risk PMCs remain latent without progression or with very slow progression. [1,2]
Based on this hypothesis, an observational clinical trial for low-risk PMC was proposed in 1993 and subsequently implemented at Kuma Hospital in Kobe, Japan. [6] Ito et al. continued this practice in Kobe, [1,2,6] as did Sugitani et al. from 1995 at the Cancer Institute Hospital in Tokyo, Japan, [7] making these two hospitals the ones with the largest and longest experience in offering AS to patients with PMC. In the trial at Kuma Hospital, 1235 patients were put on AS instead of surgery. After 10 years of observation, only 8% and 3.8% of cases showed size enlargement and new nodal metastasis, respectively. [8,9] The study also showed that PMCs of young patients are more likely to progress than those of old patients. In another study by the same authors, 50 females with 51 pregnancies were on AS, of whom only 8% showed PMC progression and none developed new nodal metastasis, with rescue surgery post delivery being successful. In Japan, surveillance was more cost-efficient than immediate surgery. [10] There was no recurrence or death after rescue surgery for disease progression. The Cancer Institute Hospital (Tokyo, Japan) started a similar observation trial for low-risk PMC in 1995. In that trial, out of 230 patients (300 lesions), 7% and 1% showed size enlargement and new nodal metastasis, respectively. [7] Ito et al. hypothesized that very few PMCs will undergo disease progression and that AS could hence be offered to these patients, as it would identify the PMCs with progression. These patients can then be offered surgery without adversely affecting the prognosis. They believed that offering surgery to all would lead to harm due to the associated morbidity. [1] A Korean group also published a retrospective report of its experience with AS of 192 papillary thyroid microcarcinoma (PTMC) patients. Similar results, with relatively low rates of tumor growth, were noted, with 24 patients undergoing delayed thyroid surgery. No recurrence was noted following surgery. [11] In a study conducted at the Memorial Sloan Kettering Cancer Center in New York, a risk-stratified approach to decision-making in probable or proven PMC was proposed by Brito et al., [12] in which PMC was classified into three categories of candidates for AS, namely ideal, appropriate, and inappropriate, based on tumor/neck US, patient, and medical team characteristics. A review article published by Haser et al. critically analyzed the available data and concluded that, with proper patient selection, organization, and patient support, AS has the potential to be a long-term management strategy for select patients in this setting, and that the patients' quality of life, cultural differences, and clinical status should be taken into consideration. [13] Following the data suggesting the effectiveness of AS of low-risk PMC, the recent update of the American Thyroid Association (ATA) guidelines (2015) incorporated AS as an option within the management protocol of these tumors, making it a considerable rather than an experimental treatment option for appropriately selected patients with low-risk thyroid cancers, to prevent overtreatment of PMCs. [14,15] Attempts have been made to identify markers of aggressive disease in these cases.
Markers such as epidermal growth factor receptor expression, COX-2, V-Raf murine sarcoma viral oncogene homolog B (BRAF), and telomerase reverse transcriptase (TERT), and their association with aggressive features such as lymph node metastasis, multifocality, and extrathyroidal extension, have been studied on PTMC specimens after surgical excision. [16,17] However, the role of these markers in disease progression in patients on AS is unclear and still needs to be researched. In a small study of a cohort of 26 patients from Kuma Hospital who underwent surgery after AS for various reasons, the authors analyzed the presence of BRAF and TERT mutations. These patients were categorized as nonprogressive, showing an increase in size, or having lymph node metastasis. TERT mutation was absent in all cases, and BRAF mutation was present in 64%, 70%, and 80% of cases, respectively. [18] Another study from the same hospital, in patients who underwent surgery after AS, showed that Ki-67 expression of >5% and >10% was present in 50% and 22.2% of cases with disease enlargement, respectively. This expression was significantly higher than in cases without enlargement of disease. [19] Although the concept of risk-stratifying these patients based on molecular markers is attractive, there is a need to identify markers that can be detected on cytology, and to validate these markers, before the concept becomes standardized practice.

Workup and management
Patient selection: Patients with very low-risk tumors, i.e., with the absence of the high-risk features proposed by Ito et al.; [1] the high-risk features are as follows [Figure 3]:
• Tumors located adjacent to the trachea
• Tumors located on the dorsal surface of the thyroid lobe, possibly invading the recurrent laryngeal nerve (RLN)
• Fine-needle aspiration biopsy findings suggesting high-grade malignancy
• Presence of regional node metastasis or presence of distant metastasis (extremely rare).
In addition, the ATA 2015 guideline update includes the following criteria for candidates for AS apart from low-risk PMC:
• Patients with multiple comorbid conditions and high surgical risk, OR
• Patients with short life expectancy (significant cardiopulmonary disease, other malignancies, and advanced age), OR
• Patients with concurrent medical or surgical issues that need to be addressed prior to thyroid surgery.
Based on the 2015 guideline update, Brito et al. [12] proposed a scheme for stratification of low-risk PTMC patients into ideal, appropriate, and inappropriate candidates for AS based on the fulfillment of the criteria summarized in Figure 4.

Active surveillance strategy
Patients' eligibility for AS must be accurately evaluated, mainly using imaging studies such as US and, in selected patients, a CT scan, to determine the location of the lesion and whether nodal metastases are present. The diagnosis of the lesion is established with US-guided FNAC. This helps to rule out high-grade malignancy, in which case upfront surgery should be offered. Establishing the diagnosis is important so that the patient complies with regular follow-up. It also helps to prevent patients from consulting another hospital and undergoing unnecessary surgical treatment by nonexperts after being diagnosed with cancer later. [1] If found eligible for AS, the patient is offered both management options, i.e., AS and immediate surgery. Patients are counseled about the pros and cons of both approaches. If patients agree and meticulous follow-up can be ensured, then they may be kept under AS.
Patients are followed up with serial US scans every 6-12 months to look for the red flag signs advocating rescue surgery.

The red flag signs
Rescue surgery is recommended when one or more of the following observations are noted at any time during follow-up:
• Enlargement in size by ≥3 mm, or
• 20% increase in the dimensions or >50% increase in the volume, or
• Appearance of node metastasis, or
• Discovery of new foci.
It is noted that some characteristics, such as PMC with a rich blood supply, lack of strong calcification, and younger age, are associated with an increased risk of developing the red flag signs; hence, these features may be considered in deciding the interval of follow-up. [20] The AS strategy with the red flag signs is summarized in Figure 5.

Practical limitations in resource-constrained countries like India
The need for frequent follow-up imaging under an AS strategy may not be adequately met in developing countries like India, where lack of understanding and financial constraints among a substantial part of society, together with the lack of radiology services in many parts of the country, have been causes of poor patient compliance. The medical cost of observation significantly differs from that of immediate surgery, varying from country to country, though it is unlikely for surgery to be significantly more cost-effective than observation in any country. In resource-constrained countries like India, advanced imaging modalities may not be easily accessible and may be unaffordable for patients of low socio-economic strata for frequent follow-ups, decreasing patient compliance with AS. Furthermore, a CT scan may be necessary in addition to a US examination for an accurate evaluation in some cases, for example, to accurately evaluate the relationship of the tumor to the trachea. Although surgery for low-risk PMC is not a difficult undertaking for experienced surgeons, any surgical intervention has its list of plausible complications. Permanent RLN paralysis, permanent hypoparathyroidism, and lifelong dependence on L-thyroxine are some of the major risks associated with thyroid surgery, which can be avoided by adopting an AS strategy in low-risk PTCs. A simple yet detailed explanation of what PMC is, the course of the disease, and the treatment options available should be given to the patient. The rationale for selecting AS over surgery should be explained thoroughly to ensure optimum patient compliance.

Recommendations
Considering the abovementioned limitations in developing countries like India, we propose the following additional recommendations for patient selection for AS:

Other treatment options
The available literature supports radiofrequency ablation as an effective and safe option for low-risk PMC cases that are at high surgical risk or for patients who refuse to undergo surgical intervention. [21]

Conclusion
PTC is the most common histological type of differentiated thyroid cancer. Multiple qualitative and quantitative US RSS systems for thyroid nodules have been proposed over time. [22,23] With advanced diagnostic modalities, a significant proportion of these nodules fall within the definition of papillary microcarcinoma. The available literature supports AS as the optimal first line of management for patients with low-risk PMCs. Surgery for low-risk PMC is not difficult, but with surgery there remains a possibility of complications, including vocal cord paralysis and permanent hypoparathyroidism. The cost of immediate surgery is higher than the medical costs of AS for 10 years in most scenarios.
The lack of understanding, financial constraints, and lack of adequate radiology services can lead to poor patient compliance with frequent imaging follow-ups in resource-constrained countries like India. We propose recommendations that can help improve patient compliance in developing countries like ours.

Financial support and sponsorship
Nil.

Conflicts of interest
There are no conflicts of interest.
2022-06-08T15:17:27.684Z
2022-06-06T00:00:00.000
{ "year": 2022, "sha1": "03591f7031e0ce85d97e6e0eecac248df426184f", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/ijem.ijem_501_21", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "62b1fb7ddb8c326679941257960d5e1b0cd656cf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
249835332
pes2o/s2orc
v3-fos-license
Adjuvant Treatments of Adult Melanoma: A Systematic Review and Network Meta-Analysis

Multiple treatments of unresectable advanced or metastatic melanoma have been licensed in the adjuvant setting, causing tremendous interest in developing neoadjuvant strategies for melanoma. Eligible studies included those that compared overall survival/progression-free survival/grade 3 or 4 adverse events in patients with unresectable advanced or metastatic melanoma. Seven eligible randomized trials with nine publications were included in this study. Direct and network meta-analysis consistently indicated that nivolumab+ipilimumab, nivolumab, and trametinib could significantly improve overall survival and progression-free survival compared to ipilimumab in advanced melanoma patients. Compared to ipilimumab, nivolumab, dacarbazine, and ipilimumab+gp100 had a reduced risk of grade 3/4 adverse reactions. The nivolumab+ipilimumab combination had the highest risk of adverse events, followed by ipilimumab+dacarbazine and trametinib. Combination therapy was more beneficial for improving overall survival and progression-free survival than monotherapy in advanced melanoma treatment, albeit at the cost of increased toxicity. Regarding overall survival/progression-free survival, ipilimumab+gp100 ranked below ipilimumab+dacarbazine and nivolumab+ipilimumab, although it had a smaller rate of grade 3 or 4 AEs than other treatments (except nivolumab). Nivolumab is the optimum adjuvant treatment for unresectable advanced or metastatic melanoma, with a good risk-benefit profile. In order to choose the best therapy, clinicians must consider the efficacy, the adverse events, and the patient's physical status.

INTRODUCTION
Melanoma is a type of skin cancer that arises from melanocytes. Advanced melanoma, including metastatic and unresectable cases, has consistently been one of the most lethal cancers in the world. Patients with metastatic melanoma have a 5-year survival rate of less than 16% (1). Patients with stage IV melanoma have a 6.2-month median overall survival (OS) and a 25.5% 1-year survival rate (2). Over the last several decades, the worldwide incidence of malignant melanoma has risen steadily (3). According to the American Cancer Society's most recent epidemiological data, the number of new invasive melanoma cases detected each year has grown by 31% over the last decade (2012-2022). Besides, it is predicted that melanoma mortality will rise by 6.5% in 2022 (4). The etiology of melanoma is related to both the human body and the environment. Ultraviolet radiation, skin phototype, pigmented nevi, pesticide usage, prolonged sun exposure and sunburn, geographical location, heredity, genetic factors, immunosuppressive conditions, and non-melanoma skin cancer are all risk factors for the occurrence of melanoma (3). Taken together with recently developed research, the main pathogenesis of melanoma can be considered to involve excessive ultraviolet exposure, gene mutations (BRAF, NRAS, and NF1 mutations), and molecular signaling pathways (the MAPK and PI3K pathways) (3, 5-7). Melanoma is a cancer that is basically incurable. To date, several treatment options for melanoma have been developed, including surgery, chemotherapy, radiotherapy, hormone therapy, targeted therapy, etc. Cancer cells do not migrate to distant cells and tissues in the early stages of cancer; hence, surgery is commonly utilized at this stage.
In contrast, surgery is not recommended for advanced cancer due to its invasiveness (8-10). Adjuvant treatments such as chemotherapy, radiotherapy, hormone therapy, and targeted therapy are commonly used in the treatment of advanced melanoma. In addition to these conventional therapies, mesoporous bioactive glasses (MBGs, a special class of bioactive glasses) play a role in innovative cancer treatment methodologies. Due to their outstanding stability and high drug loading capacity, MBGs are an excellent candidate for the development of advanced drug delivery systems with sustained and/or controlled drug release profiles in cancer treatment (11). Moreover, nanotechnology has transformative potential in cancer diagnosis, screening, and treatment. The application of nanotechnology enables drugs and active biomolecules to identify and target tumor cells more accurately and effectively (12). Another review discusses research and development of tannic acid-incorporated medical applications, with cancer therapy being a particular focus (13). These new advanced materials have bright prospects in the treatment of melanoma. Nevertheless, immune checkpoint inhibitors and targeted therapies remain the most common adjuvant treatments for unresectable advanced or metastatic melanoma. The Food and Drug Administration (FDA) of the United States has authorized several novel adjuvant treatments for unresectable advanced or metastatic melanoma since 2011. For instance, kinase inhibitors (targeting mutant BRAF or MEK) inhibit driving pathways in around half of melanoma patients (14), and immune checkpoint inhibitors (targeting CTLA-4, PD-1, or PD-L1) can kill melanoma cells. These adjuvant treatments have significantly altered the therapeutic landscape. The most often used checkpoint inhibitors are monoclonal antibodies that block the CTLA-4 (ipilimumab) and PD-1 (pembrolizumab and nivolumab) pathways (15,16). The FDA has approved ipilimumab, which can increase the overall survival (OS) rate of advanced melanoma patients, in whom approximately 11% of patients have objective responses (17,18). Similarly, pembrolizumab and nivolumab were authorized by the FDA as the first anti-PD-1 (CD279) directed monoclonal antibodies in the treatment of advanced cancers (namely advanced or metastatic melanoma). Nivolumab has been proven to increase progression-free survival (PFS) and OS in unresectable melanoma patients (19). Intriguingly, research has shown that ipilimumab combined with nivolumab has better efficiency, with higher response rates and long-term OS rates, than the monotherapies. Unfortunately, the combination therapy carries a high toxicity rate, which limits its use in treating advanced melanoma (20,21). Pembrolizumab had a 4-year OS rate of 37%, while the 3-year OS rate of ipilimumab combined with nivolumab was 58% (22,23). It was observed that the median OS was 72.1 months for the combination of ipilimumab and nivolumab, 19.9 months for ipilimumab, and 36.9 months for nivolumab after 6.5 years of follow-up (24). Overall, ipilimumab combined with nivolumab has become a gold standard for treating metastatic melanoma. In addition, targeted therapies, including BRAF and MEK inhibitors (BRAFi/MEKi), are authorized for patients with BRAF V600-mutant melanoma. BRAFi/MEKi is beneficial for significantly prolonging OS, with dabrafenib and trametinib achieving 44% 3-year OS rates (25).
Despite the breakthrough advances in treating unresectable advanced or metastatic melanoma, the best course of therapy remains unclear. Interestingly, no previous similar article on patients with unresectable advanced or metastatic melanoma was found. The efficacy and side effects of these medications alone, as well as of their combined usage, have not been well assessed, and further research is necessary to confirm the efficacy and safety of adjuvant treatments for advanced melanoma. Additionally, it is difficult to obtain a comprehensive and satisfactory synthesis of the current scientific evidence on adjuvant treatments by traditional meta-analysis methods, owing to a paucity of head-to-head trials. Network meta-analysis (NMA) is a statistical approach that assesses numerous treatments in a single study by incorporating direct and indirect evidence from a network of randomized controlled trials (RCTs) (26,27). Therefore, we employed an NMA technique to compare the major adjuvant treatments in terms of OS, PFS, and adverse events (AEs) of grade 3 or 4, and to identify the optimum adjuvant treatment for advanced melanoma.

METHODS
The PRISMA guidelines (Preferred Reporting Items for Systematic Reviews and Meta-Analysis) are the basis for implementing this network meta-analysis. Details of the methodology and reporting follow the PRISMA guidelines. The PROSPERO registration number is CRD42021291959. Two investigators performed the selection of studies independently, and they included studies with no language restrictions to limit publication bias. Then, duplicate literature, irrelevant literature, and incomplete articles were excluded. Any disagreements were resolved by a third investigator. The inclusion criteria of this study were as follows: (1) adult patients with no prior systemic therapies and unresectable or metastatic, histologically confirmed stage III or IV wild-type BRAF melanoma; (2) the patient received adjuvant therapy (at least one treatment arm); (3) the study reported OS/PFS/AEs. The exclusion criteria were as follows: (1) the inclusion criteria for melanoma mentioned above were not met; (2) case reports, reviews, comments, letters, conference reports, duplicate reports, or unfinished studies; (3) no full text available, or inadequate data in the literature; (4) trials without a control arm.

Data Extraction and Quality Assessment
Two independent investigators read all eligible literature and extracted the available data, including the name of the study, name of the first author, publication year, trial phase, treatment arms, number and characteristics of enrolled patients, regimens of adjuvant treatments, the number of patients per treatment arm, follow-up period, oncological results, and grade 3/4 AE results. Afterward, we retrieved the hazard ratios (HRs) and 95% CIs for OS and PFS, and the grade 3/4 AE rates. Two researchers utilized the Cochrane risk-of-bias tool to assess the quality of individual studies. Based on seven quality assessment items, including random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcomes, selective reporting, and other bias, each of the studies was classified as having a low, high, or unclear risk of bias. Any discrepancies were settled by discussion with a third researcher.

Outcomes
For the primary outcomes, we extracted hazard ratios (HRs) for OS and PFS, as well as the 95% confidence intervals (CIs).
The secondary outcome was adverse events (AEs) of grade 3 or 4.

Data Analyses
HRs for OS and PFS, and 95% CIs, were utilized as summary statistics to assess the efficacy of adjuvant treatment. OS is defined as the period from the beginning of randomization to death from any cause. PFS is defined as the period from the beginning of randomization to tumor progression or death from any cause. We performed an NMA through random and fixed effect models for direct and indirect treatment comparison for each outcome (28). In order to assess PFS and OS, contrast-based analysis was used, with estimated differences in log HR and standard error computed from the reported HRs and CIs (29), as sketched below. The HR and 95% credible interval (CI) were used to represent relative treatment effects (28). High-grade AEs (grade 3-4) were reported with odds ratios (ORs) and 95% CIs based on the available raw data from the selected studies. The connectivity of the treatment networks in terms of OS, PFS, and AEs was depicted using network plots. I² was used to assess heterogeneity when multiple trials were available for a given comparison. All statistical analyses were carried out using R software (Version 4.0.4) and STATA (Version 15.0); P < 0.05 was deemed statistically significant. In addition, RevMan version 5.3 (Cochrane, UK) was used to produce the risk-of-bias summary and risk-of-bias diagram. The current NMA did not need ethical approval since it only collected and evaluated data from previously published studies.

Study Selection
We selected 3575 studies in total. Following an eligibility evaluation and a thorough analysis of the full texts, 11 studies including 8 different forms of treatment were examined (17-19, 21, 30-36). Four of the 11 trials (18,19,21,30) reported short-term outcomes of clinical trials (trial registrations: NCT01844505/NCT01721772/NCT00324155, respectively); therefore, we excluded them and chose the latest trials with longer follow-up time. The latest clinical trial (trial registration: NCT01844505) did not report AEs, and the other (NCT00324155) did not report PFS or AEs (related to the therapeutic drugs). Consequently, two short-term trials were retained (18,21). Overall, nine eligible randomized trials were included in this study (18, 21, 31-36). Figure 1 depicts the PRISMA flow diagram of study selection.

Characteristics of Included Trials
Trials were conducted with adult patients with no prior systemic therapies and unresectable or metastatic, histologically confirmed stage III or IV wild-type BRAF melanoma. The nine trials evaluated in the study were conducted between 2010 and 2022 and involved a total of 3077 patients. The largest sample size was 945, while the smallest was 72. The median age was 54-67 years, and the percentage of male patients across the trials ranged from 45% to 74.3%. All patients were randomly allocated to receive one of eight treatment approaches: ipilimumab+dacarbazine (IPI+DTIC), dacarbazine (DTIC), ipilimumab (IPI), trametinib (TRAM), ipilimumab+gp100 (IPI+gp100), gp100, nivolumab (NIVO), or nivolumab+ipilimumab (NIVO+IPI). Eight of the 9 studies were multicentre trials, and 4 trials reported the regions of the involved centers. Two short-term trials are included among the 9 trials, reporting short-term outcomes of clinical trials (trial registrations: NCT01844505/NCT00324155, respectively) (18,21). Seven studies were double-blind, and two studies were open-label. The characteristics of all trials are presented in Tables 1 and 2.
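The contrast-based analysis described under Data Analyses reconstructs each trial's effect on the log scale from the published HR and its 95% CI. A minimal sketch of that standard back-calculation follows; this is the textbook Wald-interval formula, not the authors' code, and the function and variable names are ours:

```python
import math

def log_hr_and_se(hr, ci_low, ci_high, z=1.96):
    """Back-calculate log(HR) and its standard error from a published HR
    with a 95% CI, assuming the usual Wald interval exp(logHR +/- z*SE)."""
    log_hr = math.log(hr)
    se = (math.log(ci_high) - math.log(ci_low)) / (2.0 * z)
    return log_hr, se

# Example using an HR quoted in the Results below:
# NIVO+IPI vs NIVO for PFS, HR 0.79 (95% CI 0.67-0.92).
print(log_hr_and_se(0.79, 0.67, 0.92))  # approx (-0.236, 0.081)
```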
Quality Assessment of the Included Studies
After the quality evaluation with the Cochrane Collaboration tool, we found that none of the included studies showed obvious publication bias in this NMA. As seven of the nine papers described adequate procedures for generating random sequences, their selection bias was rated as "low risk." Because the remaining two studies only mentioned "random," their selection bias was rated as "unclear risk." Since none of the studies reported the processes used for allocation concealment, their bias was classified as "unclear risk." Seven studies described participant and personnel blinding; hence, their bias was rated as "low risk." Two studies did not specify the blinding of participants and personnel, so their bias was assessed as "unclear risk." Six studies mentioned outcome assessment blinding; thus, their bias was rated as "low risk," and the risk of bias in the remaining studies was regarded as "unclear risk." For incomplete outcome data, all studies were rated as "low risk." Because all of the studies provided the results mentioned in the methods section, the reporting bias was rated as "low risk." The detailed assessment results are shown in Figures 2A and B.

NMA
In terms of OS, PFS, and AEs, the networks of eligible comparisons are graphically displayed in network plots (Figures 3A-C).

Overall Survival
The comparisons for OS are shown in Figure 4A. Based on the treatment ranking analysis, NIVO+IPI had the highest probability of providing the best OS (Figure 4B). When comparing each intervention, it was found that NIVO was correlated with poorer OS than NIVO+IPI (HR 1.22, 95% CI 1.03-1.44; Figure S1). Moreover, NIVO was more effective in promoting OS (HR 0.87, 95% CI 0.49-1.53; Figure S1) than TRAM. When compared to TRAM, NIVO+IPI considerably enhanced OS (HR 0.72, 95% CI 0.4-1.28; Figure S1). Interestingly, gp100 was not conducive to OS compared with any other therapy. Additionally, we found no statistically significant difference between direct and indirect comparisons (P > 0.05). The heterogeneity of this analysis was low (I² = 0%). All outcomes of the comparisons for OS are presented in Figure 4 and Figure S1.

Progression-Free Survival
Six studies evaluating eight different agents contributed to the PFS analysis. Three treatments clearly stood better than IPI (Figure 5A and Figure S2). NIVO+IPI was significantly more effective in promoting PFS (HR 0.79, 95% CI 0.67-0.92; Figure S2) than NIVO. Interestingly, gp100 was not conducive to PFS compared with any other therapy. Moreover, we also found no statistical difference between direct and indirect comparisons (P > 0.05). The heterogeneity of this analysis was low (I² = 0%). Figure 5 and Figure S2 provide the full comparative PFS results.

DISCUSSION
This meta-analysis explores the most effective and safest adjuvant treatments for unresectable advanced or metastatic melanoma, based on the drugs currently available on the market. We conducted a thorough search for qualifying RCTs, critically evaluated trial quality, meticulously synthesized trial data, and finally classified treatments based on the efficacy and safety demonstrated in randomized clinical trials. The network method attempts to overcome the lack of direct comparisons across the available options, particularly comparisons of checkpoint inhibitors with targeted therapies as well as with checkpoint inhibitor combinations. Therefore, we conducted an NMA to evaluate their efficacy and safety indirectly. This method yielded intriguing findings.
Our findings revealed that NIVO+IPI was superior to other therapies in terms of increasing OS and PFS in advanced melanoma patients. The combination of NIVO and IPI was considered to have complementary benefits in the treatment of metastatic melanoma, and our findings were consistent with previous research (37). Although single-agent NIVO ranked lower than NIVO+IPI, it might still offer more advantages in terms of OS and PFS than any other treatment. Additionally, TRAM (an investigational hot spot in targeted therapy) is a specific allosteric inhibitor of MEK1/2, and many trials with supportive preclinical evidence have confirmed its efficacy in non-V600 mutant melanomas (38-40). In addition to NIVO+IPI and NIVO, TRAM appeared to be more efficacious than other treatments in improving OS and PFS. Thus, it is reasonable to believe that NIVO+IPI, NIVO, and TRAM targeted therapies have remarkably improved OS and PFS in patients with unresectable advanced or metastatic melanoma. Among the authorized therapy options at the time of this study, NIVO+IPI and NIVO had the longest follow-up duration. Long-term survival studies have shown a considerable improvement in OS with NIVO alone or NIVO+IPI compared to IPI alone. The median OS of NIVO+IPI was around twice as long as that of NIVO alone, showing that survival with the combination was much better than with NIVO alone (36). Similarly, for melanoma patients with BRAF mutations, NIVO-containing regimens still outperformed IPI alone in terms of survival (21,31,36). Overall, the NIVO+IPI response characteristics detected in this investigation were similar to previously reported results (41,42). Despite checkpoint inhibitors' dominance, TRAM still improved PFS and OS among metastatic melanoma patients with BRAF V600E or V600K mutations. Therefore, TRAM may be an alternative option for BRAF wild-type or BRAF-mutated patients. Although adjuvant treatments have offered significant benefits for advanced melanoma, they still have some limitations. Since chemotherapy cannot differentiate between cancerous and healthy cell types, it will damage both (43). Similarly, high radiation doses can also damage surrounding healthy tissues. As hormone treatment alters hormone levels and function, it may cause unwanted side effects, including organ dysfunction (44). In addition, the combination of BRAFi and MEKi has shown clinical effectiveness and long-term disease control in metastatic melanoma. However, drug resistance may develop during therapy, and around 15% of patients are intolerant to treatment (45). Ipilimumab stimulates T-cell proliferation, which can result in immune-related side effects such as dermatitis, endocrinopathy, and hepatitis, as well as other side effects like pruritus, fatigue, and colitis (46-48). Our current study highlighted the grade 3 and 4 AEs associated with the different immune checkpoint inhibitors and targeted therapies. The NIVO group showed the lowest chance of developing grade 3/4 AEs, followed by DTIC. Most notably, NIVO+IPI had the greatest risk of grade 3 or 4 AEs. This indicates that NIVO+IPI was highly efficacious while also having a significant level of toxicity. TRAM also showed a relatively high probability of grade 3 or 4 AEs. Another study found that the high toxicity of checkpoint inhibitors made the development of combination therapy problematic (49).
In this study, the combination of IPI and DTIC had high toxicity, but the combination of IPI and gp100 showed a relatively lower rate of grade 3 or 4 AEs. IPI+DTIC and IPI+gp100 were more effective in improving OS/PFS than monotherapy (IPI, DTIC, or gp100). Therefore, we hypothesized that IPI+gp100, rather than IPI+DTIC, appears more suitable for long-term therapy in patients with advanced melanoma. This meta-analysis investigated the optimum adjuvant treatment for advanced melanoma and provides clinical suggestions on the different administration regimens. In our research, NIVO+IPI was the most effective in extending the survival of advanced melanoma patients, although it was associated with an excessive number of AEs. As a result, NIVO+IPI treatment should be considered carefully for advanced melanoma patients in poor physical condition. Similarly, the side effects of TRAM also limit its use. NIVO dramatically increased the survival of patients with advanced melanoma, ranking second only to NIVO+IPI. Simultaneously, across all treatment modalities, NIVO showed the lowest rate of grade 3 or 4 AEs, and its safety profile was manageable. Undoubtedly, NIVO is the most appropriate treatment in this study for patients with unresectable advanced or metastatic melanoma. This conclusion is consistent with previous research results (50). Compared with monotherapy, combination therapy was more beneficial for improving OS and PFS in advanced melanoma patients. IPI+gp100 ranked below IPI+DTIC and NIVO+IPI concerning OS/PFS, but IPI+gp100 had a lower rate of grade 3 or 4 AEs compared to other treatments (except NIVO). More importantly, additional innovations need to be explored in the future to alleviate the toxicity associated with combination therapy. However, this analysis has several limitations. Firstly, we were unable to obtain detailed individual patient data, which limited our ability to evaluate outcomes and patient characteristics. Secondly, we conducted our data analysis on a relatively small number of included RCTs. Thirdly, we did not analyze patients with BRAF mutation-positive tumors, because subgroup analysis could not be performed on the metadata. Fourthly, two of the trials included in this study were open-label, which might introduce unintentional bias. Furthermore, because only grade 3 or 4 AEs were included in this analysis, some outcomes might be inconsistent with reality. According to our results, NIVO had a high therapeutic effect and the lowest toxicity; nevertheless, large-scale prospective studies are necessary to provide credible evidence. Despite all of the shortcomings described above, we can properly compare the several adjuvant treatments and propose the best therapy for advanced melanoma. At the moment, single-agent NIVO is a suitable option for patients with unresectable advanced or metastatic melanoma. Longer follow-up of those adjuvant treatments, combined with further investigation of combination treatments, may improve outcomes in advanced melanoma.

CONCLUSIONS
In conclusion, NIVO is the best adjuvant therapy, with a promising profile, for patients with unresectable advanced or metastatic melanoma. This study offers evidence for the comparison among these adjuvant treatments. NIVO+IPI ranked first in efficacy but had the highest toxicity. TRAM ranked third in efficacy but had high toxicity.
Combination therapy is more successful in treating unresectable advanced or metastatic melanoma, although it is associated with a higher risk of adverse events. Conversely, IPI+gp100 had a lower rate of grade 3 or 4 AEs than other treatments (except NIVO). These results might have a significant impact on the individualized therapy of patients with advanced melanoma. However, there are several limitations to this overview. First, we may be missing some information because only SRs published in English are included. Furthermore, the sample size of this study was relatively small. Second, we could not obtain detailed data for each patient, which limited the evaluation of outcomes. Third, we were unable to conduct a subgroup analysis of patients with BRAF mutation-positive tumors. Finally, the subjective assessment of the authors may affect the outcome of the quality evaluation process. In the future, for the treatment of patients with advanced melanoma, clinicians must consider the efficacy and safety of monotherapy and combination therapy, as well as the patients' physical status. MBGs, nanotechnology, and tannic acid-incorporated medical applications have bright prospects in the treatment of advanced melanoma. More new adjuvant treatment schemes are necessary to provide stronger evidence for definitive conclusions.

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

AUTHOR CONTRIBUTIONS
MJ, YC, JS, and XZ contributed equally to the work. MJ and YC contributed to the initial design and drafting of the research. MJ, JS, and XZ participated in the drafting process and analyzed the data. FY evaluated the data. BZ and JZ participated in article revision. MX and MC supervised the study. All authors contributed to the article and approved the submitted version.
2022-06-19T15:23:20.972Z
2022-06-17T00:00:00.000
{ "year": 2022, "sha1": "d4e3d7ce08e2c677e17d9bdea751299bebd39f76", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2022.926242/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "1fc1b64f6be80eea4c07eccb5d91ede791aa5949", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251103313
pes2o/s2orc
v3-fos-license
Palaeoproteomics identifies beaver fur in Danish high-status Viking Age burials - direct evidence of fur trade

Fur is known from contemporary written sources to have been a key commodity in the Viking Age. Nevertheless, the fur trade has been notoriously difficult to study archaeologically, as fur rarely survives in the archaeological record. In Denmark, fur finds are rare, and fur in clothing has been limited to a few reports and not recorded systematically. We were therefore given access to fur from six Danish high-status graves dated to the Viking Age. The fur was analysed by aDNA and palaeoproteomics methods to identify the species of origin in order to explore the Viking Age fur trade. Endogenous aDNA was not recovered, but fur proteins (keratins) were analysed by MALDI-TOF-MS and LC-MS/MS. We show that Viking Age skin clothing was often a composite of several species, showing highly developed manufacturing and material knowledge. For example, fur was produced from wild animals while leather was made from domesticates. Several examples of beaver fur were identified, a species which is not native to Denmark and is therefore indicative of trade. We argue that beaver fur was a luxury commodity, limited to the elite and worn as an easily recognisable indicator of social status.

Introduction
One of the major characteristics of the Viking Age, the final era of the Scandinavian Late Iron Age spanning from around 800 to around 1050 CE, is extensive international trade and exchange of goods [1,2]. Contemporary written sources describe how fur from wild animals hunted in current-day Northern Scandinavia and Russia, such as fox, beaver, marten, ermine and sable, was amongst the pivotal commodities brought via the eastern trade routes to the growing Arab fur market in exchange for beads, silver, gold and silk [3,4]. An example of the economic value of imported fur is given by the Arab traveller, geographer, and historian al-Mas´ūdī from Baghdad, who in 943 wrote: "The black furs are worn by Arab and non-Arab kings ... They make hats, caftans and fur coats out of them. There is no king who does not possess a fur coat or a caftan lined with the black fox fur of the Burtās" [5,6]. While the significance of imported fur for the Arab market is well described, its use and value as a visual marker of status in Scandinavia is less well understood. Fur procurement has been notoriously difficult to study in an archaeological context, as fur's organic nature leads to its rapid degradation. In addition, its presence and appearance have not been systematically recorded. In the Viking Age marketplace of Birka in Sweden, fur has been preserved and recorded in connection with penannular brooches in burials, showing that fur was part of the clothing [7]. In Denmark, the textile catalogue by Bender Jørgensen [8] mentions a few examples of preserved fur in Viking Age burials, but no systematic examination has been performed and, until recently, only a few minor studies have been published [9,10]. A wide variety of burial customs were present in Viking Age Denmark, from cremations to inhumation graves, with social, regional and chronological differences related to the introduction of Christianity [11]. There is bias in the survival of fur; the few acknowledged examples of Viking Age fur clothing that have survived are from elaborate burials belonging to the elite.
For instance, fur was found in waggon bed burials (Hvilehøj, Fyrkat), graves with wooden constructions (Bjerringhøj, Skindbjerg, Søllested) and in a ship burial (Ladby) (Fig 1). Most of these burials were also covered by large mounds. The complex grave constructions, as well as contact with metal grave goods, have in some cases aided the preservation of organic materials such as fur from clothing, accessories or grave furnishing [12]. Where fur is preserved, one of the first serious obstacles in fur studies is the species identification of the hairs, which often requires specialized knowledge and/or access to advanced analyses and equipment [13].

Analytical strategies
In cases where Viking Age fur has survived, species have been almost exclusively identified through morphological characteristics of the hairs by transmitted light microscopy. In 1933, fibres from some of the Birka graves were identified as squirrel, marten, beaver and possibly bear [14]. Later, as technologies advanced, fibres from other graves were identified as sheep wool and beaver fur with Scanning Electron Microscopy (SEM) [7]. However, the reproducibility of microscopy relies on extended knowledge and experience to account for intra-species variation, as well as suitable reference collections, and is complicated further by diagenesis [15]. As an example of the difficulties of species identification by microscopy, fur from the Bjerringhøj grave in Denmark was first proposed to be beaver or marten [16], then was assigned as marmot after another independent analysis [17]. Similarly, fur from the Danish Hvilehøj grave was also at first described as beaver [18], but later identified as marmot [10,19]. Neither marmot nor beaver would have been local to Denmark in the Viking Age [20], and both thus indicate a trade in fur, but they would have come by different trade routes, from the south and east or the north, respectively; accurate identification is therefore still important to the interpretation of the activities and connections of the sites. To counteract difficulties associated with species IDs based solely on hair morphology, developments in ancient DNA (aDNA) technologies over the last decades have led to successful investigations of archaeological hair, wool and fur materials [21-24]. In addition, zooarchaeology by mass spectrometry (ZooMS) is a commonly employed technique for species ID from collagen-containing samples such as skin and bone. ZooMS utilises matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry to generate species-specific peptide mass fingerprints from trypsin-digested collagen. A similar approach can be used to provide species IDs from keratinaceous samples such as nail, hoof, horn, beak, feathers, skin, wool and fur. Analysis of keratins in archaeological textiles made from such materials has, therefore, already provided successful species identifications at many sites [25-31]. The potential of archaeological hair samples was demonstrated on archaeological pelt and textiles found in connection with copper-alloy artefacts [30] and the clothing of the Neolithic Tyrolean Iceman Ötzi [27,28]. Validation of PMF species ID markers can be performed by liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS). Through this technique, the amino acid sequence of the PMF peptides is determined, allowing robust confirmation of phylogenetically informative amino acid substitutions.
Whilst LC-MS/MS analysis is the 'gold standard' of proteomics, it is low-throughput in comparison to PMF methods, which are the mainstay of screening and large-scale studies. When comparing the varied bioarchaeology approaches currently available, perhaps the most significant advantage of aDNA over protein analyses (especially those that rely on PMF) is species resolution. Genomic DNA sequencing allows access to many sites of variability, particularly intronic regions, allowing unparalleled species resolution and the ability to reveal structure in ancient populations and map ancient individuals to geographical regions. The latter was recently demonstrated for Atlantic walruses [27]. In contrast, PMF targets only a few highly abundant proteins. For instance, keratin proteins are the product of a few genes; therefore, PMF of keratin only allows access to the variation within the exons of the keratin genes. However, DNA has a significant disadvantage: namely, its poor preservation in many archaeological environments [24]. In contrast, proteins are inherently more resistant to degradation and may be identifiable in samples that no longer contain amplifiable endogenous DNA [15]. To summarize, if endogenous aDNA is recoverable, it provides a superior approach for species identification and phylogenetic analyses. On the other hand, if aDNA is not recoverable or not present, protein analysis allows a robust, albeit lower resolution, approach that remains sufficient for most applications. This article utilizes several biomolecular methods (aDNA, PMF and LC-MS/MS) as well as microscopy (see S2 Table in S1 File) to analyze fur samples from the most extraordinary Viking Age graves from modern-day Denmark (see S1 Table in S1 File), to establish the material use of fur in the Viking Age, shed light on the Viking Age fur trade, and examine fur as a visual identifier of elite status.

Materials and methods
The fur items examined here derive from six richly equipped burials belonging to the very top of society in 10th century Denmark (Fig 1 and S1 Table in S1 File). Based on the archaeological material, the graves contained respectively three women, two men and one individual of unknown biological sex. Prior to destructive biomolecular analysis, we performed species screening by transmitted light microscopy of hair (see methodology in S1 File and Table 1). Then the selected items were sampled for fur. Fur samples were analysed for ancient DNA using a next generation sequencing shotgun approach (see methodology in S1 File). Subsequently, the sample material was subjected to the analysis of peptide mass fingerprints (PMF) of keratins using MALDI-TOF mass spectrometry, and six samples were also verified by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The samples were prepared based on a previous protocol [30; see further methodology in S1 File].

Results of microscopy
While the results of PMF and LC-MS/MS are in agreement in all six cases, microscopy agrees with PMF in four of 12 possible cases. In three cases, PMF failed to provide an identification and the correspondence cannot be evaluated, while in five cases, microscopy disagrees with PMF, or with both PMF and LC-MS/MS (Table 1).

Results of DNA
Of the fifteen samples included in this study, thirteen were selected for DNA sequencing analysis. All thirteen samples were positively amplified during PCR and all blanks failed to amplify, excluding contamination of reagents and cross-contamination.
These samples were then successfully sequenced, resulting in between 1,014,051 and 7,351,856 trimmed reads per sample (S3 Table in S1 File). The number of reads mapping to any reference was extremely low, with the highest fraction of unique reads mapping (see S3 Table in S1 File) ranging from 0.004% (44 reads) to 0.00005% (1 read). These mapping results testify to poor endogenous DNA preservation in the samples, and no reliable DNA analysis or interpretation can be made.
Results of PMF of keratin
Fourteen different items of fur from the six sites were submitted to PMF analysis. The fur sample from Fyrkat had two different extraction methods attempted, resulting in 15 analyses in total. Of the 15, 12 gave some indication of species identification (Table 1). Unfortunately, the samples from Skindbjerg, Søllested, and one from Ladby were all unable to provide meaningful results. Of the successful samples, five were identified as beaver based on the presence of the peak m/z 1669, which is unique to this species [31]. The identifications were all supported by the additional peaks m/z 2050 and 2179, currently only observed in beaver, as well as by the peaks m/z 2088 and 2163, which limit the identification to rodents, opossum or carnivores (S5 Table in S1 File and Fig 2). In addition, two samples (Hvilehøj C4273-97, fragment 60 and Hvilehøj C4280c) have their closest match to beaver, but may be other closely related species based on the peak m/z 1518, which is currently only recorded in muskrat, and a minor m/z 1669 in Hvilehøj C4280c. Currently, only a few species within Rodentia are listed as references, which is why other closely related species could also be possible candidates for these samples. Hvilehøj C4273-97, fragment 1 and Bjerringhøj C150, fragment 3 could be assigned to the families of either bovids or cervids based on the peak m/z 1834 (S5 Table in S1 File). One sample from Ladby (C30238, L4 504, A) is also tentatively assigned to bovids or cervids, but the peak m/z 1669 confuses this identification, as this peak is unique to beaver. The two extractions from Fyrkat were assigned to either Mustelidae or Ursidae based on the peaks m/z 2035(?) and 2164 in combination with m/z 2088. Several peaks had clear signs of deamidation, indicating that the markers are more likely to be genuine and not contamination [see S1 File and 32]. A few peaks that may be human contamination can be observed. This is unsurprising, as many of the samples have been handled for more than a century (see S5 Table in S1 File).
[Table 1 notes [34]: * European beaver (Castor fiber) sequences are not publicly available and not present in the reference database; the samples were therefore assigned to North American beaver (Castor canadensis). ‡ Another subsample of the same object was identified as cattle by ZooMS (zooarchaeology by mass spectrometry) in [35]; this sample can therefore also be assigned to cattle. https://doi.org/10.1371/journal.pone.0270040.t001]
Results of LC-MS/MS
The five samples (six analyses with the two Fyrkat extractions) examined by LC-MS/MS either confirmed or narrowed down the identifications by PMF (Table 1). Two samples (Hvilehøj C4273-97, fragment 19 and Bjerringhøj AdC143) were confirmed as very likely to be beaver. Hvilehøj C4273-97, fragment 60, with a closest match to beaver in PMF, was actually identified to the family Sciuridae, with the most likely assignment as Sciurus vulgaris (red squirrel). Hvilehøj C4273-97, fragment 1 was narrowed down from the PMF identification of bovid/cervid to originating from sheep.
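The marker-based reasoning above (a beaver-unique peak at m/z 1669, supporting peaks at m/z 2050 and 2179, broader peaks at m/z 2088 and 2163, plus m/z 1518 for muskrat and m/z 1834 for bovids/cervids) can be illustrated with a short sketch. This is not the authors' pipeline: the marker-to-taxon table simply restates the values quoted in the text, and the 0.5 Da matching tolerance is an assumed, illustrative value.
```python
# Illustrative PMF marker matching: compare observed MALDI-TOF peaks (m/z)
# against the taxon marker masses quoted in the text. The 0.5 Da tolerance
# is an assumed value, not one reported in the study.
MARKERS = {
    1669: "beaver (unique to species)",
    2050: "beaver (supporting)",
    2179: "beaver (supporting)",
    2088: "rodents / opossum / carnivores",
    2163: "rodents / opossum / carnivores",
    1518: "muskrat (currently the only record)",
    1834: "bovids / cervids",
}

def match_peaks(observed_mz, tolerance=0.5):
    """Return (marker, taxon hint) pairs for every observed peak that lies
    within `tolerance` Da of a known marker mass."""
    hits = []
    for mz in observed_mz:
        for marker, taxon in MARKERS.items():
            if abs(mz - marker) <= tolerance:
                hits.append((marker, taxon))
    return sorted(hits)

# Schematic peak list resembling one of the beaver-assigned samples:
for marker, taxon in match_peaks([1669.1, 2050.3, 2088.0, 2162.9, 2179.2]):
    print(marker, "->", taxon)
```
A real workflow would additionally weigh peak intensities and deamidation patterns, as the authors do when judging whether markers are genuine rather than contamination.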
The two extractions of the fur from Fyrkat suggest that it derives from the genus Mustela (assigned to Mustelidae or Ursidae by PMF). However, it should be noted that these identifications are based on current, and somewhat limited, databases (see Discussion).
Discussion
Amplification of aDNA was chosen for a large part of the samples because of its potentially high resolution, and because previous studies have demonstrated hair to be an excellent substrate for aDNA preservation [22,23,36,37], even in one of the contexts sampled for this study [24]. Despite this, none of the analysed samples yielded endogenous aDNA, demonstrating advanced degradation of endogenous DNA in the samples. Many factors involved in the preservation of aDNA are still poorly understood, but the level and character of DNA damage is probably largely determined by age and by the environment [38]. Likewise, treatment processes for turning the raw pelts into fur, including pickling, tanning and dyeing, may also explain the advanced aDNA degradation [24,39,40]. Keratin proved to be better preserved than aDNA, as seen in the much higher success rate in identifying these samples (12 of 15 extractions). Though the resolution of PMF is lower than that of aDNA, it often allows identification to at least genus level. The five samples re-analysed by LC-MS/MS were also all successful, in some cases to species level. This serves as an important example of the limitations that often occur when working with archaeological samples, and of how multiple methods can supplement each other. In addition, the morphological analyses for species identification performed by microscopy (S7 Table and S18 Fig in S1 File) were compared to the biomolecular identifications. Given the known limitations of these methodologies due to obfuscation caused by decay, it is unsurprising that disagreements in species ID were common (~42% of the samples). The two protein-based methods were in very close agreement, and therefore a greater weight was given to those identifications as being the most accurate and least susceptible to misinterpretation. A particular example was Bjerringhøj AdC143, which PMF and LC-MS/MS both identified as beaver while microscopy identified it as Canidae. Although improvements to publicly available protein databases are needed (see next paragraph), proteomics was demonstrated to be a superior and objective approach, compared to microscopy and aDNA analyses, for archaeological samples with highly degraded endogenous DNA, and our results may guide future studies of archaeological fibres.
Current limitations of protein databases for wild fauna
Unfortunately, current protein databases are heavily biased towards model organisms and are nowhere close to containing all proteins of all species. Therefore, publicly available databases are not complete with regard to species that could potentially be found in the Viking Age. This is the case both for PMF and for LC-MS/MS analyses. The current lack of data is critical, as it precludes complete confidence in the taxonomic identification of ancient samples: species not in the databases cannot be completely ruled out. This is especially important for the LC-MS/MS analyses, as this technique often has the high resolution required to reach species level, which may make those identifications look more solid than they are.
For example, from the Sciuridae keratin sequences available in public databases, Hvilehøj C4273-97, fragment 60 was originally identified as most likely an 'exotic' alpine marmot (Marmota marmota marmota). However, upon closer examination, there were several species not present in the database, including red squirrel (Sciurus vulgaris), arguably a more likely Sciuridae identification due to its presence in the local environment [20] and records of squirrel fur objects in the Viking Age [5,6]. Additionally, a few of the recovered peptide sequences differed from the available Marmota sequences. After the initial searches, it was fortunate that some keratin sequences from the grey squirrel (Sciurus carolinensis, native to North America) became available; these were added to the search database to represent the Sciurus genus. In addition, we were able to search the S. vulgaris translated nucleotide database on NCBI for keratin proteins discovered with the original Sciuridae search, allowing sequence fragments of potential red squirrel keratins to also be searched. While these fragments did not cover the entire protein sequences, they allowed for the recovery of some peptides not represented in the publicly available sequences. With the new database, we were able to assign new and previously less species-specific proteins to S. carolinensis, and also to identify sequence variants that could come from S. vulgaris instead (Dataset SI). Therefore, Hvilehøj C4273-97, fragment 60 has been assigned to Sciuridae, with the most likely species being the local red squirrel instead of the alpine marmot, which greatly changes the interpretation of the object with respect to trade of the raw materials. This highlights considerations of database limitations in the interpretation of LC-MS/MS results for ancient fur samples. Similar problems occurred with the beaver samples, as European beaver (Castor fiber) keratins are not represented in the public databases. Therefore, the North American species (C. canadensis) was used as a surrogate. These two animals belong to the same genus but may still have slightly different protein sequences, which makes identification less certain. Unfortunately, the nucleotide searching method could not be repeated, as no Eurasian beaver genome is currently available; therefore, beaver assignments were based on C. canadensis alone. The identifications are to the best of our ability at the current time, based on currently available knowledge. Therefore, we encourage further research and database generation in this area, both for protein sequence databases and for peptide mass fingerprints. More accurate identifications will allow better interpretations of past production and trade activities, and therefore a greater understanding of people in the past.
Identified species in Danish Viking Age fur
The 15 samples analysed from six different graves showed the presence of fur from wild animals: beaver, squirrel and a mustelid. Amongst these, the furs from Bjerringhøj (C143) and Hvilehøj [18], previously identified as marmot by microscopy, are now identified as beaver. A domesticated species, sheep, was also identified in the Hvilehøj grave, in addition to previously reported goatskin in the same grave and cattle skin in the Søllested grave [35]. To discuss the use and purpose of animal species for fur and skin, we integrate a few previously identified objects from the same contexts and from an additional, contemporary site, Yholm [35].
Based on the original purpose of these items, we suggest that they can be divided into three categories: clothing, accessories (including shoes and bags), and grave furnishing. Several of the tested items from Hvilehøj can be clearly identified as clothing, and fragments of several items from Bjerringhøj and Fyrkat are most likely also related to clothing. The shoes from Hvilehøj and Skindbjerg, as well as a purse from Yholm, fall within the category of accessories. A fragment from Hvilehøj (E) with a clear seam has also been classified as an accessory. Finally, three rolls of fibres from Hvilehøj, Bjerringhøj and Ladby, respectively, have been identified as deriving from the furnishing of the burials, possibly used as caulking materials for the wooden structures. A few pieces of rolled-up skin might also belong to unidentified fragments of the interior decoration of the burials. The classification of the objects in this study and the previously reported objects is summarised in Table 2. As seen from Table 2, the animal resources of fur and skin used for the three suggested categories follow a standardised pattern. Fur clothing is exclusively made from the fur of wild animals, while the accessories and furnishing materials are exclusively made from skin or fibres from domesticated animals. Except for the wool rolls possibly used as caulking and the skin fragments from Ladby, these are all non-hair items. The two materials, leather and fur, thus seem to have been used for different purposes. Fur was a limited, expensive and, in some cases, imported resource, which was accessible only to the few. Therefore, it makes sense that it has only been found in clothing, where its visual properties could be displayed. The material was also too precious to dehair and turn into leather, in which its beautiful appearance and exclusiveness could not be admired. In clothing, fur would have acted as an example of conspicuous consumption [41], i.e. as a recognisable luxury product and visible evidence of high status, which would differentiate the wearer socially and economically. For leather, domesticated animals were common and local and made up an easily accessible resource. The skins of cattle, sheep and goat were moreover well suited for leather objects based on their properties [42] and easier to replace when worn out. For the purpose of caulking, a material known from textile production, sheep's wool, would have been well suited for insulation and the sealing of surfaces. Fur therefore seems to have been used mainly in cases where it could be displayed, while leather was used for more everyday objects.
[Table 2 notes: Samples marked in blue are reported in [35]. Purple bars mark samples which derive from wild animals, while orange bars mark samples from domesticated animals. * European beaver (Castor fiber) sequences are not publicly available and therefore were not present in the reference database; the samples were assigned to North American beaver (Castor canadensis). ‡ Fur and skin were sampled from the same fragment; therefore, although the sample of fur was inconclusive, we can conclude that it is also cattle. https://doi.org/10.1371/journal.pone.0270040.t002]
Local or imported resources?
Domesticated animals such as sheep and cattle were abundant in Viking Age Denmark and would probably have come from the local area.
Several species within the genus Mustela, as well as red squirrel, were also present in Denmark during the Viking Age [20] and, therefore, it is possible that the fur identified in the Fyrkat grave and in Hvilehøj (C4273-97, fragment 60) came from locally hunted, but still attractive, fur animals. However, the identification of beaver tells another story. Based on the current Danish animal bone assemblages, the European beaver (Castor fiber) had already gone extinct in the Danish area in the Early Bronze Age [20]. Only one later find of beaver, dated to the Late Iron or Viking Age, comes from the settlement Mysselhøjgård in Lejre, Denmark [43]. The humerus found here does not immediately support imported beaver fur, as this bone would have been removed from the fur during the skinning process. However, even if small local populations of beavers were present in Denmark, they would not have been able to support a production of fur garments, which would require many animals for one single garment. Therefore, beaver skins are expected to have been imported in the Viking Age.
Viking Age fur trade
As previously noted, several contemporary written sources describe the Viking Age fur trade, including the trade routes, the traders, and the specific species of fur traded. According to these sources, fur came from a variety of species including mustelids (such as sable, marten and ermine), squirrel, fox, wolf, beaver, and hare, in addition to skins from domesticated species such as sheep, goat and cattle [3,5,6]. The Rus', or Scandinavian Vikings who settled in eastern Europe [44], are described as central stakeholders in the fur trade in Arab sources from the 10th century CE. In the 9th and 10th centuries, the Rus' brought their fur to the centre of the fur trade, Bulgar, located on the Volga, from where fur was distributed to the Arab world, Central Asia and Northern Africa [6]. Ibn Hawqal described this trade in 965: "The honey, wax, and furs exported from their country come from the territories of the Rūs and the Bulghār. This is also the case with the beaver pelts, exported throughout the world, for they are only found on the northern rivers of the territory of the Rūs, the Bulghār and Kiev" [5]. Where the Rus' in eastern Europe seem to have been central to the fur trade in the 9th to 10th centuries, the role of homeland Scandinavia is much more obscure. In Scandinavia, written sources suggest that fur derived from different sources. Some was local, some was paid in tribute by the Lapps of northern Scandinavia and sold on various Scandinavian markets [45], some came from Iceland [46] and some arrived via trade with the Rus' [6]. Icelandic sagas [46] and the History of the Archbishops of Hamburg-Bremen by Adam of Bremen [47] testify to Scandinavian trade with eastern Europe and Novgorod. Adam of Bremen, for instance, mentions that Danes, under favourable conditions, could sail to Novgorod in one month [47]. Birka, in Sweden, was one of the most important Scandinavian market places and is in particular associated with the eastward trade in the 9th to 10th centuries [6]. It is not unlikely that the fur of wild animals that were no longer locally available was traded through Birka to the rest of Scandinavia. In the future, the analysis of stable isotopes may shed light on the relative prevalence of these routes and determine the provenance of individual objects [48].
Fur, splendour and status
The importance of exclusive fur as part of elite splendour is well known from both Arab and other western literary sources.
Einhard's description from the middle of the 9th century CE of the clothing of Charlemagne, for example, states: "... he protected his shoulders and chest in winter by a close-fitting coat of otter or marten" [49]. In addition to the quote in the introduction by al-Mas'ūdī, this phenomenon is also described by Ibn Fadlān who, in 922, during a diplomatic journey to the Bulgars, witnessed a Viking ship burial and described how the noble Viking was buried in an elaborate outfit which included a cap covered with sable fur [5]. In the case of the beaver fur from the Hvilehøj and Bjerringhøj graves, there can be little doubt that these finds of clothing represent true traded luxury products aimed at displaying the magnificence of their owners. Most exported fur of wild species was probably so expensive that it was accessible only to the elite. Beaver fur visually stands out from local furs by its sheen (S2 Fig in S1 File). It is moreover a heavy, very warm and water-resistant fur. Local furs from marten and squirrel may have been more accessible, and the local production of marten fur is also demonstrated through finds of marten bones with cut marks from flaying in bone assemblages from Fredshøj in Lejre and Ribe in Denmark [43]. Marten and squirrel fur is very light but still warm, and based on its properties, not to mention the labour connected with catching the animals and preparing their much smaller skins, these furs must still have been exclusive and in high demand. Exclusive fur, such as the beaver fur found in Hvilehøj and Bjerringhøj, was easily recognisable compared to local wild and domestic resources, and the wealth and power it signalled would have been understood by all the people in the Viking Age, placing the wearer within an international elite [43] with common markers of power. How large a proportion of the population in the Viking Age could wear clothing with luxury fur is still debatable. The Danish archaeological source material is heavily biased towards elite contexts, where the burial styles have aided the preservation of organic materials, while graves that mirror more common people generally lack preserved fur. This bias forces us to pose the question of whether the lack of fur simply represents a taphonomic process or whether it was never there in the first place. Remains of textiles are found in graves belonging to individuals of lower socioeconomic status from the Scandinavian Viking Age [8], and thus one can argue that fur should also have been preserved if it was used in this context. Another possibility is that fur and skin materials in Danish Viking Age graves have not been systematically recorded and we currently lack knowledge of all fur objects recovered. In the well-recorded sites of Birka and Hedeby, however, analysis shows that only a few of the graves with high-status markers such as textiles, silk and feathers have fur [50]. Based on this, the current lack of fur in more common graves seems genuine and shows an interesting facet of the use and trade of animal furs in Viking Age Denmark.
Conclusion
The two proteomics methods applied in this paper were found to be in agreement and to be superior both to the analysis of DNA and to morphological analysis through microscopy. In this study we identify five examples of fur from one non-local species, beaver, in elite graves, providing evidence that fur trade and exotic furs played an important role in Viking Age Denmark.
Importantly, this study also highlights that biomolecular identifications are limited by the publicly available sequence databases, and efforts should be made to include more non-domesticated species to enable future studies and increase their accuracy. We show that fur from wild species in the chosen elite graves is connected to clothing, whereas skin and leather from domesticated animals were mainly used for accessories, including footwear, and for grave furnishing. The two materials thus seem to represent two different value chains. Based on the results presented here, fur was worn by the elite and by both sexes. It is also evident that fur and skin from several species were used for the same outfit, as seen in the Hvilehøj grave. This demonstrates extensive knowledge of the functionality and visual capacity of different skins, but also the wish to show off exclusive furs. Wearing exclusive and imported fur probably placed the man from Bjerringhøj and the woman from Hvilehøj within an international environment with shared social markers of wealth and power. Thus, it is possible for the first time to see a more differentiated use of fur in the Viking Age, especially fur from wild animals, which is connected with high status and luxury and corresponds to the values shown in the well-known Viking Age hunt for silver and other precious materials.
A scoping review of worldwide studies evaluating the effects of prehospital time on trauma outcomes
Background
Annually, over 1 billion people sustain traumatic injuries, resulting in over 900,000 deaths in Africa and 6 million deaths globally. Timely response, intervention, and transportation in the prehospital setting reduce morbidity and mortality of trauma victims. Our objective was to describe the existing literature evaluating trauma morbidity and mortality outcomes as a function of prehospital care time, to identify gaps in the literature, and to inform future investigation.
Main body
We performed a scoping review of published literature in MEDLINE. Results were limited to English-language publications from 2009 to 2020. Included articles reported trauma outcomes and prehospital time. We excluded case reports, reviews, systematic reviews, meta-analyses, comments, editorials, letters, and conference proceedings. In total, 808 articles were identified for title and abstract review. Of those, 96 articles met all inclusion criteria and were fully reviewed. Higher-quality studies used data derived from trauma registries. There was a paucity of literature from studies in low- and middle-income countries (LMIC), with only 3 (3%) of the articles explicitly including African populations. Mortality was an outcome measure in 93% of the articles, predominantly defined as "in-hospital mortality" as opposed to mortality within a specified time frame. Prehospital time was most commonly assessed as crude time from EMS dispatch to arrival at a tertiary trauma center. Few studies evaluated physiologic morbidity outcomes such as multi-organ failure.
Conclusion
The existing literature disproportionately represents high-income settings and most commonly assessed in-hospital mortality as a function of crude prehospital time. Future studies should focus on how specific prehospital intervals impact morbidity outcomes (e.g., organ failure) and mortality at earlier time points (e.g., 3 or 7 days) to better reflect the effect of early prehospital resuscitation and transport. Trauma registries may be a tool to facilitate such research and may promote higher-quality investigations in Africa and LMICs.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12245-020-00324-7.
Introduction
Trauma is a time-sensitive condition which accounts for approximately 12% of the global burden of disease [1]. Trauma has significant health and economic implications that disproportionally affect populations in low- and middle-income countries (LMICs). Globally, over one billion people sustain traumatic injuries, and over six million die annually [1]. The injury mortality rate in LMICs (9-12%) is double the proportion seen in high-income countries (5.5%), and up to 16% of all disabilities globally are attributed to injury [1][2][3][4][5][6]. The median cost of direct medical expenditures related to injury in a study of LMICs was 15% of GDP per capita annually [7]. Despite advances in trauma care and expansion of prevention programs, injury and associated mortality rates continue to rise [1,4,8]. The US Military, for example, has policies and training based on research in prolonged field care; however, trauma care research focused on the resource-limited setting is necessary to reduce civilian trauma mortality and disability in these regions [5,[9][10][11]. Timely prehospital care is key to improving outcomes in time-sensitive injuries [12,13].
The concept of timely prehospital trauma care and rapid transport has been a mainstay of prehospital teaching since Dr. R. Adams Cowley identified the preponderance of mortality within 1 h of traumatic injury [14]. There are relatively few published studies reporting patient outcomes directly due to prehospital care, and even fewer studies assessing the independent effects of prehospital time on patient mortality [15][16][17][18]. The relationship between prehospital time and patient outcomes remains unclear and conflicting [19,20]. A 2014 systematic review focused on prehospital time and outcomes, performed by Harmsen et al., included 20 level III evidence articles and concluded that the odds of mortality for the undifferentiated trauma patient decrease when response time or transfer time is shorter but, conversely, that the odds of survival increase with increased on-scene time and total prehospital time [18]. This conflict may be explained by the heterogeneous nature of prehospital care and the broad spectrum of disease pathophysiology in trauma. Additionally, most prehospital studies are conducted in high-income country (HIC) urban settings with limited generalizability to rural and LMIC environments. In rural and LMIC settings, where prehospital times can be very prolonged, understanding the impact, efficacy, timing, and effect size of specific prehospital interventions could lead to improved patient outcomes. Findings from additional research can help identify opportunities to improve systems and care, ultimately optimizing morbidity and mortality outcomes [13]. Many published trauma studies include aspects of prehospital care and time; however, this is typically not the primary focus of the study. We seek to appraise the global scope of contemporary trauma literature focused on prehospital time and trauma patient outcomes in order to identify trends and gaps, which can directly inform recommendations on areas in need of further research.
Methods
A scoping review of published literature was performed to critically appraise the relationship between trauma outcomes and prehospital time. A comprehensive literature search of the MEDLINE, Embase, and Web of Science Core Collection databases was performed in January 2020. A combination of index terms and keywords including traumatic injury, prehospital time, and time to treatment was used to identify publications from 2009 to 2020 (Additional file 1: table 1). Results were limited to the adult age group and exported to, and deduplicated in, EndNote X9 (Clarivate Analytics, Philadelphia, PA). The Covidence systematic review software (Veritas Health Innovation, Melbourne, Australia) was used for screening and full-text review. For the first review, article abstracts were independently screened by two trained reviewers (AB, FM), blinded to each other's reviews. Each reviewer read article titles and abstracts to determine whether they satisfied the inclusion criteria and to ensure they did not meet any exclusion criteria (see Table 1). Discrepant reviews of abstracts were adjudicated by a senior reviewer (NM). Articles included after abstract review were divided between two reviewers (AB, LM) for full-text review and critical synthesis. The following key elements were assessed during each full-text review: research questions, country, study design, injuries and populations studied, choice and definitions of independent and dependent variables, and level of evidence using GRADE criteria [21].
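The dual-review step just described amounts to a simple merge rule: unanimous decisions stand, and discrepancies escalate to the senior reviewer. A minimal sketch of that rule follows; the function name and data layout are hypothetical (screening was actually managed in Covidence), so this is illustrative only.
```python
# Merge two blinded reviewers' include/exclude decisions; discrepant
# abstracts are routed to the senior reviewer for adjudication.
def screen_abstracts(reviewer_a, reviewer_b):
    """reviewer_a/reviewer_b: dicts mapping article_id -> bool (include?).
    Both reviewers are assumed to have screened the same set of articles."""
    included, excluded, adjudicate = [], [], []
    for article_id, vote_a in reviewer_a.items():
        vote_b = reviewer_b[article_id]
        if vote_a and vote_b:
            included.append(article_id)
        elif not vote_a and not vote_b:
            excluded.append(article_id)
        else:
            adjudicate.append(article_id)  # disagreement: senior reviewer decides
    return included, excluded, adjudicate

inc, exc, adj = screen_abstracts(
    {"a1": True, "a2": False, "a3": True},
    {"a1": True, "a2": False, "a3": False},
)
print(inc, exc, adj)  # ['a1'] ['a2'] ['a3']
```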
If any exclusion criteria were identified during full-text review, the article was excluded with specific reason(s) provided (with approval from the senior reviewer). All included full-text articles were coded into a summary table. Articles were grouped based on common research categories, and one representative article from each category was summarized in a prose (paragraph) format. Articles not belonging to a specific category were individually summarized. From the table of coded articles, key trends were descriptively reported using frequencies and percentages. Investigators independently appraised, then collectively discussed, all findings to reach consensus regarding key findings, conclusions, and recommendations, which are presented qualitatively.
Results
We reviewed a total of 809 articles and included 96 after full-text review (Fig. 1).
Study characteristics
Of the 96 articles included, the overwhelming majority (90, 94%) were observational, with a few (6, 6%) being interventional in design (Table 2) [69,78,85,88,95,98]. The six interventional studies evaluated the effects of prehospital blood product transfusion (plasma and packed red blood cells) and tranexamic acid (TXA) administration on mortality, and used time (from injury to intervention) as a covariate. The largest proportion of articles originated from North America (42, 44%). Additional regions of origin included Europe (23, 24%), Asia (13, 14%), Australia (7, 7%), Africa (3, 3%), and South America (2, 2%). Six (6%) articles reported research conducted simultaneously in multiple geographic regions. We found 8 (8%) studies performed in LMICs, specifically Kenya, Malawi, Afghanistan, Iran, Iraq, and India. Of these, one study, conducted in Kenya, used a trauma registry as a data source [32]. The two studies in Afghanistan involve US military patients only, as opposed to local trauma patients [72,102]. The Iraqi studies, on the other hand, evaluated local prehospital trauma care and outcomes, aligning them more closely with other LMIC studies [86,87].
Trauma mechanism and bodily injuries
Most studies included any trauma mechanism, commonly defined as external force to the body, not including bites, stings, burns, or drownings. A specific mechanism of injury was stated in the inclusion criteria in relatively few studies, and the mechanism was often either "blunt" [49,66,98,109] or "penetrating" [58,97,101], though some did look at motor vehicle collisions as a specific mechanism [48,77]. Several studies focused on isolated torso injuries [25,79], but overall, the majority of articles (73, 76%) included any trauma mechanism to any body part. The notable exceptions were 17 (18%) studies of head-injured patients, which assessed the effect of prehospital interventions and/or prehospital time on neurologic outcomes [29,34,35,55,57,61,75,80,90,92,94,103].
Main outcomes
Mortality was a primary outcome in the majority (90, 94%) of articles. Other frequently used primary outcomes included neurologic decline among head-injured patients [29,54,55,90,92], duration of trauma resuscitation [74], and EMS response times [62]. In-hospital mortality was the most frequently used mortality outcome measure across studies and was most often defined as all-cause death during hospital admission. Several articles assessed mortality within a specified period of time, starting as early as prehospital or ED mortality and extending as far out as 3 months post-injury [35], although follow-up periods beyond 3 months were less commonly used.
In traumatic brain injury (TBI) and spinal cord injury studies, neurologically focused outcomes were often the primary outcome while mortality was a secondary outcome [35,54]. In neurologic trauma studies, survivors' outcomes were assessed at discharge or long after admission (often 3 to 6 months) using neurologic functional outcome measures (e.g., Glasgow Outcome Scale score).
Secondary outcomes
Secondary outcomes varied widely across articles, with the five most frequently used being hospital length of stay, intensive care unit (ICU) length of stay, days on mechanical ventilation, neurologic outcomes (most frequently Glasgow Outcome Scale), and EMS transport times. Only a few studies measured organ failure as a secondary outcome: four (4%) articles used multiple organ failure, assessed by the Sequential Organ Failure Assessment (SOFA) score, as a secondary outcome [27,63,64,85], and two (2%) studies specified acute renal failure as the organ failure outcome [69,81].
Prehospital time as a key exposure
Prehospital time, the primary variable of interest of this scoping review, was used as a key exposure (independent variable) in 48 (50%) articles. Prehospital time was most commonly defined as crude time from EMS notification to hospital arrival time. A common objective of these studies was to assess the effect of prehospital time (total time or, seldom, time intervals) on pre- or in-hospital mortality. Studies reported mixed (negative, neutral, and positive) associations between shorter prehospital times and mortality. Fatovich et al., in their study of urban and rural trauma patients in Western Australia, found that the risk of death was two times higher among the rural population when compared to urban trauma patients (the rural population experienced significantly longer times to definitive care, with median times of 11.6 h versus 59 min, respectively). They also identified no difference in mortality outcomes between rural and urban trauma patients once the rural trauma patient survived to admission at a tertiary trauma center [52]. Bagher et al. found no association between a total prehospital time of one hour and 30-day mortality (adjusted OR 1.1, 95% CI 0.71-1.69), but did find an association between scene times and longer hospital lengths of stay, with each additional minute of on-scene time associated with a 1.16 times longer length of hospital stay (95% CI 1.03-1.31) [36]. Finally, when total prehospital time was sub-divided into intervals (response time, scene time, and transport time), Brown et al. found an association (OR 1.21; 95% CI 1.02-1.44, p = 0.03) between prolonged scene time and mortality, regardless of transport modality (air or ground) [37]. Therefore, the reported association between prehospital time and outcomes was mixed in these studies with similar patient inclusion criteria.
Prehospital time as a covariate
Prehospital time was used as a covariate in 38 of the 96 (40%) full-text articles reviewed. For example, Pakkanen et al. evaluated the differences in outcomes in severe TBI patients based on the exposure of a paramedic-staffed response unit versus a physician-staffed model [73]. Other examples of the use of prehospital time as a covariate were among studies with prehospital interventions as a primary exposure (e.g., Chiang et al. [46]).
Prehospital time as an outcome
Prehospital time was used as an outcome measure in 10 (10%) studies [2,61,62,74,75,83,85,88,94,98].
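Odds ratios such as Brown et al.'s OR 1.21 (95% CI 1.02-1.44) are obtained by exponentiating logistic-regression coefficients. The sketch below reproduces that mechanic on synthetic data only; it does not use any of the cited studies' data, and the sample size, baseline mortality, and planted effect size are illustrative assumptions.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000
prolonged = rng.integers(0, 2, size=n)        # 1 = prolonged scene time
logit_p = -2.0 + np.log(1.21) * prolonged     # plant a true OR of 1.21
died = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

X = sm.add_constant(prolonged.astype(float))  # intercept + exposure column
res = sm.Logit(died, X).fit(disp=False)

or_est = np.exp(res.params[1])                # exp(beta) = odds ratio
ci_low, ci_high = np.exp(res.conf_int()[1])   # 95% CI on the OR scale
print(f"OR {or_est:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```
When the resulting CI spans 1.0, as with the adjusted OR of 1.1 (95% CI 0.71-1.69) quoted above, no association can be claimed at the conventional 5% level.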
Four of these studies evaluated the prehospital time resulting from one of the following independent factors: prehospital endotracheal intubation, chest tube insertion, needle thoracostomy, tourniquet application, cricothyroidotomy, and advanced cardiac life support [61,74,75,83]. For instance, Haltmeier et al. evaluated outcomes based on prehospital intubation in severe TBI patients (due to blunt trauma), comparing them to outcomes in patients who were not intubated in the prehospital setting. They found associations between prehospital intubation and longer scene times (median 9 vs. 8 min, p < 0.001), transport times (median 26 vs. 19 min, p < 0.001), days on a ventilator (mean 7.3 vs. 6.9, p = 0.006), ICU length of stay (median 6 vs. 5 days, p < 0.001) and hospital length of stay (median 10 vs. 9 days, p < 0.001), as well as higher in-hospital mortality (31.4 vs. 27.5%, p < 0.001) [61]. Meanwhile, three articles (corresponding to two research studies) investigated the effect on prehospital time of the initiation of prehospital plasma infusion and TXA administration [85,88,89]. Lastly, three studies looked at prehospital time, measured as dispatch time to definitive care, as an outcome resulting from different system-based variables, including trauma "deserts" in an urban area [2], a physician-staffed vs. paramedic-staffed regional rotary-wing aeromedical (helicopter) EMS system [62], and indirect vs. direct transfer of TBI patients [94]. Of note, the article by Hesselfeldt et al. was not primarily a direct versus indirect transfer investigation, but the need for secondary transfer to a tertiary trauma center from an outside facility was listed as an outcome.
Level of evidence
The vast majority (90, 94%) of full-text studies reviewed were observational and had corresponding "low" levels of evidence per the GRADE criteria. Few articles (19, 20%) reached a "moderate" or "high" level of evidence, based on large sample sizes, more rigorous study designs (e.g., interventional trials), and/or the ability to compare randomized interventional versus control arms. Full article summaries are available in Additional file 2. The articles with the largest numbers of enrolled subjects were derived from registry data from three main sources (the National Trauma Data Bank (NTDB) (e.g., [45]), the Department of Defense Trauma Registry (e.g., [73]), and Germany's TraumaRegister DGU (e.g., [63])) or from a regionally developed trauma registry (e.g., [32]).
Discussion
Trauma continues to be a leading and growing cause of morbidity and mortality across the world. EMS systems provide the earliest opportunity for the trauma care system to initiate resuscitation and rapidly deliver patients to definitive care facilities. Prehospital trauma care and priorities are time-driven, so it is necessary to understand the relationship between time and outcomes to help identify opportunities to optimize prehospital care and improve trauma outcomes. Yet, experts state there is an inadequate evidence base to support EMS practice [113]. Our scoping review specifically assessed the types of published studies regarding the effect of prehospital time on trauma outcomes. We identified 96 relevant articles and several key trends. First, we found a disproportionate minority (8%) of articles representing studies from LMICs, despite the fact that over 90% of the global burden of injury originates in LMICs.
Second, in-hospital mortality measured late in the clinical course, often at 30 days, was the most commonly used primary outcome measure, notwithstanding that these studies were prehospital-focused. For secondary outcomes, many studies measured length of stay (a process indicator) and only a minority of studies reported morbidity measures (e.g., organ failure). Third, the preponderance of studies was observational in design, many of which used trauma registries as the data source. Interventional prehospital trauma studies on this topic were rare. Last, studies primarily assessing the association of prehospital time and in-hospital mortality reported mixed (i.e., positive, negative, and neutral) associations, with conflicting conclusions [28,30,36,40,41,56,65,68,70,77,114]. Even though most of the trauma morbidity and mortality across the world arises in LMICs, and more than half of deaths in LMICs could be addressed with prehospital and emergency care, LMICs are significantly underrepresented in this cohort of studies [13,115]. This finding supports prior statements by the World Health Organization that prehospital emergency care in LMICs is a neglected area of research. The reasons are multifactorial, likely due to a combination of limited in-country research resources, a relative paucity of formal EMS systems, limited prehospital research expertise, and a hospital-centric focus on trauma outcomes in LMICs. Research from LMICs may help fill important scientific gaps. First, strong and consistent trends between time and outcomes may be found in lower-income settings, because higher trauma caseloads may yield larger sample sizes and fewer resuscitative interventions may limit confounding factors. Second, a large criticism of prehospital trauma studies in HICs, supported by findings in our scoping review, is that the majority are conducted in urban trauma systems with short (< 30 min) prehospital times, which is not reflective of the longer times to definitive care experienced in the rest of the world. Hence, prehospital trauma research from LMICs may help fill the evidence gap on outcomes from prolonged care. In-hospital mortality, often at 30 days, was the most commonly used trauma outcome. However, the median time from admission to hemorrhagic death is 2.0 to 2.6 h, according to several urban studies from higher-income countries [116]. Consequently, military and civilian experts have urged the use of earlier time points, especially in resuscitation studies of time-sensitive, emergent injuries such as hemorrhagic shock [116]. Prehospital resuscitation and ambulance transport occur relatively early in the overall spectrum of a patient's care and are more likely to be reflected in proximal time points, within 1 to 7 days [116]. Longer-term outcomes (e.g., 30-day mortality or hospital survival) are more likely to reflect the effects of ongoing hospital care. Twenty-eight- and 30-day mortality have historically been a standard in hospital-based trauma research, which is beneficial in allowing comparisons of outcomes among studies. We also noted that few studies evaluated physiologic-based secondary outcomes, specifically single or multi-organ failure (MOF). MOF is a significant cause of post-injury morbidity and mortality and is impacted by early resuscitation [117]. MOF often starts around day 3 after injury and peaks around day 7 [118]. Yet, we found a paucity of studies assessing MOF.
We postulate that conducting prehospital trauma studies assessing MOF outcomes is relatively complex, as it requires the meticulous merging of prehospital data with in-hospital laboratory and clinical information, which is cost- and resource-prohibitive for most researchers, especially those without substantive research grants or infrastructure. Instead of physiologic outcomes, we found that many studies assessed secondary outcomes using process indicators (e.g., length of stay and mechanical ventilation days). While helpful, these are health system process indicators, which limit the comparability and generalizability of findings. TBI-focused studies often reported functional outcome measures assessed farthest from the date of injury, which is expected, as neurologic outcomes usually evolve over weeks to months (e.g., Glasgow Outcome Scale at 6 months). The majority of studies we reviewed were observational (mostly retrospective) in design. Prospective and interventional studies, often more complex and expensive to conduct, comprise the minority of all trauma research studies, and our scoping review noted this same trend reported in prior literature [119]. We found four prehospital trauma clinical trials, corresponding to six articles, all related to the administration of TXA and blood products to improve outcomes. Clinical trials in trauma are particularly challenging, considering the unpredictable nature of trauma, which adds to the logistic and clinical difficulties [119]. The addition of the prehospital context further complicates the regulatory and practical aspects of trauma trials, partly explaining why prehospital trauma trials are especially rare. Hurdles encountered by prehospital trauma interventional studies include regulatory issues, informed consent, practitioner compliance, standardizing the delivery of interventions, and EMS protocols that may conflict with trial protocols [119,120]. We also found that a large proportion of observational studies were based upon trauma registry data. Most trauma registries are primarily developed to inform trauma quality improvement and to benchmark care, as opposed to research [121]. Interestingly, the registry-based studies we reviewed often had a slightly higher level of evidence than non-registry-based studies, likely resulting from larger sample sizes, the use of well-defined and standardized data, and the ability to control for relevant variables in statistical modeling [39]. An additional benefit of trauma registries is that they may represent larger and more diverse populations (e.g., state-based or regional registries), and conclusions drawn may better inform regional trauma system design, practices, and protocols. We do acknowledge that implementing trauma registries is challenging, especially in resource-constrained settings. There are limitations in registries even in higher-income settings, including variability in the quality of data, challenges with consistent data collection, and difficulties in the standardization of data, all of which would require mitigation if implemented in the LMIC setting [122]. A recent scoping review found 28 articles that reported challenges implementing trauma registries in LMICs, with the most significant barriers being ensuring data quality, lack of resources, inadequate prehospital care, and difficulty with administrative duties and hospital organization [121]. Last, there were conflicting results regarding the relationship between prehospital time and patient outcomes, especially mortality.
As a scoping review, we did not quantitatively explore this; however, we do offer several possible explanations for this observation. First, trauma is a heterogeneous group of diseases, yet most studies we reviewed included all-comer (undifferentiated) trauma patients and often grouped patients by penetrating vs. blunt injury. While important, mechanism of injury alone is inadequate to separate distinct physiologic subgroups of injuries (e.g., hemorrhagic shock vs. tension pneumothorax vs. TBI), which have competing physiologic derangements and resuscitative priorities. Accurate subgrouping by specific injuries may require hospital-based diagnoses, which adds complexity to prehospital study design and may deter investigators. Second, specific prehospital time intervals were often, but not always, reported, and only a minority of studies controlled for the effect of response, scene, or transport durations on outcomes, which may have caused conflicting findings across studies. Third, we found no studies that controlled for outcomes based on traumatic conditions, or body parts injured, upon which EMS practitioners can directly intervene to significantly influence patient outcomes. For example, limb amputations are directly intervenable by prehospital tourniquet application, whereas direct control of abdominal hemorrhage is not achievable by EMS practitioners. However, many studies we reviewed included both populations within the category of "hemorrhage," which may help explain why some studies showed no benefit of EMS interventions, irrespective of time, on hemorrhagic outcomes. Last, specific body parts or mechanisms of injury were not assessed by many studies, which may render the interpretation of results challenging considering the heterogeneity of trauma. We should note that most studies of undifferentiated patients performed subgroup analyses of blunt versus penetrating injuries, or head versus non-head injuries. While commendable, this approach is likely still inadequate considering the heterogeneity of injuries within subgroups. The notable exceptions were TBI studies and a few studies on torso injuries, which excluded cases with irrelevantly injured body parts. Based on these findings, we offer several recommendations. Foremost, additional studies are needed to further investigate the effect of prehospital time and resuscitative interventions at shorter end-points (e.g., 72 h or 1 week) post-injury. Such approaches may better elucidate the specific impact of time and interventions on patient outcomes attributable to prehospital trauma care. Additionally, studies should place a heavier focus on morbidity measures (e.g., organ failure scores), especially via prehospital interventional trials, which can be more appropriately designed to assess the causal effect of early prehospital interventions on hospital morbidity outcomes such as organ failure. Finally, there appears to be great need for, and potential benefit from, conducting more prehospital trauma studies in LMICs, especially in settings with high prevalence and prolonged durations of care, which may more equitably address the worldwide burden of trauma. We recognize there are substantive challenges with resources and expertise that need to be overcome to accomplish this.
Limitations
Searches in this scoping review were limited to more contemporary studies published between 2009 and 2019.
Expanding the search criteria to a wider time frame would have yielded a more comprehensive list of articles, though this would have challenged the relevance of the review due to the inclusion of aged studies. Another limitation is that we excluded articles solely focusing on special trauma sub-populations (i.e., incarcerated, pediatric, and pregnant patients) and certain injury patterns (i.e., electrocution and drownings). While it was methodologically beneficial to focus this work, our findings are less relevant to less common trauma populations and uncommon mechanisms of injury. We also limited our search to English-language studies, which likely limited our yield given the worldwide focus, but was methodologically important for the English-speaking authors' ability to evaluate the rigor and depth of reviews. Last, as a scoping review, we did not conduct a quantitative synthesis of study data, statistical techniques, or analytic limitations.
Conclusion
Our scoping review evaluated 96 articles published on the relationship of prehospital time and in-hospital outcomes. Nearly all were observational in design, in which prehospital time was often used as a key exposure with in-hospital mortality, at 30 days, as a primary outcome. Relatively few studies were available from LMICs, despite LMICs contributing the largest share of injury morbidity and mortality globally. Trauma registries provided a robust data set for evaluation in many higher-quality studies and would be a valuable tool in future international, prehospital trauma research in resource-limited settings. We recommend more interventional prehospital trials, which use short-term trauma outcomes to better reflect the effect of prehospital time and interventions, with substantively more investigations needed in LMICs. We encourage future studies to include more specific morbidity outcome measures.
Catastrophic Sequalae Following Percutaneous Intervention in Case of Sigmoidectomy for Sigmoid Volvulus
Injury to the inferior epigastric artery is infrequent and iatrogenic in most cases, and it can be life-threatening in some cases due to unnoticed excessive hemorrhage. We present a 23-year-old male who underwent sigmoidectomy and end-to-end colorectal anastomosis with covering loop ileostomy for sigmoid volvulus. He developed an intra-abdominal pus collection one week following surgery, for which ultrasound-guided aspiration was attempted. Post-aspiration, the patient developed abdominal distension and pain with a significant drop in hemoglobin. Imaging showed an active bleed from a branch of the inferior epigastric artery with a massive intra-abdominal hematoma. The hematoma was evacuated, and the bleeding artery was identified and ligated. Postoperatively, there was no further drop in hemoglobin, and the patient was stable and hence discharged.
Introduction
The inferior epigastric artery is a branch of the external iliac artery in most people; however, in a few patients, an indirect origin from the external iliac artery has been documented [1,2]. Percutaneous needle aspiration is usually done to drain intra-abdominal collections. Percutaneous aspirations are usually done under image guidance; however, a vascular structure may be injured in some patients. A few patients may develop severe abdominal pain, tachycardia, sweating, and profound shock. Computed tomography angiography (CTA) is usually helpful for the diagnosis. Such patients can be treated by percutaneous vascular intervention, such as angioembolization, or by an open surgical approach with ligation of the bleeding vessel.
Case Presentation
A 23-year-old male patient presented to the emergency surgery team with complaints of diffuse abdominal pain, distension, bilious vomiting, and obstipation for two days. Radiological investigations were suggestive of large bowel obstruction due to sigmoid volvulus. The patient underwent sigmoidectomy and end-to-end colorectal anastomosis with covering loop ileostomy. On the third postoperative day, he was started on oral intake, which he tolerated well. One week following surgery, he started developing multiple fever spikes. An abdominal ultrasound was done, which showed a localized collection of size 5.2 cm x 3.6 cm in the left iliac fossa, for which ultrasound-guided percutaneous aspiration was attempted, and 10 mL of thick pus was drained. The patient started developing abdominal distension, severe abdominal pain, tachycardia, hypotension, and sweating a few hours after the procedure. Blood investigations showed a significant drop in hemoglobin (from 10.6 g/dL to 7.1 g/dL). CTA showed an 8.3 cm x 6.2 cm x 16.3 cm intraperitoneal hematoma extending from the pelvis and reaching up to the level of the lower pole of the right kidney. The hematoma appeared to be displacing the small bowel loops posteriorly. There was active contrast extravasation from a branch of the inferior epigastric artery, suggestive of an active bleed (Figures 1A, 1B). Hence, the patient underwent an emergency laparotomy. Intraoperatively, there was a massive intraperitoneal hematoma measuring approximately 900 mL, which was evacuated. There was active spurting from a branch of the inferior epigastric artery from the anterior abdominal wall (Figure 2). The bleeding vessel was identified and ligated. The postoperative period was uneventful; hence, the patient was discharged.
Discussion
The mechanism of injury to the inferior epigastric artery is iatrogenic in most cases, with paracentesis being the most common mechanism of injury [3]. Although paracentesis is considered to be a safe procedure, it is known to be associated with a few complications [4]. Injury to the inferior epigastric artery or any branch arising from it can lead to significant bleeding, which can be life-threatening [5]. Another possible cause of inferior epigastric artery injury is percutaneous fine-needle biopsy of abdominal organs, a beneficial, perhaps indispensable, diagnostic procedure done routinely in most hospitals. Added guidance utilizing radio-imaging modalities has ensured accurate placement of the biopsy needle and has led to widespread acceptance of the procedure. The rate of complications for percutaneous biopsy or aspiration depends not only on the size and type of needle but also on the number of attempts and on blood coagulopathy [6][7][8]. There is an absolute difference in bleeding when one compares needles of various calibers in a patient without any coagulopathy [8]. Female patients are at double the risk of males because of more retropubic vascular variations [9]. Older patients are also at increased risk of vascular damage and bleeding. The presence of malignant disease has also been associated with higher risk [10]. Our patient underwent ultrasound-guided percutaneous aspiration of an intra-abdominal abscess following sigmoidectomy. Patients develop abdominal pain and vomiting. Depending upon the amount of bleeding, patients may develop tachycardia, hypotension, and sweating, and a few may develop profound hemorrhagic shock. Clinically significant hemorrhage is defined as bleeding causing a drop of at least 10 points in hematocrit together with hypotension or tachycardia, or requiring transfusion of blood or blood products [3]. Imaging studies like CTA are helpful for localizing the bleed, as active contrast extravasation will be seen. There was active extravasation of the contrast from a branch of the inferior epigastric artery in our patient. Treatment of a patient who presents with inferior epigastric artery injury depends on the stability of the patient. If the patient is not stable, immediate laparotomy is mandated. The active bleeder then has to be identified and ligated. Another, minimally invasive, approach is angioembolization of the bleeding vessel. Our patient underwent immediate surgical exploration to evacuate the intra-abdominal hematoma, with ligation of the bleeding vessel. This is probably the first case in which the inferior epigastric artery was injured during percutaneous aspiration of an intra-abdominal collection.
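The definition of clinically significant hemorrhage cited above [3] is, in effect, a small decision rule. The sketch below merely restates the quoted criteria as code for illustration; the function and the hematocrit conversion used in the example are assumptions, not a validated clinical tool.
```python
# Clinically significant hemorrhage per the definition quoted above [3]:
# a hematocrit drop of >= 10 points together with hypotension or
# tachycardia, or any need for blood / blood-product transfusion.
def clinically_significant_hemorrhage(hct_drop_points: float,
                                      hypotensive: bool,
                                      tachycardic: bool,
                                      transfused: bool) -> bool:
    return (hct_drop_points >= 10 and (hypotensive or tachycardic)) or transfused

# The reported case: hemoglobin fell from 10.6 to 7.1 g/dL with tachycardia
# and hypotension; the hematocrit drop here is illustrative, assuming the
# usual rough hematocrit ~= 3 x hemoglobin conversion.
print(clinically_significant_hemorrhage((10.6 - 7.1) * 3, True, True, False))  # True
```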
Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2021-07-03T05:20:56.968Z
2021-05-01T00:00:00.000
{ "year": 2021, "sha1": "6a6196c82281398ab00fdeb9c78383e4d354513f", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/61493-catastrophic-sequalae-following-percutaneous-intervention-in-case-of-sigmoidectomy-for-sigmoid-volvulus.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6a6196c82281398ab00fdeb9c78383e4d354513f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1775715
pes2o/s2orc
v3-fos-license
Music and Its Inductive Power: A Psychobiological and Evolutionary Approach to Musical Emotions

The aim of this contribution is to broaden the concept of musical meaning from an abstract and emotionally neutral cognitive representation to an emotion-integrating description that is related to the evolutionary approach to music. Starting from the dispositional machinery for dealing with music as a temporal and sounding phenomenon, musical emotions are considered as adaptive responses to be aroused in human beings as the product of neural structures that are specialized for their processing. A theoretical and empirical background is provided in order to bring together the findings of music and emotion studies and the evolutionary approach to musical meaning. The theoretical grounding elaborates on the transition from referential to affective semantics, the distinction between expression and induction of emotions, and the tension between discrete-digital and analog-continuous processing of the sounds. The empirical background provides evidence from several findings such as infant-directed speech, referential emotive vocalizations and separation calls in lower mammals, the distinction between the acoustic and vehicle mode of sound perception, and the bodily and physiological reactions to the sounds. It is argued, finally, that early affective processing reflects the way emotions make our bodies feel, which in turn reflects on the emotions expressed and decoded. As such, there is a dynamic tension between nature and nurture, which is reflected in the nature-nurture-nature cycle of musical sense-making.

INTRODUCTION
Music is a powerful tool for emotion induction and mood modulation, triggering ancient evolutionary systems in the human body. The study of the emotional domain, however, is complicated, especially with regard to music (Trainor and Schmidt, 2003; Juslin and Laukka, 2004; Scherer, 2004; Juslin and Västfjäll, 2008; Juslin and Sloboda, 2010; Coutinho and Cangelosi, 2011), due mainly to a lack of descriptive vocabulary and an encompassing theoretical framework. According to Sander, emotion can be defined as "an event-focused, two-step, fast process consisting of (1) relevance-based emotion elicitation mechanisms that (2) shape a multiple emotional response (i.e., action tendency, autonomic reaction, expression, and feeling)" (Sander, 2013, p. 23). More generally, there is some consensus that emotion should be viewed as a compound of action tendency, bodily responses, and emotional experience, with cognition being considered as part of the experience component (Scherer, 1993). Emotion, in this view, is a multicomponent entity consisting of subjective experience or feeling, neurophysiological response patterns in the central and autonomic nervous system, and motor expression in face, voice and gestures (see Johnstone and Scherer, 2000 for an overview). These components, often referred to as the emotional reaction triad, embrace the evaluation or appraisal of an antecedent event and the action tendencies generated by the emotion. As such, emotion can be considered as a phylogenetically evolved, adaptive mechanism that facilitates the attempt of an organism to cope with important events that affect its well-being (Scherer, 1993). In this view, changes in one of the components are integrated in order to mobilize all resources of an organism, and all the systems are coupled to maximize the chances to cope with a challenging environment.
Emotions, and music-induced emotions in particular, are thus difficult to study adequately, and this holds true also for the idiosyncrasies of individual sense-making in music listening. Four major areas, however, have significantly advanced the field: (i) the development of new research methods (continuous, real-time and direct recording of physiological correlates of emotions), (ii) advanced techniques and methods of neuroscience (including fMRI, PET, EEG, EMG and TMS), (iii) theoretical advances such as the distinction between felt and perceived emotions and the acknowledgment of various induction mechanisms, and (iv) the adoption of evolutionary accounts. The development of new research methods, in particular, has dramatically changed the field, with seminal contributions from neuropsychology, neurobiology, psychobiology and affective neuroscience. There is, however, still need for a conceptual and theoretical framework that brings all findings together in a coherent way. In order to address this issue, we organize our review of the field around three broad theoretical frameworks that are indispensable for the topic, namely an evolutionary, an embodied and a reflective one (see Figure 1). Within these frameworks, we focus on the levels and emphasis of the processes involved and connect the types of emotion conceptualizations involved to these frameworks. For instance, the levels of processes are typically divided into low-level and high-level processes, the emphasis of the emotion ranges from recognition to experience of emotion, and the types of emotions involved in these frameworks are usually tightly linked to the levels and emphases. Emotion recognition, e.g., is typically associated with utilitarian emotions, whereas higher-level and cognitively mediated reflective emotions, which are largely the product of emotion experience, might be better conceptualized as aesthetic emotions. The embodied framework breaks these dichotomies of high and low and of recognition and experience in postulating processes that are flexible, fluid and driven through modality-specific systems, emphasizing the interaction between the events offered by the environment, the sensory processes and the acquired competencies for reacting to them in an appropriate fashion. In what follows, we will start from an evolutionary approach to musical emotions, defining them to some extent as adaptations, and look thereafter toward the contributions from affective semantics and the embodied framework for explaining musical emotions from a neuroscientific perspective. We then move on to some psychobiological claims, to end with addressing the issue of the modulation of emotions by aesthetic experience. In doing so we will look at some conceptual challenges associated with emotions before moving on to emotional meanings in music, with the aim of connecting experience and meaning-making in the context of emotions to the functions of emotions within an evolutionary perspective. The latter, finally, will be challenged to some extent.

EVOLUTIONARY CLAIMS: EMOTIONS AS ADAPTATIONS
The neurosciences of music have received a lot of attention in recent research. The neuroaesthetics of music, however, remains somewhat undeveloped, as most of the experiments that have been conducted aimed at studying the neural effects on perceptual and cognitive skills rather than on aesthetic or affective judgments (Brattico and Pearce, 2013).
Psychology and neuroscience, up to now, have been preoccupied mostly with the cortico-cognitive systems of the human brain rather than with subcortical-affective ones. Affective consciousness, as a matter of fact, needs to be distinguished from more cognitive forms which generate propositional thoughts about the world. These evolutionarily younger cognitive functions add an enormous richness to human emotional life, but they neglect the fact that the "energetic" engines for affect are concentrated sub-neocortically. Without these ancestral emotional systems of our brains, music would probably become a less meaningful and desired experience (Panksepp and Bernatzky, 2002; Panksepp, 2005). In order to motivate these claims, there is need for bottom-up evolutionary, and mainly adaptationist, proposals in search of the origins of aesthetic experiences of music, starting from the identification of universal musical features that are observable in all cultures of the world (Brattico et al., 2009-2010). The exquisite sensitivity of our species to emotional sounds, e.g., may function as an example of the survival advantage conferred by operating within small groups and social situations, where reading another person's emotional state is of vital importance. This is akin to the privileged processing of human faces, another highly significant social signal that has been a candidate for evolutionary selection. Processing affective sounds, further, is assumed to be a crucial element for the affective-emotional appreciation of music, which, in this view, can arouse basic emotional circuits at low hierarchical levels of auditory input (Panksepp and Bernatzky, 2002). Music has been considered from an evolutionary perspective in several lines of research, ranging from theoretical discussions (see Brattico et al., 2009-2010; Cross, 2009-2010; Lehman et al., 2009-2010; Livingstone and Thompson, 2009-2010; Honing et al., 2015) to biological, cross-cultural, and cross-species evidence (Merchant et al., 2015). Although these various accounts have not fully unpacked the functional role of emotions in the origins of music, certain agreed positions have emerged. For instance, music is conceived as a universal phenomenon with adaptive power (Wallin et al., 2000; Huron, 2003; Justus and Hutsler, 2005; McDermott and Hauser, 2005; Dissanayake, 2008; Cross, 2009-2010).

FIGURE 1 | Conceptual framework of emotion processes involved in music listening.

Neuroscientists such as LeDoux (1996) and Damasio (1999) have argued that emotions did not evolve as conscious feelings but as adaptive bodily responses that are controlled by the brain. LeDoux (1989, 1996), moreover, has proposed two separate neural pathways that mediate between sensory stimuli and affective responses: a low road and a high road. The "low road" is the subcortical pathway that transmits emotional stimuli directly to the amygdala, a brain structure that regulates behavioral, autonomic and endocrine responses, by way of connections to the brain stem and motor centers. It bypasses higher cortical areas which may be involved in cognition and consciousness and triggers emotional responses (particularly fear responses) without cognitive mediation. As such, it involves reactive activity that is pre-attentive, very fast and automatic, with the "startle response" as the most typical example (Witvliet and Vrana, 1996; Błaszczyk, 2003).
Such "primitive" processing has considerable adaptive value for an organism in providing levels of elementary forms of decision making which rely on sets of neural circuits which do the deciding (Damasio, 1994;Lavender and Hommel, 2007). It embraces mainly physiological constants, such as the induction or modification of arousal as well as bodily reactions with a whole range of autonomic reactions. The "high road, " on the contrary, passes through the amygdala to the higher cortical areas. It allows for much more fine-grained processing of stimuli but operates more slowly. Primitive processing is to be found also in the processing of emotions, which, at their most elementary level, may behave as reflexes in their operation. Occurring with rapid onset, through automatic appraisal and with involuntary changes in physiological and behavioral responses (Peretz, 2001), this level is analogous to the functioning of innate affect programs (Griffiths, 1997), which can be assigned to an inherited subcortical structure that can instruct and control a variety of muscles and glands to respond with unique patterns of activity that are characteristic of a given affect (Tomkins, 1963). Defined in this way, affect programs related to music should be connected to rapid, automatic responses caused by sudden loud sounds (brain stem reflex in the BRECVEMA model, see below). However, a broader interpretation of affect programs as being embodied and embedded in body states and their simulations would put the majority of the emotions into this elementary level (Niedenthal, 2007). In our view, such a broadened embodied view may be a more fruitful way of mapping out the links between the stimuli and emotions than the rather narrow definition of affect programs. Musically induced emotions, considered at their lowest level, can be conceived partly as reactive behavior that points into the direction of automatic processing, involving a lot of biological regulation that engages evolutionary older and less developed structures of the brain. They may have originated as adaptive responses to acoustic input from threatening and nonthreatening sounds (Balkwill and Thompson, 1999) which can be considered as quasi-universal reactions to auditory stimuli in general and by extension also to sounding music. Dealing with music, in this view, is to be subsumed under the broader category of "coping with the sounds" (Reybrouck, 2001(Reybrouck, , 2005. It means also that the notion of musicality, seen exclusively as an evolved trait that is specifically shaped by natural selection, has been questioned to some extent, in the sense that the role of learning and culture have been proposed as possible alternatives (Justus and Hutsler, 2005). From an evolutionary perspective, music has often been viewed as a by-product of natural selection in other cognitive domains, such as, e.g., language, auditory scene analysis, habitat selection, emotion, and motor control (Pinker, 1997; see also Hauser and McDermott, 2003). Music, then, should be merely exaptive, which means that is only an evolutionary by-product of the emergence of other capacities that have direct adaptive value. As such, it should have no role in the survival as a species but should have been derived from an optimal instinctive sensitivity for certain sound patterns, which may have arisen because it proved adaptive for survival (Barrow, 1995). 
Music, in this view, would have exploited parasitically a capacity that was originally functional in primitive human communication [still evident in speech; note the similarity of affective cues in speech and music (Juslin and Laukka, 2003)] but that fell into disuse with the emergence of finer shades of differentiation in sound patterns that came with the emergence of music (Sperber, 1996). As such, processes other than direct adaptation, such as cultural transmission and exaptation, seem suited to complement the study of the biological and evolutionary bases of dealing with music (Tooby and Cosmides, 1992; Justus and Hutsler, 2005, see also below). A purely adaptationist point of view has thus been challenged with regard to music. In a rather narrow description, the notion of adaptation revolves around the concepts of innate constraint and domain specificity, calling forth also the modularity approach to cognition (Fodor, 1983, 1985), which states that some aspects of cognition are performed by mental modules or mechanisms that are specific to the processing of only one kind of information. They are largely innate, fast and unaffected by the content of other representations, and are implemented by specific localizable brain regions. Taken together, such qualities can be referred to as "domain specificity," "innate constraints," "information encapsulation" and "brain localization" (see Justus and Hutsler, 2005). Several attempts have been made to apply the modular approach to the domain of music. It has been shown, e.g., that the representation of pitch in terms of a tonal system can be considered as a module with specialized regions of the cortex (Peretz and Coltheart, 2003). Much of music processing also occurs implicitly and automatically, suggesting some kind of information encapsulation. It can be questioned, however, whether the relevant cortical areas are really domain-specific for music. The concept of modularity, moreover, has been criticized, as different facets of modularity are dissociable, with the introduction of the concept of distributivity as a possible alternative (Dick et al., 2001). One way in which this dissociation works is the discovery of emergent modules, in the sense that predictable regions of the cortex may become informationally encapsulated and/or domain-specific without the outcome having been planned by the genome (Karmiloff-Smith, 1992). The debate concerning the innateness of music processing, however, is not conclusive. A lot of research still has to be done to address the ways in which a domain is innately constrained (Justus and Hutsler, 2005). Most of the efforts, up to now, have concentrated on perception and cognition, with the importance of octave equivalence and other simple pitch ratios, the categorization of discrete tone categories within the octave, the role of melodic contour, tonal hierarchies and principles of grouping and meter as possible candidate constraints. Music, however, is not merely a cognitive domain but calls forth experiential claims as well, with many connections with the psychobiology and neurophysiology of affection and emotions. Affective neuroscience has already extended current knowledge of the emotional brain to some extent (Davidson and Sutton, 1995; Panksepp, 1998; Sander, 2013), but a lot of work still has to be done.
Dealing with musically induced emotions, further, can be approached from different scales of description: the larger evolutionary scale (phylogeny) and the scale of individual human development (ontogeny). An abundance of empirical evidence has been gathered from developmental research (newborn studies and infant-directed speech) (Trehub, 2003; Falk, 2009) and comparative research between humans and non-human animals (referential emotive vocalizations and separation calls). It has been shown, e.g., that evolution has given emotional sound special time-forms that arise from frequency and amplitude modulation of relatively simple acoustic patterns (Panksepp, 2009-2010). As such, there are means of sound communication in general which are partly shared among living primates and other mammals (Hauser, 1999) and which are the result of brain evolution, with the appearance of separate layers that have overgrown the older functions without actually replacing them (Striedter, 2005, 2006). By using sound carriers, humans seem to be able to transmit information such as spatial location, structure of the body, sexual attractiveness, emotional states, cohesion of the group, etc. Some of it is present in all sound messages, but other kinds of information seem to be restricted to specific ways of sound expression (Karpf, 2006). The communicative accuracy of these sets of information, however, has rarely if ever been studied, except for emotion states. This is the case even more for singing, a primitive way of music realization that probably preceded any kind of instrumental music making (Geissmann, 2000; Mithen, 2006) and which contains different degrees of motor, emotional and cognitive elements that are universal for us as a species. Generalizing a little, there are special forms of human sound expression that allow communication with other species and reactions to sound stimuli that are similar to those of animals. On the other hand, there seems to be a set of specific sound features belonging exclusively to humans (musical features such as, e.g., tonality and isometry) which are strongly connected with emotion expression but which are absent in other kinds of human sound communication (see Gorzelańczyk and Podlipniak, 2011). This is obvious in speech and music and even in some animal vocalizations. The acoustic measures of speech, e.g., can be subdivided into four categories: time-related measures (the temporal sequence of different types of sound and silence as carriers of affective information), intensity-related measures (the amount of energy in the speech signal), measures related to fundamental frequency (F0 base level and F0 range; relative power of the fundamental frequency and the harmonics F1, F2, etc.), and more complicated time-frequency-energy measures (specific patterns of resonant frequencies such as formants). Three of them are linked to the perceptual dimensions of speech rate, loudness and pitch; the fourth is related to the perceived timbre and voice quality (Johnstone and Scherer, 2000). Taken together, these measures have made it possible to measure the encoding of vocal affect, at least for some commonly studied emotions such as stress, anger, fear, sadness, joy, disgust, and boredom, with most consistency in the findings for arousal. The search for emotion-specific acoustic patterns with similar arousal, however, is still a subject of ongoing research (Banse and Scherer, 1996).
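As an illustration, the first three of these measure categories can be approximated with standard audio descriptors. The following is a minimal sketch only, assuming the Python library librosa is available; the file name "speech.wav" and the chosen summary statistics are hypothetical placeholders rather than a protocol taken from the cited studies.

```python
# Sketch: extracting time-related (tempo), intensity-related (RMS energy)
# and F0-related measures from a hypothetical recording "speech.wav".
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=None)

# Time-related measure: a global tempo estimate from onset patterns
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
tempo = float(np.atleast_1d(tempo)[0])

# Intensity-related measure: frame-wise root-mean-square energy
rms = librosa.feature.rms(y=y)[0]

# F0-related measures: base level and range via the pYIN tracker
f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                        fmax=librosa.note_to_hz("C7"), sr=sr)
f0 = f0[~np.isnan(f0)]  # keep voiced frames only

print(f"tempo ~{tempo:.1f} BPM, mean RMS {rms.mean():.4f}")
print(f"F0 base {np.percentile(f0, 10):.1f} Hz, range {f0.max() - f0.min():.1f} Hz")
```

The fourth category, formant-based time-frequency-energy measures, would require a dedicated formant tracker and is omitted from this sketch.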
AFFECTIVE SEMANTICS AND THE EMBODIED FRAMEWORK
Music can be considered as a sounding and temporal phenomenon, with the experience of time as a critical factor for musical sense-making. Such an experiential approach depends on perceptual bonding and continuous processing of the sound (Reybrouck, 2014, 2015). It can be questioned, in this regard, whether the standard self-report instruments of induced emotions (Eerola and Vuoskoski, 2013) tap into the experiential level or whether that experiential level is inaccessible by such methods, although it may be partially accessible by introspection and verbalization. To address this question, a distinction should be made between the recognition of emotions and the emotions as felt. The former can be considered as a "cognitive-discrete" process which is reducible to categorical assessments of the affective qualia of sounds; the latter calls forth a continuous experience which entails a conception of "music-as-felt" rather than a disembodied approach to musical meaning (Schubert, 2013). Though the distinction has already received some attention, there is still need for a conceptual and theoretical framework that brings together current knowledge on perceived and induced emotions in a coherent way. Ways of handling time and experience in music and emotion research have not been neglected up to now (Jones, 1976; Jones and Boltz, 1989), with a significant number of continuous rating studies (Schubert, 2001, 2004), but the study of time has not been the real strength of this research. It can be argued, therefore, that time is not merely an empty perception of duration. It should be considered, on the contrary, as one of the contributing dimensions in the study of emotions in their dynamic form. It calls forth the role of affective semantics, a term coined by Molino (2000), which aims at describing the meaning of something not in terms of abstract and emotionally neutral cognitive representations, but in a way that is dependent mainly on the integration of emotions (Brown et al., 2004; Menon and Levitin, 2005; Panksepp, 2009-2010). Musical semantics, accordingly, is in search not only of the lexico-semantic but also of the experiential dimension of meaning, which, in turn, is related to the affective one. Affective semantics, as applied to music, should be able to recognize the emotional meanings which particular sound patterns are trying to convey. It calls forth a continuous rather than a discrete processing of the sounds in order to catch the expressive qualities that vary and change in a dynamic way. Emotional expressions, in fact, are not homogeneous over time, and many of music's most expressive qualities relate to structural changes over time, somewhat analogous to the concept of prosodic contours which is found in vocal expressions (Banse and Scherer, 1996; Scherer, 2003; Belin et al., 2008; Hawk et al., 2009; Sauter et al., 2010; Lima et al., 2013). The strongest arguments for the introduction of affective semantics in music emotion research come from the developmental perspective (Trainor and Schmidt, 2003): caregivers around the world sing to infants in an infant-directed singing style, using both lullaby and playsong styles, which is probably used in order to express emotional information and to regulate the infant's state. This style, also known as motherese, is distinct from other types of singing, and young infants are very responsive to it.
Additional empirical grounding, moreover, comes from primate vocalizations, which have been coined referential emotive vocalizations (Frayer and Nicolay, 2000) and separation calls (Newman, 2007). Embracing a body of calls that serve as a direct emotive response to some object or event in the environment, they exhibit a dual acoustic nature in having both a referential and an emotive meaning (Briefer, 2012). It is arguable, further, that the affective impact of music could be traced back to similar grounds, being generated by the modulation of sound, with a close connection between primitive emotional dynamics and the essential dynamics of music, both of which appear to be biologically grounded as innate release mechanisms that generate instinctual emotional actions (Burkhardt, 2005; Panksepp, 2009-2010; Coutinho and Cangelosi, 2011). Along with the evolved appreciation of temporal progressions (Clynes and Walker, 1986), they can generate, relive, and communicate emotion intensity, helping to explain why some emotional cues are so easily rendered and recognized through music. This can be seen in the rare cases where music expressing particular emotions has been exposed to listeners from distinct cultures, at least concerning basic or primary emotions, such as happy, sad, and angry (Balkwill and Thompson, 1999; Fritz et al., 2009). The case seems to be more complicated, however, with regard to secondary or aesthetic emotions such as, e.g., spirituality and longing (Laukka et al., 2013). As such, there is more to music than the recognition of discrete elements and the way they are related to each other. As important is a description of "music-as-felt," somewhat analogous to the distinction which has been made between the vehicle and the acoustic mode of sense-making (Frayer and Nicolay, 2000). The latter refers to particular sound patterns being able to convey emotional meanings by relying on the immediate, online emotive aspect of sound perception and production and deals with the emotive interpretation of musical sound patterns; the vehicle mode, on the other hand, involves referential meaning, somewhat analogous to the lexico-semantic dimension of language, with arbitrary sound patterns as vehicles to convey symbolic meaning. It refers to the off-line, referential form of sound perception and production, which is a representational mode of dealing with music that results from the influence of the human linguistic capacity on music cognition and which reduces meaning to the perception of "disembodied elements" that are dealt with in a propositional way. The online form of sound perception, the acoustic mode, is somewhat related to Clynes' concept of sentic modulation (Clynes, 1977), a general modulatory system that is involved in conveying and perceiving the intensity of emotive expression by means of three graded spectra of tempo modulation, amplitude modulation, and register selection, somewhat analogous to the well-known rules of prosody. In addition, there is also timbre as a separate category (Menon et al., 2002; Eerola, 2011), which represents three major dimensions of sounds, namely the temporal (attack time), spectral (spectral energy distribution) and spectro-temporal (spectral flux) dimensions (Eerola et al., 2012, p. 49). The very idea of sentic modulation has been taken up in recent studies about emotional expression that is conveyed by non-verbal vocal expressions.
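The three timbre dimensions just named can each be approximated with standard signal descriptors. The sketch below, again assuming librosa and a hypothetical file name, computes the spectral centroid (spectral energy distribution), a spectral-flux-like novelty curve (spectro-temporal change), and a crude rise-time estimate of the attack; the 10%-90% rise-time heuristic is a simplification introduced here for illustration, not a descriptor taken from the cited studies.

```python
# Sketch: approximating the spectral, spectro-temporal and temporal
# timbre dimensions for a hypothetical excerpt "tone.wav".
import numpy as np
import librosa

y, sr = librosa.load("tone.wav", sr=None)
S = np.abs(librosa.stft(y))

# Spectral dimension: centroid of the spectral energy distribution
centroid = librosa.feature.spectral_centroid(S=S, sr=sr)[0]

# Spectro-temporal dimension: frame-to-frame spectral change (flux-like)
flux = librosa.onset.onset_strength(S=librosa.amplitude_to_db(S), sr=sr)

# Temporal dimension: crude attack time as the 10%-90% rise of the RMS envelope
rms = librosa.feature.rms(S=S)[0]
times = librosa.times_like(rms, sr=sr)
i10 = int(np.argmax(rms >= 0.1 * rms.max()))
i90 = int(np.argmax(rms >= 0.9 * rms.max()))

print(f"mean centroid {centroid.mean():.0f} Hz, mean flux {flux.mean():.2f}, "
      f"attack ~{times[i90] - times[i10]:.3f} s")
```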
Examples are the modifications of prosody during expressive speech and non-linguistic vocalizations such as breathing sounds, crying, hums, grunts, laughter, shrieks, and sighs (Juslin and Laukka, 2003; Scherer, 2003; Thompson and Balkwill, 2006; Bryant and Barrett, 2008; Pell et al., 2009; Bryant, 2013) as well as non-verbal affect vocalizations (Bradley and Lang, 2000; Belin et al., 2008; Redondo et al., 2008; and Reybrouck and Podlipniak, submitted, for an overview). Starting from the observation that the body usually responds physically to an emotion, it can be claimed that physiological responses act as a trigger for appropriate actions, with the motor and visceral systems acting as typical manifestations, but other modalities are possible as well. As such, the concept of sentic modulation can be related to Niedenthal's embodied approach to multimodal processing, surpassing the muscles and the viscera in order to focus on modality-specific systems in the brain (perception, action and introspection) that are fast, refined and flexible. They can even be reactivated without their output being observable in overt behavior, with embodiment referring both to actual bodily states and to simulations of the modality-specific systems in the brain (Niedenthal et al., 2005; Niedenthal, 2007). The musical-emotional experience, further, has received much impetus from theoretical contributions and empirical research (Eerola and Vuoskoski, 2013). Impinging upon the body and its physiological correlates, it calls forth an embodied approach to musical emotions which goes beyond the standard cognitivist approach. The latter, based on appraisal, representation and rule-based or information-processing models of cognition, offers rather limited insights into what a musical-emotional experience entails (Schiavio et al., 2016; see also Scherer, 2004 for a critical discussion). Alternative embodied/enactive models of mind, such as the "4E" model of cognition (embodied, embedded, enactive, and extended; see Menary, 2010), have challenged this approach by emphasizing meaning-making as an ongoing process of dynamic interactivity between an organism and its environment (Barrett, 2011; Maiese, 2011; Hutto and Myin, 2013). Relying on the basic concept of "enactivism" as a cross-disciplinary perspective on human cognition that integrates insights from phenomenology and philosophy of mind, cognitive neuroscience, theoretical biology, and developmental and social psychology (Varela et al., 1991; Thompson, 2007; Stewart et al., 2010), enactive models understand cognition as embodied and perceptually guided activity that is constituted by circular interactions between an organism and its environment. Through continuous sensorimotor loops (defined by real-time perception/action cycles), the living organism, including the music listener/performer, enacts or brings forth his/her own domain of meaning (Reybrouck, 2005; Thompson, 2005; Colombetti and Thompson, 2008) without separation between the cognitive states of the organism, its physiology, and the environment in which it is embedded. Cognition and mind, in this view, originate in a continuous interplay between an organism and its environment as an evolving dynamic system (Hurley, 1998).
INDUCTION OF EMOTIONS: PSYCHOBIOLOGICAL CLAIMS
Music may be considered as something that catches us and that induces several reactions beyond conscious control. As such, it calls forth a deeper affective domain to which cognition is subservient, and which makes the brain such a receptive vessel for the emotional power of music (Panksepp and Bernatzky, 2002). The auditory system, in fact, evolved phylogenetically from the vestibular system, which contains a substantial number of acoustically responsive fibers (Koelsch, 2014). It is sensitive to sounds and vibrations, especially those of loud sounds with low frequencies or with sudden onsets, and projects to the reticular formation and the parabrachial nucleus, which is a convergence site for vestibular, visceral and autonomic processing. As such, subcortical processing of sounds gives rise not only to auditory sensations but also to muscular and autonomic responses. It has been shown, moreover, that intense hedonic experiences of sound and pleasurable aesthetic responses to music are reflected in the listeners' autonomic and central nervous systems, as evidenced by objective measurements with polygraph, EEG, PET or fMRI (Brattico et al., 2009-2010). Though these measures do not always differentiate between specific emotions, they indicate that the reward system can be heavily activated by music (Blood and Zatorre, 2001; Salimpoor et al., 2015). But other brain structures can be activated as well, more particularly those that are crucially involved in emotion, such as the amygdala, the nucleus accumbens, the hypothalamus, the hippocampus, the insula, the cingulate cortex and the orbitofrontal cortex (Koelsch, 2014). Emotional reactions to music, further, activate the same cortical, subcortical and autonomic circuits which are considered the essential survival circuits of biological organisms in general (Blood and Zatorre, 2001; Trainor and Schmidt, 2003; Salimpoor et al., 2015). The subcortical processing affects the body through the basic mechanisms of chemical release into the blood and the spread of neural activation. The latter, especially, invites listeners to react bodily to music with a whole range of autonomic reactions such as changes in heart rate, respiration rate, blood flow, skin conductance, brain activation patterns, and hormone release (oxytocin, testosterone), all driven by the phylogenetically older parts of the nervous system (Ellis and Thayer, 2010). These reactions can be considered the "physiological correlates" of listening (see Levenson, 2003, for a general review), but the question remains whether such measures provide sufficiently detailed information to distinguish musically induced physiological reactions from mere physiological reactions to emotional stimuli in general (Lundqvist et al., 2009).
Recent physiological studies have shown that pieces of music that express different emotions may actually produce distinct physiological reactions in listeners (see Juslin and Laukka, 2004 for a critical review). It has also been shown that performers are able to communicate at least five emotions (happiness, anger, sadness, fear, tenderness), with the proviso that this communication operates in terms of broader emotional categories rather than the finer distinctions which are possible within these categories (Juslin and Laukka, 2003). Precision of communication, however, is not a primary criterion by which listeners value music, and reliability is often compromised for the sake of other musical characteristics. Physiological measures may thus be important, but establishing clear-cut and consistent relationships between emotions and their physiological correlates remains difficult, though some studies have had some success in the case of a few basic emotions (Juslin and Laukka, 2004; Lundqvist et al., 2009). Music thus has inductive power. It engenders physiological responses, which are triggered by the central nervous system and which are proportional to the way the information has been received, analyzed and interpreted through instinctive, emotional pathways that are ultimately concerned with maintaining an internal environment that ensures survival (Schneck and Berger, 2010). Such a dynamically equilibrated and delicately balanced internal milieu (homeostasis), together with the physiological processes which maintain it, relies on finely tuned control mechanisms that keep the body operating as closely as possible to predetermined baseline physiological quantities or reference set-points (blood pressure, pulse rate, breathing rate, body temperature, blood sugar level, pH, fluid balance, etc.). Sensory stimulation of all kinds can change and disturb this equilibrium and invite the organism to adapt these basic reference points, mostly after persisting and continuous disturbances that act as environmental or driving forces to which the organism must adapt. There are, however, also short-term immediate reactions to the music as a driving force, as evidenced by neurobiological and psychobiological research that revolves around the central axiom of psychobiological equivalence between percepts, experience and thought (Reybrouck, 2013). This axiom addresses the central question of whether there is some lawfulness in the coordination between sounding stimuli and the responses of music listeners in general. A lot of empirical support has been collected from studies of psychophysical dimensions of music as well as physiological reactions that have been shown to be their correlates (Peretz, 2001, 2006; Scherer and Zentner, 2001; Menon and Levitin, 2005; van der Zwaag et al., 2011). Psychophysical dimensions, as considered in a musical context, can be defined as any property of sound that can be perceived independently of musical experience, knowledge, or enculturation, such as, e.g., speed of pulse or tempo. A distinction should be made, however, between the psychophysics of perception and the psychobiology of the bodily reactions to the sounds. The psychophysical features suggest a reliable correlation between acoustic signals and their perceptual processing, with a special emphasis on the study of how individual features of music contribute to its emotional expression, embracing psychoacoustic features such as loudness, roughness and timbre.
The psychobiological claims, on the other hand, are still the subject of ongoing research. Some of them can be subsumed under the sensations of peak experience, flow and shivers or chills (Panksepp and Bernatzky, 2002; Grewe et al., 2007; Harrison and Loui, 2014) as evidence for particularly strong emotional experiences with music (Gabrielsson and Lindström, 2003; Gabrielsson, 2010). Such intensely pleasurable experiences are straightforward to record behaviorally and have the additional advantage of producing characteristic physiological markers, including changes in heart rate, respiration amplitude, and skin conductance (e.g., Blood and Zatorre, 2001; Sachs et al., 2016). They are associated mainly with changes in the autonomic nervous system and with metabolic activity in cerebral regions, such as the ventral striatum, amygdala, insula, and midbrain, usually devoted to motivation, emotion, arousal, and reward (Blood and Zatorre, 2001). Their association with subcortical structures also indicates their possible association with ancestral behavioral patterns of the prehistoric individual, making them relevant for the evaluation of the evolutionary hypothesis on the origin of the aesthetic experience of music (Brattico et al., 2009-2010). Such peak experiences, however, are rather rare and should not be taken as the main starting point for a generic comparative perspective on musical emotions. Some broader vitality effects, such as those exemplified in the relations between personal feelings and the dynamics of infants' movements and the sympathetic responses by their caregivers in a kind of mutual attunement (Stern, 1985, 1999; see also Malloch and Trevarthen, 2009), as well as the creation of tensions and expectancies, may also engender some music-specific emotional reactions. The general assumption, then, is that musically evoked reactions emerge from "presemantic acoustic dynamics" that evolved in ancient times, but that still interact with the intrinsic emotional systems of our brains (Panksepp, 1995, p. 172).

AN INTEGRATED FRAMEWORK OF MUSIC EMOTIONS AND THEIR UNDERLYING MECHANISMS
What are these presemantic acoustic dynamics? Here we should make a distinction between the structural features of the music which induce emotions and their underlying mechanisms. As to the first, musical cues such as mode, followed by tempo, register, dynamics, articulation, and timbre, seem to be important, at least in Western music. Increases in perceived complexity, moreover, have also been shown to evoke arousal (Balkwill and Thompson, 1999). Being grounded in the dispositional machinery of individual music users, these features may function as universal cues for the emotional evaluation of auditory stimuli in general. Much more research, however, is needed in order to trace their underlying mechanisms. A major attempt has already been made by Juslin and Västfjäll (2008) and Liljeström et al. (2013), who present a framework that embraces eight basic mechanisms (brain stem reflexes, rhythmic entrainment, evaluative conditioning, emotional contagion, visual imagery, episodic memory, musical expectancy and aesthetic judgment), commonly referred to as BRECVEMA. In addition to these mechanisms, an integrated framework has also been proposed by Eerola (2017), with low-level measurable properties being capable of producing highly different higher-level conceptual interpretations (see Figure 2).
Its underlying machinery is best described in dimensional terms (core affects such as valence and arousal), but conscious interpretations can be superposed on them, allowing a categorical approach that relies on higher-level conceptual categories as well. As such, the model can be considered a hybrid model that builds on these existing emotion models and attempts to clarify the levels of explanations of emotions and the typical measures related to these layers of explanations. Although this is a simplification of a complex process, the purpose is to emphasize the disparate conceptual issues brought under focus at each different level, which is a notion put forward in the past (e.g., Leventhal and Scherer, 1987). The types of measures of emotions alluded to in the model are not merely alternative instruments but profoundly different ontological stances, which capture biologically reductionist (all physiological responses), psychological (all behavioral responses including self-reports) and phenomenological (various experiential measures including narratives and metaphors) perspectives. The dimensional perspective on emotions has already fostered a long program of research with objectless dimensions such as pleasure-displeasure (pleasure or valence) and activation-deactivation (arousal or energy). Their combination, called core affect, can be considered as a first primitive that is involved in most psychological events and makes them "hot" or emotional. Involving a pre-conceptual process, a neurophysiological state, core affect is accessible to consciousness as a simple non-reflective feeling, e.g., feeling good or bad, feeling lethargic or energized. Perception of the affective quality is the second primitive. It is a "cold" process which is made hot by being combined with a change in core affect (Russell, 2003, 2009). The dimensional approach has been challenged to some extent. Eerola's hybrid model (Eerola, 2017) assigns three explanatory levels of affects, starting from low-level sensed emotions (core affect), proceeding over perceived or recognized emotions (basic emotions), and ending with experienced and felt emotions (high-level complex emotions). It takes as the lowest level core affect, as a neurophysiological state which is accessible to consciousness as a simple primitive non-reflective feeling (Russell and Barrett, 1999). It reflects the idea that affects arise from the core of the body and from neural representations of the body state. The next higher level organizes emotions by conceiving of them in terms of discrete categories such as fear, anger, disgust, sadness, and surprise (Matsumoto and Ekman, 2009; see Sander, 2013 for a discussion of the number and labels of the categories). Both levels have furthered an abundance of theoretical and empirical research with a focus on the development of emotion taxonomies, which all offer distinct ways to tackle musical emotions. The dimensional and basic emotions models, however, seem to overlap considerably, and this holds true especially for artworks and objects in nature, which are not always explained in terms of dimensions or discrete patterns of emotions that are involved in everyday survival (Sander, 2013). As such, there is also a level beyond core affects and the perception of basic emotions which is not reducible to mere reactions to the environment, and that encompasses complex emotions that are more contemplative, reflected and nuanced, somewhat analogous to other complex emotions such as moral, social and epistemic ones (see below).
While such a hybrid model may reconcile some of the discrepancies in the field, its main contribution is to make us aware of how the conceptual level of emotions under focus lends itself to different mechanisms, emotion labels and useful measures. The shortcoming of the model is that it gives the impression of offering a way to reduce complex, aesthetic emotions into simpler basic emotions, and the latter into underlying core affects. While some such trajectories could be traced from the lowest to the highest level (i.e., measurement of core affects via psychophysiology, recognition of the emotions expressed, and reflection on what kind of experience the whole process induces in the perceiver), it is fundamentally not a symmetrical and reversible process. One cannot reduce the experience of longing (a complex, aesthetic emotion) to the recognition of a combination of basic emotions, nor predict the exact core affects related to such an emotional experience. At best, one level may modulate the processes taking place in the lower levels (as depicted with the downward arrows in Figure 2). The extent of such top-down influence has not received sufficient attention to date, although top-down information such as extramusical information has been demonstrated to impact music-induced emotions (Vuoskoski and Eerola, 2015). Such top-down effects on perception are, however, well known in the perceptual literature (Rahman and Sommer, 2008) and provide evidence against a strictly modular framework. Despite this shortcoming, the hybrid model does organize the range of processes in a functional manner.

EMOTIONS MODULATED BY AESTHETIC EXPERIENCE
In what preceded, we have emphasized the bottom-up approach to musically induced emotions, taking as a starting point that affective experience may reflect an evolutionarily primitive form of consciousness above which more complex layers of consciousness can emerge (Panksepp, 2005). Many higher neural systems are in fact involved in the various distinct aspects of experiencing and recognizing musical emotions, but a great deal of the emotional power may be generated by lower subcortical regions where basic affective states are organized (Panksepp, 1998; Damasio, 1999; Panksepp and Bernatzky, 2002). This lower-level processing, however, can be modified to some extent by other variables such as repeated encounters with the stimulus (going from mere exposure, over habituation, to sensitization), co-occurrence with other stimuli (classical and evaluative conditioning) and varying internal states such as, e.g., motivation (Moors, 2007, p. 1241). A real aesthetic experience of music, moreover, can be defined as an experience "in which the individual immerses herself in the music, dedicating her attention to perceptual, cognitive, and affective interpretation based on the formal properties of the perceptual experience" (Brattico and Pearce, 2013, p. 49). This means that several mechanisms may be used for the processing, elicitation, and experience of emotions (Storbeck and Clore, 2007). Musical sense-making, in this view, has to be broadened from a merely cognitive to a more encompassing approach that includes affective semantics and embodied cognition. What really counts in this regard is the difficult relationship between emotion and cognition (Panksepp, 2009-2010). Cognition, regarded in a narrow account, is contrasted mainly with emotion, and cognitive output is defined as information that is not related to emotion.
It is coined "cold" as contrasted with "hot" affective information processing (Eder et al., 2007). Recent neuroanatomic studies, however, seem to increasingly challenge the idea of specialized brain structures for cognition versus emotion (Storbeck and Clore, 2007), and there is also no easy separation between cognitive and emotional components insofar as the functions of these areas are concerned (Ishizu and Zeki, 2014). Some popular ideas about cognition and emotion such as affective independence, affective primacy and affective automaticity have been questioned accordingly (Storbeck and Clore, 2007, pp. 1225-1226: the affective independence hypothesis states that emotion is processed independently of cognition via a subcortical low route; affective primacy claims precedence of affective and evaluative processing over semantic processing, and affective automaticity states that affective processes are triggered automatically by affectively potent stimuli commandeering attention. A more recent view, however, is the suggestion that affect modifies and regulates cognitive processing rather than being processed independently. Affect, in this view, probably does not proceed independently of cognition, nor does it precede cognition in time. (Storbeck and Clore, 2007, pp. 1225-1226. As such, there is some kind of overlap between music-evoked complex and/or "aesthetic emotions" and so-called "everyday emotions" (Koelsch, 2014). Examples of the latter are anger, disgust, fear, enjoyment, sadness, and surprise (see Matsumoto and Ekman, 2009). They are mainly reducible to the basic emotions-also called "primary, " "discrete" or "fundamental" emotions-which have been elaborated in several taxonomies. Examples of the former are wonder, nostalgia, transcendence (see Zentner et al., 2008;Trost et al., 2012;Taruffi and Koelsch, 2014). They are typically elicited when people engage with artworks (including music) and objects or scenes in nature (Robinson, 2009;see Sander, 2013 for an overview) and can be related to "epistemic emotions" such as interest, confusion, surprise or awe (de Sousa, 2008) though the latter have not yet been the focus of much research in affective neuroscience. As explained in the hybrid model (Eerola, 2017), however, they tend to be rare, less stable and more reliant on the various other factors related to meaning-generation in music (Vuoskoski and Eerola, 2012). Related topics, such as novelty processing, have been investigated extensively-with a key role for the function of the amygdala-as well as the role emotions, which are not directed at knowing, can have for epistemic consequences. Fear, for instance, can lead to an increase in vigilance and attention with better knowledge of the situation in order to evaluate the possibilities for escape (Sander, 2013). The everyday/aesthetic dichotomy, further, is related also to the distinction between utilitarian and aesthetic emotions . The latter occur in situations that do not trigger self-interest or goal-directed action and reflect a multiplicative function of structural features of the music, listener features, performer features and contextual features leading to distinct kinds of emotion such as wonder, transcendence, entrainment, tension and awe . It is possible, however, to combine aesthetic and nonaesthetic emotions when asked to describe retrospectively felt and expressed musical emotions. 
As such, nine factors have been described, commonly known as the Geneva Emotional Music Scale or GEMS (see Zentner et al., 2008), namely wonder, transcendence, tenderness, nostalgia, peacefulness, power, joy, tension and sadness. Awe, nostalgia, and enjoyment, among the aesthetic emotions, have attracted the most detailed research, with aesthetic awe being crucial in distinguishing a peak aesthetic experience of music from everyday casual listening (Gabrielsson, 2010; Brattico and Pearce, 2013, p. 51), although studies that induce a range of emotions in laboratory conditions may fail to arouse special emotions such as awe, wonder and transcendence.

CONCLUSION AND PERSPECTIVES: NATURE MEETS NURTURE
In this paper, we explored the evolutionary groundings of music-induced emotions. Starting from a definition of emotions as adaptive processes, we tried to show that music-induced emotions reflect ancient brain functions. The inductive power of such functions, however, can be expanded or even overruled to some extent by the evolutionarily younger regions of the brain. The issue of whether an emotional modulation of sensory input is "top-down" and dependent upon input from "higher" areas of the brain, or whether it is "bottom-up," or both, is as yet an unresolved question (Ishizu and Zeki, 2014). Affect and cognition, in fact, have long been treated as independent domains, but current evidence seems to suggest that both are in fact highly interdependent (Storbeck and Clore, 2007). Although we may never know with certainty "the evolutionary and cultural transitions that led from our acoustic-emotional sensibilities to an appreciation of music," it may be suspected that the role of subcortical systems in the way we are affected by music has been greatly underestimated (Panksepp and Bernatzky, 2002, p. 151). Music establishes affective resonances within the brain, and it is within an understanding of the ingrained emotional processes of the mammalian brain that the essential answers to these questions will be found, which could imply that affective sounds are related to primitive reactions with adaptive power and that music somehow capitalizes on these reactive mechanisms. In this view, early affective processing, as relevant in early infancy and prehistory, should reflect the way the emotions make our bodies feel, which in turn reflects on the emotions expressed and decoded. Music-induced emotions, moreover, have recently received considerable impetus from neurobiological and psychobiological research. The full workings of the proposed induction mechanisms, however, are not yet totally clear. Emotional processing holds a hybrid position: it is the place where nature meets nurture, with emotive meaning relying both on pre-programmed reactivity that is based on wired-in circuitry for perceptual information pickup (nature) and on culturally established mechanisms for information processing and sense-making (nurture). It makes sense, therefore, to look for mechanisms that underlie the inductive power of music and to relate them to evolutionary claims and a possible adaptive function of music. Especially important here is the distinction between the acoustic and the vehicle mode of listening and the related distinction between the on-line and off-line mode of listening. Much more research, however, is needed in order to investigate the relationship between music-specific or aesthetic emotions and everyday or utilitarian emotions (Reybrouck and Brattico, 2015).
The latter are triggered by the need to adapt to specific situations that are of central significance to the individual's interests and well-being; the former are triggered in situations that usually have no obvious material effect on the individual's well-being. Rather than relying on categorical models of emotion that blur the boundaries between aesthetic and utilitarian emotions, we should take care to also reflect the nuanced range of emotive states that music can induce. As such, there should be a dynamic tension between the "nature" and the "nurture" side of music processing, stressing the role of the musical experience proper. Music, in fact, is a sounding and temporal phenomenon which has inductive power. The latter involves ongoing epistemic interactions with the sounds, which rely on low-level sensory processing as well as on principles of cognitive mediation. The former obviously refers to the nature side, the latter to the nurture side of music processing. Cognitive processing, however, should also take into account the full richness of the sensory experience. What we argue for, therefore, is a reliance on the nature side again, which ends up, finally, in what may be called a "nature-nurture-nature cycle" of musical sense-making, starting with low-level processing, passing through cognitive mediation, and revaluing the sensory experience as well (Reybrouck, 2008).

AUTHOR CONTRIBUTIONS
The first draft of this paper was written by MR. The final elaboration was written jointly by MR and TE.
2017-05-17T20:18:17.180Z
2017-04-04T00:00:00.000
{ "year": 2017, "sha1": "d64ce13cbcb15283fbaee1c8c6edea8b65cf0294", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2017.00494/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d64ce13cbcb15283fbaee1c8c6edea8b65cf0294", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
13094469
pes2o/s2orc
v3-fos-license
Photometric and H$\alpha$ observations of LSI+61303: detection of a $\sim$26 day V and JHK band modulation

We present new optical and infrared photometric observations and high resolution Hα spectra of the periodic radio star LSI+61°303. The optical photometric data set covers the time interval 1985-1993 and amounts to about a hundred nights. A period of ∼26 days is found in the V band. The infrared data also present evidence for a similar periodicity, but with a higher amplitude of variation (0.2 mag). The spectroscopic observations include 16 intermediate and high dispersion spectra of LSI+61°303 collected between January 1989 and February 1993. The Hα emission line profile and its variations are analyzed. Several emission line parameters -- among them the Hα EW and the width of the Hα red hump -- change strongly at or close to radio maximum, and may exhibit periodic variability. We also observe a significant change in the peak separation. The Hα profile of LSI+61°303 does not seem peculiar for a Be star. However, several of the observed variations of the Hα profile can probably be associated with the presence of the compact, secondary star.

Introduction

The early-type star LSI+61°303 (V615 Cas) is the optical counterpart of the variable radio source GT 0236+610, discovered during a galactic plane radio survey (Gregory & Taylor, 1978). Taylor & Gregory (1982) found that this object exhibits strong radio outbursts with a 26.5 d period. Further observations (Taylor & Gregory, 1984) established the currently accepted value of 26.496±0.008 d. Typically, radio outbursts peak around phases 0.6-0.8 (Paredes et al., 1990). The spectroscopic radial velocity observations of Hutchings & Crampton (1981), hereafter HC81, are in agreement with the radio period, and give support to the presence of a companion. In addition, they also conclude that the optical spectrum corresponds to a rapidly rotating B0 V star, with an equatorial disk and mass loss. All the radio data available to date on the outburst peak flux density provide evidence for a strong modulation, over a time scale of 4 yr, in the amplitude of the 26.5 d periodic radio outbursts (Gregory et al., 1989; Paredes et al., 1990; Estalella et al., 1993). The dependence of radio outburst flux density on frequency, the peak time delay, and the general shape of the radio light curves can be modeled as continuous relativistic particle injection into an adiabatically expanding synchrotron emitting source (Paredes et al., 1991). Furthermore, recent VLBI observations have provided the first high resolution map of LSI+61°303 showing a double sub-arcsec structure (Massi et al., 1993). The physical parameters derived from these VLBI observations and those of Taylor et al. (1992) are in agreement with this model. The system was detected as an X-ray source by Bignami et al. (1981) and has also been proposed (Gregory & Taylor, 1978; Perotti et al., 1980) to be the radio counterpart of the COS B γ-ray source CG135+01 (Hermsen et al., 1977). However, this last association is still doubtful due to the large γ-ray error box. Paredes & Figueras (1986), based on UBVRI photometric observations, detected optical variability roughly correlated with the radio light curve. The amplitude was about 0.1 mag. A model based on deformations of the primary star by a compact companion in an eccentric system was initially applied by Paredes (1987) to explain it.
Optical variability with time scales of days has also been reported by Lipunova (1988) who, in addition, found short term nightly fluctuations of some hundredths of a magnitude. These short time fluctuations were first observed by Bartolini et al. (1983). More recently, Mendelson & Mazeh (1989) reported an optical modulation with amplitude similar to that found by Paredes & Figueras (1986) and with a period of 26.62 ± 0.09 d, near to the radio value, in the Johnson I band. However, their data set was not sufficient to show clearly a similar periodicity at shorter wavelengths. The photometric results presented in this paper confirm that a ∼26 d periodicity is actually present in the V band. In addition, the general shape of the visual light curve is very similar to that observed by Mendelson & Mazeh (1989) in the I band. In the JHK near infrared bands, we find clear evidence that a similar periodic modulation is also present, with an amplitude of ∼0.2 mag. On the other hand, intermediate resolution spectroscopic data of LSI+61°303 suitable for an analysis of the Hα emission line profile are available in the literature, at the time of writing, only from the early papers of Gregory et al. (1979) and HC81. These papers outlined the variations of the Hα profile, but were far from reaching any firm conclusion regarding the mechanism responsible for such variations. In this work, we present 16 spectra (9 of them of high resolution, 0.2-0.44 Å FWHM) obtained with linear detectors. The new spectra allow a more detailed description of the Hα line profile and of its variation. We confirm many of the early findings of Gregory et al. (1979) and of HC81 concerning line shifts and Hα EW variations. Moreover, we find that other intriguing changes in the Hα line profile (notably the width of the red hump) occur at or close to radio outburst.

Photometric observations and results

The Johnson photometric observations were made at Calar Alto (Almería, Spain), with the 1.23 m telescope of the Centro Astronómico Hispano-Alemán (CAHA) and the 1.52 m telescope of the Observatorio Astronómico Nacional (OAN), and at the Observatorio del Roque de los Muchachos (ORM, La Palma, Spain), using the 1 m Jacobus Kapteyn telescope (JKT). They cover the period 1985-1993 and amount to one hundred independent photometric measurements. Both Calar Alto telescopes are equipped with a one channel photometer with a dry-ice cooled RCA 31034 photomultiplier. The JKT observations were made using the People's photometer, with two channels, which is equipped with EMI 9658AM photomultipliers. The differential photometry was performed using SAO 12319 (V=8.79 and I=7.76), SAO 12327 (V=8.15 and I=6.90) and BD+60°493 (V=8.41 and I=7.04) as comparison stars. The majority of measurements were obtained in the Johnson V filter, although some simultaneous I filter observations were also taken and will be reported here. Differences of magnitude between the comparison stars themselves are constant within 0.02 mag. Further details of the observing technique are reported in Paredes & Figueras (1986). The infrared observations were made at the Teide Observatory (Tenerife, Spain), using the 1.5 m Carlos Sánchez telescope (TCS) equipped with the continuously variable filter (CVF) photometer. The data were corrected for atmospheric extinction and flux-calibrated by comparison with an adequate sample of standard stars. The results of our Johnson photometric observations are given in Table 1.
The first column indicates the Julian date; the second and third columns are, respectively, the Johnson V and I band magnitudes of LSI+61°303. A similar format has been used for the infrared observations, whose results are presented in Table 2. Figure 1 shows, on the same scale, the full optical and infrared data set folded with the radio period of 26.496 d.

V Photometric periodicity

In order to confirm independently the periodic optical modulation reported by Mendelson & Mazeh (1989), a period analysis was carried out over this entire data set, amounting to 105 photometric V measurements over the time interval 1985-1993. The period analysis of the data was performed by using the phase dispersion minimization (PDM) technique (Stellingwerf, 1978). This method consists of assuming a trial period and then constructing a phase diagram. The phase interval is divided into bins and the variance of the data points is computed in each bin. The weighted mean of the variances is divided by the total variance of the data. It can be shown that local minima of this function correspond to periods present in the data or to multiples of such periods. Considering that our minimum sampling rate is about one day, our period search was made up to a frequency of 0.5 c d^-1. The result of the PDM analysis of the V band data is shown in Fig. 2. The most significant minimum in the explored frequency range (from 0.01 to 0.5 c d^-1) occurs at 0.0387 c d^-1; the uncertainty that we associate with this frequency is the frequency resolution of the complete data set, given by ∼1/T, where T is the total length of the data span, and is equal to 0.0004 c d^-1. This corresponds to a period of 25.8±0.3 d. From this analysis, it appears evident that a modulation with a period of ∼26 d is actually present in the Johnson V photometry of LSI+61°303. This period is similar to that of 26.62 d found by Mendelson & Mazeh (1989) in the Johnson I band and to the 26.496 d radio periodicity (Taylor & Gregory, 1984). In Fig. 3 we have plotted our 105 photometric V points folded on the 26.496 d radio period and binned into 10 bins. Error bars indicate the formal estimate of the uncertainty of the mean within each bin. For clarity, the data are plotted twice. From this figure we note the similar shape with respect to the optical light curves presented by Mendelson & Mazeh (1989). In particular, the presence of a broad brightness maximum near radio active phases 0.5-0.9, and a clear minimum around phase 0.3, can be appreciated. Once the existence of an optical modulation with a period near 26 d has been established in a way independent of Mendelson & Mazeh (1989), it is worth carrying out an analysis of all the long term photometric data available today for LSI+61°303. In this way, we have searched for periodicities in the range from 20 to 30 d in the ensemble consisting of both our data and the photometric points published by Bartolini et al. (1983), Lipunova (1988) and Mendelson & Mazeh (1989), amounting to 204 nights. Using a frequency step of 2 × 10^-7 c d^-1 and a bin structure (5,2), the deepest PDM minimum corresponds to 26.5±0.2 d, although it is not very prominent. This period value is coincident with the radio period.
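For concreteness, the PDM statistic just described can be written down in a few lines of Python. This is an illustrative reconstruction of Stellingwerf's method as summarised above, not the code used by the authors; the function names and the simple uniform binning are our own choices.

import numpy as np

def pdm_theta(times, mags, period, n_bins=10):
    # Fold the light curve on the trial period and bin the phases.
    phases = np.mod(times / period, 1.0)
    total_var = np.var(mags, ddof=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    num = 0.0
    dof = 0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = mags[(phases >= lo) & (phases < hi)]
        if in_bin.size > 1:
            num += (in_bin.size - 1) * np.var(in_bin, ddof=1)
            dof += in_bin.size - 1
    # Weighted mean of the bin variances over the total variance;
    # local minima over trial periods flag candidate periodicities.
    return (num / dof) / total_var

def pdm_scan(times, mags, f_min=0.01, f_max=0.5, n_freq=5000):
    # Scan trial frequencies up to 0.5 c/d, the limit set by the
    # ~1 d minimum sampling rate of the data.
    freqs = np.linspace(f_min, f_max, n_freq)
    thetas = np.array([pdm_theta(times, mags, 1.0 / f) for f in freqs])
    return freqs, thetas

On the V data described above, freqs[np.argmin(thetas)] would then pick out the deepest minimum, here ∼0.0387 c d^-1, i.e. a 25.8 d period.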
I Photometry

Our set of Johnson I band observations, listed in Table 1, amounts to 43 nights only. With this small data set, it is not possible to carry out a feasible periodicity search. In Fig. 1 we show the I band observations folded with the 26.496 d radio period (Taylor & Gregory, 1982). [Fig. 1 caption: From top to bottom, the V, I, J, H, and K bands are plotted; the dots at phase 0.7 are from Elias et al. (1985); all observations are plotted twice. Fig. 3 caption: Error bars indicate the formal estimate of the uncertainty of the mean within each bin; the continuous line is plotted for visual aid; all data are plotted twice.] The available data cover the phase interval 0.2-0.9. However, this partial light curve presents the same trends as that of the V band observations. In particular, a maximum near the central radio phases and a low emission level near phase 0.2 are clearly seen.

JHK bands photometric periodicity

Our JHK photometric observations are plotted in Fig. 1 as a function of radio phase. We have also included two points (dots) at phase ∼0.7 observed by Elias et al. (1985). The infrared data available have a good coverage over the full radio period, and indicate that the infrared light curves of LSI+61°303 also present a modulation similar to that of the V and I bands. However, the infrared high emission state (orbital phases ∼0.6-0.9) is broader than in the optical, while the minimum emission state (orbital phases ∼0.2-0.4) is deeper and narrower. For a single infrared band, the amount of data accumulated by us, 18 nights, could not suffice to carry out a significant period search. In order to overcome this problem, we have merged all the JHK photometric points after subtracting their respective mean and dividing by the corresponding r.m.s. dispersion in each filter. This process is roughly equivalent to having a very broad bandpass filter, about 1 µm wide. The two points observed by Elias et al. (1985) have also been included. This provides a data set of relative normalized infrared magnitudes, with 60 measurements, over which we have carried out a PDM period analysis. The minimum sampling rate is about one day, so the PDM search was carried out up to a frequency of 0.5 c d^-1. The result of the PDM analysis of the merged JHK band data is shown in Fig. 4. The most prominent minimum occurs at 0.0370 ± 0.0003 c d^-1, corresponding to a period of 27.0 ± 0.3 d. Another nearby deep minimum, with comparable significance, is found at 0.0376 ± 0.0003 c d^-1, corresponding to a period of 26.6 ± 0.3 d. This implies that an infrared modulation, with a period similar to the radio period, is also present in LSI+61°303. The 60 normalized points used are plotted in Fig. 5 as a function of radio phase, computed using the radio period value of 26.496 d.
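The band-merging scheme just described (per-band mean subtraction and r.m.s. normalisation before concatenation) is equally compact. The following sketch, with hypothetical variable names, produces a merged light curve that can be fed directly to a PDM scan such as the one above.

import numpy as np

def merge_normalised(band_times, band_mags):
    # band_times/band_mags: dicts keyed by band name ('J', 'H', 'K'),
    # holding Julian dates and magnitudes for each filter.
    t_all, m_all = [], []
    for band in band_mags:
        t = np.asarray(band_times[band], dtype=float)
        m = np.asarray(band_mags[band], dtype=float)
        # Subtract the band mean and divide by the r.m.s. dispersion,
        # roughly emulating a single very broad (~1 micron) bandpass.
        t_all.append(t)
        m_all.append((m - m.mean()) / m.std(ddof=1))
    return np.concatenate(t_all), np.concatenate(m_all)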
Photometric discussion

The JHK infrared light curves of Fig. 1, showing a deep minimum at phase ∼0.3 and a rather flat maximum centered around phase ∼0.8, are reminiscent of light curves from eclipsing variables. From a rotation velocity value of v sin i ≃ 360±25 km s^-1, HC81 suggest that the orbital inclination of LSI+61°303 is close to 90°. This makes the eclipse possibility a rather reasonable interpretation. In addition, the existence of an IR excess in LSI+61°303 has been reported by D'Amico et al. (1987) and Elias et al. (1985). This is, however, a rather common situation in Be stars, where the IR excess at micron wavelengths is attributed to a dense circumstellar envelope (Slettebak, 1979). This envelope can also be partially responsible for the Hα emission. Due to the presence of this envelope, its free-free and free-bound opacity is also very likely to absorb the infrared radiation from any orbiting companion, thus strongly influencing the observed light curve beyond a simple geometrical eclipse. A theoretical modelling of the JHK light curves, based on this eclipse-attenuation scenario, could yield a determination of the system orbital parameters (Martí & Paredes, 1994). On the other hand, when considering the optical light curves, one sees that the minimum is wider and lasts for about half an orbital cycle (see Fig. 3). Then, as suggested by Mendelson & Mazeh (1989), the eclipse explanation at these wavelengths is more difficult to accept. However, due to the approximate frequency dependence τν ∝ ν^-3, the optical free-free and free-bound extinction is actually very low. So, any attenuation effects will be difficult to appreciate in the optical band, meaning that optical variability should be accounted for by a different physical mechanism, probably involving X-ray heating of parts of the normal star facing the X-ray source.

Spectroscopic observations and results

Sixteen spectra covering the Hα spectral range were collected during several observing runs from January 1989 to February 1993. Table 3 reports the dates of observation and basic information on the instrumental setup. The first column indicates the spectrum identification, the second and third the observatory and telescope used; the fourth, fifth and sixth contain the date, UT time and Julian day of observation, respectively. The seventh column is the radio phase. Finally, the eighth and ninth columns indicate the dispersion and covered spectral range. All spectra were recorded employing CCD detectors. They were bias subtracted and flat field corrected using the IRAF or FIGARO package, with the exception of the Rozhen spectra, which have been reduced using the pcIPS software package (Smirnov et al., 1992). Only the spectrum obtained on Dec. 27, 1990 (LP1, see Table 3) at La Palma was flux calibrated. In Fig. 6 we show our normalized Hα record of LSI+61°303 ordered sequentially with radio phase and drawn on the same scale with an arbitrary offset. For days with multiple measurements, only the best profile is shown. The continuum underlying Hα was rectified to unity employing a spline fitting. This normalization procedure is somewhat arbitrary for the Rozhen Observatory spectra, since the spectral range covered is very small, and since most of the Hα wings are probably lost in noise. The Hα line profile of LSI+61°303 shows broad wings as well as a double peaked core. The Full Width at Zero Intensity (FWZI) of Hα measured on the Mt. Palomar spectra is ≈ 3100 km s^-1. In addition, in the Asiago spectra, the red hump is nearly flat topped and shows a broad shoulder to the red. Line parameters were measured on each normalized spectrum and are reported in Table 4. The first column gives the spectrum identification. The second, third and fourth columns list the heliocentric radial velocity of the blue peak, central dip, and red peak, respectively. The fifth and sixth columns are the FWHM of the blue and red hump, corrected for the instrumental profile, while the seventh column provides the ratio between the blue and red peak intensity above the continuum. The eighth column gives the total Hα EW and the ninth column lists the EW of the wings. Finally, the tenth column contains the EW ratio of the red and blue humps. The radial velocity, the FWHM and the peak height of the B and R humps were measured employing a Gaussian fitting. In the high resolution spectra, the fitting was done after rebinning to a dispersion of ∼1 Å/pixel. We estimated the contribution of the wings by subtracting from the Hα profile a model of the core composed of two Gaussians.
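The measurements described above can be emulated with a standard least-squares fit. The following sketch is our reconstruction, with assumed parameter names (the text does not specify the analysis beyond the use of Gaussian fitting): it fits a two-Gaussian core to a continuum-normalised profile and attributes the residual emission to the wings.

import numpy as np
from scipy.optimize import curve_fit

def two_gaussian_core(wav, a_b, mu_b, sig_b, a_r, mu_r, sig_r):
    # Blue and red emission humps above a continuum rectified to unity.
    return (a_b * np.exp(-0.5 * ((wav - mu_b) / sig_b) ** 2)
            + a_r * np.exp(-0.5 * ((wav - mu_r) / sig_r) ** 2))

def analyse_profile(wav, flux, p0):
    # wav: wavelength grid [A]; flux: continuum-normalised flux;
    # p0: initial guesses (amplitude, centre, sigma) for both humps.
    excess = flux - 1.0                      # emission above the continuum
    popt, _ = curve_fit(two_gaussian_core, wav, excess, p0=p0)
    core = two_gaussian_core(wav, *popt)
    ew_core = np.trapz(core, wav)            # EW of the two-Gaussian core
    ew_wings = np.trapz(excess - core, wav)  # residual attributed to wings
    fwhm_b = 2.3548 * popt[2]                # FWHM = 2*sqrt(2 ln 2)*sigma
    fwhm_r = 2.3548 * popt[5]
    return ew_core, ew_wings, fwhm_b, fwhm_r

Quantities such as the B/R peak ratio or the wing asymmetry index then follow directly from popt and from integrating the blue and red halves of the residual separately.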
Radial velocities

In Fig. 7, we show the radial velocity difference between the R and B peaks as well as the central dip radial velocity, both as a function of radio phase. The velocity difference between the Hα peaks reaches a maximum close to the radio outburst, during an interval of two tenths of radio phase. On the other hand, Gregory et al. (1979) already noted that the Balmer central dip varied within the range −30/−80 km s^-1 over the period February-March 1978. Our data confirm this variation of radial velocity. We also find that the central dip has vr > 0 km s^-1, but only in two low resolution spectra. A global shift of the Hα line is probably ≲ 60 km s^-1, if we neglect the three outlier points that come from low resolution, low significance observations. A weighted average of the radial velocities over the inverse square of the dispersion yields vr(B) ≈ −208 km s^-1, vr(dip) ≈ −60 km s^-1. The variation in the peak separation may be due to an increase in vr of the red peak and a decrease in vr of the blue peak.

Line width and B/R variability

In Fig. 8 we represent the FWHM(R) and FWHM(B) as a function of radio phase, while Fig. 9 illustrates the B/R peak ratio and EW(Hα), also as a function of radio phase. The Hα profile of LSI+61°303 is not peculiar with respect to those observed in other Be stars (we can, for instance, compare LSI+61°303 to the Be stars studied by Slettebak et al. (1992)). The EW(Hα) of LSI+61°303 is ≲ 20 Å. This value sets LSI+61°303 among the Hα-weak Be stars. However, the value of the Hα FWZI is, to the best of our knowledge, among the largest values observed in Be stars. Gregory et al. (1979) described LSI+61°303 as having Balmer line emission of variable profile intensity and decrement. Later, HC81 showed that the radial velocity of the central dip, the emission intensity, and the peak ratio were all phase related quantities. The Hα line profile did not change in its main features over 14 years. The stability of the Hα profile is undoubtedly remarkable, since Be stars sometimes exhibit strong Hα profile changes over timescales of several months. Our data confirm that the Hα EW changes strongly in correspondence with the radio outburst. The minimum value of EW (6 Å) and the maximum value of the peak ratio B/R are observed between radio phases 0.7-0.8 (see Fig. 9). As can be seen in Fig. 8, the FWHM(R) increases from a value of ∼6 Å (∼250 km s^-1) at radio phase ∼0.5, to a value of ∼8 Å (∼350 km s^-1) at radio phase ∼0.7. So, the red hump seems to become substantially broader near the time of radio maximum. On the contrary, the FWHM(B) appears to decrease slightly at the same time. In the Asiago spectra (obtained close to the radio maximum), the FWHM of the red hump is visibly larger, by ≈ 100 km s^-1, than that of the blue one. The exact values of the line width depend somewhat on the method employed for the measurement. To ascertain that this effect is real we also measured directly the half width of the two humps. Although the numbers are somewhat different, the same effect is evident in Fig. 8. Also, we have computed the flux ratio of the blue and red humps, which happens to be ≲ 1 at all epochs of observation.
This implies that the change in peak intensity ratio is mainly due to a variation in the width of the red hump. The flux ratio between line core and line wings is relatively constant in all spectra (∼ 0.15) but LP3. The Hα profile in LP3 displays a prominent blue wing, with EW(wings)/EW(core) ∼ 0.26, and an asymmetry index of the line wings AI ∼ −0.37, where AI is defined as [EW(Red Wing) − EW(Blue Wing)]/EW(Both Wings). In the other low resolution observations, the Hα line wings are symmetric within the uncertainties.

Spectroscopic discussion

The most widely accepted explanation of the Hα line profiles in Be stars involves a circumstellar disk-like envelope that produces the double-peaked line core. Electron scattered Hα photons are expected to produce extended line wings. The FWZI of the LSI+61°303 Hα line is ≈ 3100 km s^-1 ≫ 2v sin i ∼ 780-960 km s^-1 (HC81). The excess in the line wings is especially evident if we model the core as a sum of two Gaussians (FWZI(core) ≈ 1000 km s^-1). If we assume that the line core is emitted in a disk, and that the velocity field in the disk is Keplerian, we can estimate the ratio between the inner and outer radius of the disk. We obtain Rout/Rin ≈ 9.2, for FWZI(core) ≈ 1000 km s^-1. The electron scattering optical depth τes for a disk-like geometry can also be computed. The density was assumed to depend upon r as ne = ne,0 (r/Rin)^-α, and to fade exponentially above and below the symmetry plane of the disk. For ne,0 = 10^12 cm^-3, and for α = 2, we find that τes ≳ 0.3, and, if the density ne,0 is ≳ 10^12 cm^-3, τes ∼ 1. If τes is so large, and if the temperature of the gas is Te ≳ 10^4 K, as is likely, extended line wings of FWZI ∼ 3000 km s^-1 can be produced (Poeckert & Marlborough, 1979).
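The quoted numbers can be checked with a short calculation. We stress that the velocity-based reading of the radius ratio below (the core FWZI tracing the inner edge, the peak separation tracing the outer edge, with an assumed separation of ∼330 km s^-1) is our interpretation, as the text does not spell out the prescription; the optical depth integral, by contrast, follows directly from the stated density law and the radii quoted in the surrounding text.

import numpy as np

SIGMA_T = 6.652e-25                 # Thomson cross-section [cm^2]

# Keplerian disk: v(r) ~ r**-0.5, hence Rout/Rin = (v_in/v_out)**2.
fwzi_core = 1000.0                  # FWZI of the two-Gaussian core [km/s]
dv_peaks = 330.0                    # assumed peak separation [km/s]
r_ratio = (fwzi_core / dv_peaks) ** 2
print(f"Rout/Rin ~ {r_ratio:.1f}")  # ~9.2

# Radial electron-scattering depth for ne = ne0*(r/Rin)**-2 (alpha = 2):
ne0 = 1e12                          # [cm^-3], value quoted in the text
r_out = 5e12                        # [cm], value quoted in the text
r_in = r_out / r_ratio
r = np.linspace(r_in, r_out, 10000)
tau_es = SIGMA_T * np.trapz(ne0 * (r / r_in) ** -2.0, r)
print(f"tau_es ~ {tau_es:.2f}")     # ~0.3, consistent with tau_es >~ 0.3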
The circumstellar disk around the B star should have Rout ∼ 5 × 10^12 cm, a value similar to the length of the semimajor axis of the orbit estimated by HC81. If the eccentricity is ≈ 0.75, the secondary star should cross the circumstellar disk and, if the circumstellar disk itself is nearly coplanar with the plane of the orbit, even sweep across it while close to periastron. The resultant accretion could produce the radio outburst. A similar model has been proposed for the Be star/X-ray binary systems A0538−66 and V0332+53 (e.g., Slettebak, 1988). The increase in width of the red hump could be due to a nonaxisymmetric perturbation in the circumstellar disk, occurring close to the outer edge of the disk. For instance, the secondary may be crossing the circumstellar shell at that time, close to its outer radius (it is interesting to note that the blue hump is probably wider than the red one at radio phase ≈ 0.4). However, the orbital solution of HC81 suggests that the broadening occurs when the star is close to apoastron, where the secondary is probably not in contact with the circumstellar disk of the Be star, if the radius of the circumstellar disk is Rout ≈ 5 × 10^12 cm. Alternatively, most of the Hα emission could be associated with the compact secondary. An accretion disk around a ∼1 M⊙ compact object would have Rout ∼ 1 × 10^12 cm, without considering broad wings produced by electron scattering. The strong decrease in the Hα EW at radio maximum can be explained either in terms of obscuration of the disk by the Be star (Mendelson & Mazeh, 1989), or in terms of reduced emissivity in the disk. The JHK light curves point toward an eclipsing binary. Hence, the plane of the orbit should be close to the line of sight. Since the secondary probably has a mass of only 1/6 or less that of the primary, we would then also expect a large radial velocity oscillation in the peak positions (∼ 300 km s^-1). Our radial velocity data are not consistent with this, nor are the data obtained by the previous investigators. Less clear is the interpretation of the variation of the wings; we think that a set of homogeneous observations of sufficiently high S/N and resolution is needed to finally reject the possibility that the line wings might be emitted by the accretion disk of the secondary. Even if the accretion disk around the compact star does not emit the bulk of the Hα luminosity, the increase in FWHM(R) and in ∆vr = vr(R) − vr(B) may be caused by an unresolved component whose radial velocity reaches a maximum in correspondence with the radio maximum. We may expect this if, for instance, a cloud of line emitting gas were ballistically ejected along the secondary disk axis. This suggestion is appealing, since the radial velocity of this unseen component should be ≳ 200 km s^-1 at radio maximum, consistent with the velocity of the bipolar ejecta of radio plasma partially resolved with VLBI techniques by Massi et al. (1993). A strong analogy could be envisaged with the model proposed by Martin & Rees (1979) for SS433. The present data on radial velocity and FWHM variation can be explained by a combination of a global line displacement due to the orbital motion of the Be star and the vr variation of this unresolved component. The red shoulder often present in the Hα profile could be a related, higher velocity, feature.

Conclusions

Our Johnson photometric monitoring of LSI+61°303 has shown that this object presents a ∼26 d periodic modulation in the V band. The shape and amplitude of this modulation are similar to those found by Mendelson & Mazeh (1989). In addition, the J, H and K observations reported in this paper have revealed, for the first time, new evidence of infrared variability with trends similar to those seen in the optical, but with higher amplitude (0.2 mag). We have also established that the merged JHK light curve exhibits a modulation with a period similar to the radio period. A possible interpretation of this periodicity could involve the eclipse and attenuation of the secondary star emission by the Be primary and its envelope. It is unclear whether an accretion disk around a compact, degenerate companion may contribute to the Hα emission, or whether the variations observed are due to perturbations produced by the companion on the circumstellar disk of the Be primary. This problem persists also because Be stars, as a class, are far from being well understood. LSI+61°303 clearly deserves more observations from the ground and from space. Monitoring of the Hα profile at high and intermediate resolution will help to solve the main ambiguities left by the present investigation. It is also desirable to obtain a new spectroscopic orbital solution.
2014-10-01T00:00:00.000Z
1994-02-07T00:00:00.000
{ "year": 1994, "sha1": "738761b120ea75e96b365737a653f146750215cc", "oa_license": null, "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/44CE3370B672D513E6C667AABBC80601/S0074180900214885a.pdf/div-class-title-photometric-and-h-observations-of-lsi-61-303-div.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "de7788a8fc922e052c001a24aef6c94cd25b21c2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
258097285
pes2o/s2orc
v3-fos-license
Intensive care unit presentation of central corneal descemetocele secondary to Methicillin-Resistant staphylococcal infection superimposed by a hypopyon: a case report

Introduction: A descemetocele is a rare type of keratopathy that occurs when an intact Descemet's membrane of the eye undergoes a herniation through an overlying stroma. Previous literature has documented corneal damage via bacterial enzymes, especially of Pseudomonas and Neisseria species. Recent prospective interventional studies have described the treatment of these infections. Case presentation: This report presents the first instance of a methicillin-resistant Staphylococcus aureus descemetocele, presenting in a 51-year-old African American male with co-presenting hypopyon sequelae, successfully managed conservatively in an intensive care unit setting. Clinical discussion: An instance of a methicillin-resistant Staphylococcus aureus descemetocele has not yet been documented in the literature. Likewise, a co-presentation with a hypopyon, which is a formation of inflammatory debris rich in white blood cells, has not been studied. Conclusion: The presence of a hypopyon in instances of bacterial descemetocele herniation should be further evaluated to see if there are associations with conservative, nonsurgical intervention outcomes.

Introduction

Keratopathy was reported to occur in 6-57% of ICU patients around the world [1,2]. A particularly rare complication is that of a descemetocele, which occurs when an intact Descemet membrane undergoes anterior herniation through an overlying stroma. This is an ophthalmologic emergency due to the risk of perforation [3]. Due to the rarity of this condition, there is limited literature available. Etiologies range from trauma, iatrogenic and immune-related causes to neurotrophic keratitis and microbial keratitis [3]. While there is no established consensus for the management of descemetocele, much of the current literature recommends surgical intervention rather than conservative management [3][4][5][6]. Bacterial cases reported in the literature have been those of Pseudomonas and Neisseria species [3,7,8]. A hypopyon is an inflammatory condition of the eye that involves a sediment of white blood cells in the anterior chamber. The classic white exudate-rich fluid, usually accompanied by associated conjunctival redness, is constituted from white blood cells secondary to inflammation of the iris and uvea [9]. The current practice is not to drain an accumulated hypopyon due to risks of synechiae and closed-angle glaucoma [10,11]. We present the case of a descemetocele caused by Methicillin-Resistant Staphylococcus aureus (MRSA) and a co-existent hypopyon, neither of which has been previously described. Of note, the work has been reported in line with the CARE criteria [12]. A signed written informed consent was obtained from the patient pertaining to the release of protected health information and photographs prior to the writing of this case report.

Case presentation

A 51-year-old African American male was seen at an outpatient clinic for ocular pruritus of the right eye and treated with fortified cefazolin and tobramycin ophthalmic solutions. However, the patient's symptoms worsened, prompting his presentation to the emergency department. On physical exam, the patient's pupils were round and equally reactive to light. The intraocular pressure was measured to be 12 and 10 mmHg in the right and left eyes, respectively. Slit lamp examination revealed 3+ conjunctival injection on the right (Fig. 1).
The right cornea demonstrated a 2 mm central descemetocele surrounded by a 6 mm central corneal infiltrate. Corneal folds were present in the right eye. Additionally, a hypopyon was noted in the anterior chamber and occupied 30% of the right eye. The iris of the right eye was round and regular without neovascularization. Pseudophakia was present in both eyes. A dilated fundus exam was difficult to obtain due to the corneal ulcer and corneal folds and was not performed. There was no evidence of vitritis. Examination of the left eye showed scleral injection but was otherwise unremarkable (Fig. 2). The patient was diagnosed with a central corneal descemetocele of the right eye and was subsequently admitted to the ICU for hourly eye exams. Laboratories at the time of admission demonstrated an elevated white blood cell count of 10.64 k/mm^3 (reference range 3.10-10.20 k/mm^3). The eye culture swab showed growth of MRSA resistant to penicillin, oxacillin, clindamycin, and linezolid. The patient received hourly antibiotic administration with fortified cefazolin, tobramycin, and fortified vancomycin ophthalmic drops. Furthermore, he was treated with systemic intravenous vancomycin for potential systemic source control and treatment of the hypopyon for a total of 9 days. After 13 days of treatment, the localized swelling around the right eye improved and the leukocytosis resolved (Fig. 3). His need for ophthalmic solutions decreased from every hour to every 3 h. The patient was subsequently downgraded from the ICU.

Discussion

The pathologic process of bacterial descemetocele involves damage to the cornea secondary to the proteolytic activity of bacterial enzymes and other toxins [13,14]. The first study that elucidated the causes and management of descemetoceles was published in 1984 in the journal Transactions of the American Ophthalmological Society [15]. The most recent prospective interventional study, published in 2022 by Shankar et al. [16], included a total of 24 patients and discussed the importance of fortified antibiotic administration and the possible need for surgical intervention. Should conservative therapy fail, a surgical intervention may be required, in which an amniotic membrane transplant and blepharorrhaphy are performed. Bacterial keratitis descemetoceles have been described with Pseudomonas and Neisseria species infections [7,8]. In one case report regarding a descemetocele secondary to a Neisseria species, the patient failed medical management with ophthalmic antibiotic therapy and required surgical intervention via a deep anterior lamellar keratoplasty. Despite the intervention, the patient's condition worsened and required lamellar anterior keratoplasty. Our experience of a descemetocele secondary to MRSA infection with hypopyon followed a clinical course similar to those of the other microbial descemetocele presentations previously described. However, this case differs from previous reports in the newly reported infection from MRSA, in not requiring surgical intervention, and in the presence of a hypopyon. By definition, a hypopyon is a formation of inflammatory debris rich in white blood cells, and it may have a clinical correlate in the clearing of bacterial infection in instances of a descemetocele. It prevents progression of anterior Descemet membrane herniation through an overlying stroma. Although the risks of hypopyon-related complications (e.g.
acute angle glaucoma due to limited space in the anterior chamber of the eye) should be weighed, the presence of a hypopyon in instances of bacterial descemetocele herniation should be further evaluated to see if there are associated better outcomes. This work has been reported in line with the Surgical CAse REport (SCARE) 2020 Criteria [12].

Ethical approval

Ethical approval was permitted by the hospital Institutional Review Board.

Consent

Authors received permission from the patient pertaining to the release of protected health information, photographs, and video prior to the writing of this case study.

Sources of funding

No external or internal funding influenced the conduct of this study.

Author contribution

V.P.Z. was involved in concept design, writing the paper, and literature assessment; K.M.T. assisted in writing the case presentation; G.N. assisted in writing the introduction; K.R. assisted in writing the discussion; J.K. is the principal investigator and the guarantor physician for this study.

Conflict of interest disclosure

Authors have no conflicts of interest to disclose with this case report.

Research registration unique identifying number (UIN)
2023-04-13T15:30:59.757Z
2023-04-06T00:00:00.000
{ "year": 2023, "sha1": "46f8c4cc3a2aa8307f167a0d8a7052bc469232ce", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1097/ms9.0000000000000327", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "12f91521f514556d1568101d7bcec85b94efab19", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
53356425
pes2o/s2orc
v3-fos-license
Rapid numerical solutions for the Mukhanov-Sasaki equation

We develop a novel technique for numerically computing the primordial power spectra of comoving curvature perturbations. By finding suitable analytic approximations for different regions of the mode equations and stitching them together, we reduce the solution of a differential equation to repeated matrix multiplication. This results in a wavenumber-dependent increase in speed which is orders of magnitude faster than traditional approaches at intermediate and large wavenumbers. We demonstrate the method's efficacy on the challenging case of a stepped quadratic potential with kinetic dominance initial conditions. We further generalise to a novel class of frozen initial conditions which prove capable of emulating a quantised primordial power spectrum.

I. INTRODUCTION

With only six parameters, the ΛCDM concordance model of the Universe explains the large-scale structure, present state, and evolution of the cosmos to high precision [1]. Two of these parameters phenomenologically describe the amplitude A_s and tilt n_s of the primordial power spectrum of comoving curvature perturbations. The detection of n_s ≠ 1, along with correlated acoustic oscillations in the temperature and polarisation of the cosmic microwave background (CMB) anisotropies [2], constitutes overwhelming evidence for a rapid early accelerated phase. The canonical method for explaining this evolution is the theory of inflation, in which primordial quantum fields drive the accelerated expansion. Models of inflation make predictions about the primordial power spectrum. These predictions may be tested against observations of the CMB, allowing us to probe the physics of this hypothesised embryonic stage of the universe [3,4]. Traditional analyses manifest these predictions in terms of A_s and n_s conditioned on inflationary model parameters. In many cases, such a brutal phenomenological parameterisation is insufficient. These cases include models that explain large scale features in CMB power spectra [5][6][7][8], axion monodromy models [9], just enough inflation [10] or kinetic dominance [11][12][13]. Occasionally one may have access to analytic expressions or approximations, but in general one must solve the Mukhanov-Sasaki mode equations numerically in order to compute primordial power spectra. In many instances of cosmological interest the numerical computation of primordial power spectra forms the primary bottleneck in a full numerical inference. There have been several attempts at tackling this problem [14][15][16] using a variety of analytical and numerical approaches with varying degrees of generality and efficiency. In this paper we present a novel and general approach to solving the Mukhanov-Sasaki equation.
In all cases this approach gives a wavenumber-dependent increase in speed, and for large wavenumbers it is the only currently available method for computing fully numerical solutions that integrate across the entire evolution of the mode equations. The format of this paper is as follows: In Sec. II we summarise relevant background theory and establish notation. In Sec. III, we describe the general approach. In Sec. IV we apply our technique to the challenging case of a stepped potential with kinetic dominance initial conditions and compare with traditional solving techniques. We conclude in Sec. V.

II. BACKGROUND

The simplest inflationary model is provided by a single scalar field minimally coupled to gravity with action

S = ∫ d^4x √(-g) [ (1/2)R - (1/2)∂_µφ ∂^µφ - V(φ) ],   (1)

where φ is a scalar inflaton field, V(φ) its potential, and we are working in natural units. For a flat Friedmann-Lemaître-Robertson-Walker universe filled with a spatially uniform field φ, extremising this action recovers the Klein-Gordon and Friedmann equations

φ̈ + 3Hφ̇ + V'(φ) = 0,   (2)

H² = (1/3) [ (1/2)φ̇² + V(φ) ],   (3)

where H = ȧ/a is the Hubble parameter, and dots and primes denote derivatives with respect to cosmic time t and the field φ respectively. For nearly all potentials, the background solutions to Eqs. (2) and (3) admit self-consistent slow-roll initial conditions [17][18][19] via the constraint

φ̇ = -V'(φ)/3H,   (4)

allowing a small amount of evolution to remove transient effects resulting from an initial small offset from the true attractor. Alternatively, one may set initial conditions in a kinetically dominated phase. Whilst late-time inflationary evolution is characterised by a slow-moving inflaton, classically at early times the opposite, φ̇² ≫ V(φ), is generally true [11]. In this state, the comoving horizon grows until the inflaton is sufficiently slowed by the friction term in Eq. (2). A brief transitional fast-roll period φ̇² ∼ V(φ) is reached before the field settles into the usual slow-roll phase. Kinetic dominance initial conditions may be set via

φ(t) = φ_p - √(2/3) ln t,   a(t) ∝ t^{1/3},   (5)

where φ_p is a constant of integration. Perturbing the action in Eq. (1) around the zeroth-order homogeneous solutions and taking scalar components yields the gauge-invariant Mukhanov action [20]

S = (1/2) ∫ dt d³x z² [ Ṙ² - a^{-2}(∂R)² ],   z ≡ aφ̇/H,   (6)

where R is the gauge-invariant comoving curvature perturbation. Varying this action and expressing R in terms of its isotropic Fourier components R_k, we obtain the Mukhanov-Sasaki (MS) equation

R̈_k + 2(ż/z)Ṙ_k + (k²/a²)R_k = 0.   (7)

As shown in Fig. 1, solutions to these equations are oscillatory within the horizon (k ≫ aH) and freeze out upon horizon exit. For tensor perturbations, the equivalent equation for both polarisations is

ḧ_k + 3Hḣ_k + (k²/a²)h_k = 0.   (8)

In the slow-roll paradigm, initial conditions for the perturbations are typically set using Bunch-Davies initial conditions, by matching, deep within the horizon,

R_k = 1/(z√(2k)),   Ṙ_k = -(ż/z + ik/a) R_k.   (9)

In the fast-roll and kinetically dominated paradigms, this condition can never be fulfilled for small k. In these cases, the situation becomes less clear-cut, although alternative initial conditions have been proposed [21][22][23]. Once initial conditions have been chosen, to compute primordial power spectra one must evolve all modes of interest until they are well outside the horizon and evaluate

P_R(k) = (k³/2π²) |R_k|².

For large values of k many oscillations must be traversed in order to reach horizon exit, causing standard numerical solvers to fail. This may be somewhat ameliorated in the slow-roll case by starting the evolution a small amount before horizon exit, exploiting the fact that for slow-roll, Bunch-Davies conditions may be set anywhere within the horizon [24][25][26].
However, such short-cuts can be harder to apply for certain potentials, particularly ones which yield spectra with moderately high-k features. In this paper, we choose the stepped quadratic potential [8] as a challenging but relevant example,

V(φ) = (1/2) m²φ² [ 1 + A tanh((φ - φ_0)/∆) ],   (10)

where m is the mass of the inflaton field and A, φ_0, and ∆ are the amplitude, location and width of a step feature. Such step features induce oscillations in the primordial power spectrum, which could be responsible for the low-ℓ features seen in CMB power spectra [5][6][7][8].

III. METHODOLOGY

We now review our new approach for evolving the Mukhanov-Sasaki mode Eqs. (7) and (8), first in a general context in Secs. III A and III B, and then in application to primordial cosmology in Sec. III C. Contaldi et al. [27] construct an analytic template for the scalar primordial power spectrum with fast-roll initial conditions. They find exact solutions to Eq. (7) in both the kinetic dominance and slow-roll limits and match them together assuming an instantaneous transition between kinetic dominance and slow-roll. This produces an expression in terms of Bessel functions which recovers the key features of the primordial power spectra with cut-offs and oscillations. In our approach, we increase the accuracy of the Contaldi et al. [27] method by adding further transitions, allowing for an approximately continuous matching of fast roll to slow roll, and for the reconstruction of features within slow-roll inflation.

A. The transition approach for an oscillator

For a general linear second order differential equation of the form

p(t)ü + q(t)u̇ + r(t)u = 0,   (11)

one can find suitable dependent variable transformations that cast the differential equation into the form of a harmonic oscillator. Defining

x(t) = u(t) exp[ (1/2) ∫ q(t)/p(t) dt ],   (12)

Eq. (11) may be cast as

ẍ + ω²(t) x = 0,   (13)

where

ω²(t) = r/p - (1/4)(q/p)² - (1/2) d(q/p)/dt.   (14)

A condition for the functionality of our method is that the integral ∫ q(t)/p(t) dt has an analytic expression or is numerically cheap to calculate, and that ω²(t) is a reasonably well-behaved function. There is also a freedom in choosing the independent variable, which slightly modifies the form of Eq. (12) and ω²(t). Thus, the Mukhanov-Sasaki equations may always be cast into the form of a harmonic oscillator. The form of ω²(t) is a priori analytically unknown, and typically derived from inflationary background variables which are themselves numerical solutions of their own separate differential equation, as is demonstrated later in Eqs. (25) and (26). However, one may approximate the true frequency as a piecewise interpolation function. Since ω²(t) may be negative and can span many decades in scale, we choose a semi-log interpolation function defined on n intervals {[t_0, t_1), ..., [t_{n-1}, t_n)}. For each interval one chooses either a linear, positive exponential or negative exponential parameterisation,

ω²(t) = { ω̃²(t),  +exp[ω̂²(t)],  -exp[ω̂²(t)] },   (15)

with

ω̃²(t) = ω²(t_i) + [ω²(t_{i+1}) - ω²(t_i)] (t - t_i)/(t_{i+1} - t_i)   (16)

for the linear segments and

ω̂²(t) = ln|ω²(t_i)| + [ln|ω²(t_{i+1})| - ln|ω²(t_i)|] (t - t_i)/(t_{i+1} - t_i)   (17)

for the exponential segments. The choice of linear, positive or negative exponential segments is subject to the constraint that ω²(t) must be purely positive or negative for the exponential regions. The critical insight in this approach is that when ω²(t) takes one of the three forms in Eq. (15), exact analytic solutions can be found in terms of Airy and Bessel functions,

x(t) = C_1 Ai(-ω̃²(t)/β^{2/3}) + C_2 Bi(-ω̃²(t)/β^{2/3})   (linear, with β the slope of ω̃²),

x(t) = C_1 J_0(2ω(t)/β̂) + C_2 Y_0(2ω(t)/β̂)   (positive exponential, with β̂ the slope of ω̂²),

x(t) = C_1 I_0(2|ω(t)|/β̂) + C_2 K_0(2|ω(t)|/β̂)   (negative exponential),

where the C_i are constants of integration. The full evolved solutions can be found by matching the value and first derivative of the solution at each transition boundary using matrix multiplication. First, define the matrices

M^j(t) = [ x_1^j(t)   x_2^j(t) ; ẋ_1^j(t)   ẋ_2^j(t) ],

where the superscript j ∈ {∼, -, +} indicates the transition type as linear, negative exponential, and positive exponential respectively, and (x_1^j, x_2^j) denote the corresponding pair of independent Airy or Bessel solutions above. The evolved solution from t_0 to t_n can now be expressed in the compact form

( x(t_n), ẋ(t_n) )^T = U ( x(t_0), ẋ(t_0) )^T,   U = M^{j_{n-1}}(t_n) [M^{j_{n-1}}(t_{n-1})]^{-1} ··· M^{j_0}(t_1) [M^{j_0}(t_0)]^{-1},

where the superscript j_i ∈ {∼, +, -} denotes the type of transition for the interval [t_i, t_{i+1}). The matrix U can be thought of as a linear evolution operator which acts on a state at time t_0 to evolve it to t_n.
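To make the matrix mechanics concrete, the sketch below propagates a state across linear segments using scipy's Airy functions (airy returns Ai, Ai', Bi, Bi' in one call). This is our own minimal Python reconstruction for the linear case only, not the authors' C++ implementation [41]; exponential segments would use analogous matrices built from the Bessel solutions above.

import numpy as np
from scipy.special import airy

def linear_step(state, t0, t1, w2_0, w2_1):
    # Propagate state = (x, xdot) of x'' + w2(t) x = 0 across [t0, t1],
    # with w2 interpolated linearly between its endpoint values.
    a = w2_0
    b = (w2_1 - w2_0) / (t1 - t0)
    dt = t1 - t0
    if abs(b) * dt < 1e-12 * (abs(a) + 1e-300):   # effectively constant w2
        if abs(a) < 1e-300:
            M = np.array([[1.0, dt], [0.0, 1.0]])     # free-particle limit
        else:
            w = np.sqrt(complex(a))                   # handles w2 < 0 too
            M = np.real_if_close(np.array(
                [[np.cos(w * dt), np.sin(w * dt) / w],
                 [-w * np.sin(w * dt), np.cos(w * dt)]]))
        return M @ state
    cbrt_b = np.cbrt(b)

    def basis(t):
        # s = -w2(t)/b**(2/3) maps the interval onto the Airy equation
        # x_ss = s x; rows hold (value, time-derivative), ds/dt = -b**(1/3).
        s = -(a + b * (t - t0)) / cbrt_b ** 2
        Ai, Aip, Bi, Bip = airy(s)
        return np.array([[Ai, Bi], [-cbrt_b * Aip, -cbrt_b * Bip]])

    M = basis(t1) @ np.linalg.inv(basis(t0))   # one 2x2 transfer matrix
    return M @ state

def evolve(state, grid, w2_vals):
    # Chain the per-interval transfer matrices, i.e. apply the operator U.
    for i in range(len(grid) - 1):
        state = linear_step(state, grid[i], grid[i + 1],
                            w2_vals[i], w2_vals[i + 1])
    return state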
B. Interval choice

The above argument in Sec. III A was conditioned on a specific definition of intervals and interval types defining a semi-logarithmic interpolation of ω²(t). In the limit of arbitrarily fine intervals, this approach recovers the exact solution. However, in order to minimise computational time, one should choose a coarser distribution of intervals with not necessarily constant width. In this section we outline one possible approach for making such a choice. One may approximate a local error in the solution x across each interval [t_i, t_{i+1}) by computing solutions at either end, and then repeating the calculation across two adjacent and matched intervals [t_i, t_m), [t_m, t_{i+1}), where t_m = (t_i + t_{i+1})/2 is the midpoint of the original interval. The difference in these two approaches gives a rough quantification of the local error accumulated from t_i to t_{i+1}. If the error is greater than some user-specified tolerance, then the interval is bisected, and the above process is repeated on each of the two segments. For our application, x is in general complex, and we quantify the local error as a relative error between the absolute values of the two alternative solutions. To choose initial segments which are then refined by the above procedure, we select t_0, ..., t_n to be the endpoints of our region of interest, along with the locations of extrema of ω²(t). Including extrema ensures that no sharp features are missed. The interpolation type for each transition is selected to give the lowest error in x.
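A minimal version of this error control, reusing linear_step from the previous sketch, might look as follows; this is again our reconstruction, with an assumed tolerance semantics of relative error in |x|.

import numpy as np

def refine_and_evolve(t_nodes, w2_of_t, state0, tol=1e-4, max_depth=20):
    # Walk the initial node list; for each interval compare one step
    # against two half-steps and bisect until the |x| discrepancy is
    # below tol (or a maximum refinement depth is reached).
    min_dt = (t_nodes[-1] - t_nodes[0]) / 2.0 ** max_depth
    accepted = [t_nodes[0]]
    stack = [(t_nodes[i], t_nodes[i + 1])
             for i in range(len(t_nodes) - 1)][::-1]
    state = np.asarray(state0, dtype=complex)
    while stack:
        t0, t1 = stack.pop()
        tm = 0.5 * (t0 + t1)
        one = linear_step(state, t0, t1, w2_of_t(t0), w2_of_t(t1))
        half = linear_step(state, t0, tm, w2_of_t(t0), w2_of_t(tm))
        two = linear_step(half, tm, t1, w2_of_t(tm), w2_of_t(t1))
        err = abs(abs(two[0]) - abs(one[0])) / max(abs(two[0]), 1e-300)
        if err > tol and (t1 - t0) > min_dt:
            stack.extend([(tm, t1), (t0, tm)])   # bisect and re-examine
        else:
            state = two                          # accept the finer estimate
            accepted.append(t1)
    return np.array(accepted), state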
C. The Mukhanov-Sasaki equation

In order to apply the method outlined in Secs. III A and III B to the computation of the primordial power spectrum of comoving curvature perturbations, we must first recast the Mukhanov-Sasaki Eqs. (7) and (8) in a more appropriate form. As observed by Agocs et al. [23], there is considerable freedom in the form that these equations take, as one may simultaneously transform both the independent and dependent variables. As an independent variable, we choose the number of e-folds N = ln(a/a_0). Provided H > 0, N constitutes a stable temporal coordinate that does not saturate during inflation, and naturally pushes the kinetic dominance singularity to N = -∞. Requiring that there is no friction term, we are forced to choose the following transformations and equations:

d²v_k/dN² + ω_k²(N) v_k = 0,   (24)

where ω_k² is separated into a k-dependent part, which is proportional to the square of the comoving horizon, and a k-independent part,

ω_k²(N) = k²/(aH)² + ω_0²(N),   (25), (26)

with ω_0²(N) a k-independent function of the background evolution, illustrated graphically in Fig. 2. [Fig. 2 caption: Both terms have two distinct regions. The first region is kinetically dominated. The second is the slow-roll region, which is slowly varying. The feature in the slow-roll region is caused by the step in the potential from Eq. (10).]

D. Comparison with existing approaches

Traditional solvers such as ModeCode [24][25][26] and BINGO [14] are able to avoid computing a full numerical evolution by starting the mode evolution for each mode shortly before horizon exit. This proves sufficient for many physical situations, since deep within the horizon (k ≫ aH) many traditional initial conditions reduce to the Bunch-Davies vacuum, allowing careful analyses to skip the large number of oscillations required to reach horizon exit. In many ways, our approach can be thought of as an automation of this skipping procedure via its switching mechanism. Furthermore, the approach outlined here allows one to investigate a wider variety of initial conditions, such as excited states or alpha-vacua [28][29][30][31][32][33][34][35][36][37][38][39][40], which require full evolution of the mode functions from early times.

IV. RESULTS

A C++ implementation of the approach outlined in Sec. III is publicly available on GitHub [41]. We make use of the pre-packaged libraries ODEPACK [42] for solving ordinary differential equations, cephes [43] for special functions and Eigen [44] for vector arithmetic. We compare our approach in speed and accuracy to a full numerical solution of the Mukhanov-Sasaki equation using ODEPACK. This numerical approach is analogous to ModeCode [24][25][26] and BINGO [14], whereby mode evolution is started and stopped a short way before and after horizon exit to minimise run-time.

A. Kinetic initial conditions

To demonstrate the robustness of our approach, we apply it to the evolution resulting from the stepped potential of Eq. (10). The relative speed increase can be seen in Fig. 3. At high k there is an orders-of-magnitude increase in speed, and the method is approximately constant in cost for each k, in stark contrast to the traditional approach. An example mode evolution, along with the transitions the solver chooses, can be seen in Figs. 4 and 5. Our solver is able to navigate the oscillatory regions more effectively than an equivalent traditional Runge-Kutta based approach, as used in ODEPACK, and this effectiveness increases with k. Using the notation of Hergt et al. [13], power spectra are phenomenologically characterized by three parameters: N_*, the e-folds between the moment the pivot scale k_* exits the horizon and the end of inflation; N_†, the e-folds between the start of inflation and the horizon exit of the pivot scale; and N_0, the e-folds between the time that initial conditions are set and the end of inflation. We initialise the perturbation variables in the mode Eqs. (7) and (8) using Bunch-Davies vacuum initial conditions, and for demonstration purposes choose N_† = 7, N_* = 55 and N_0 = 63.1. The resultant power spectra can be seen in Fig. 8, and are divided into three regions: kinetic dominance, the stepped feature, and a running spectral index. The cut-off and oscillatory behaviour caused by kinetic dominance can be seen for low k-modes. The middle region shows the spectrum at moderately high k after exiting kinetic dominance and settling into slow-roll. An oscillatory feature can be seen at k ∼ 10^7 Mpc^-1, caused by the step in the potential of Eq. (10). At high k, running of the spectral index n_s can be seen as the spectrum tilts downwards. Whilst these k-modes affect multipoles that are too large to be probed directly by the CMB, they are relevant for example for the study of primordial black holes [46].
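Before the mode equations can be solved, the background quantities entering ω_k²(N) must themselves be integrated. The sketch below evolves Eqs. (2) and (3), recast in e-folds, for the stepped potential of Eq. (10). It is an illustrative Python recasting (the paper's implementation is C++ [41]), with the step parameters taken from the description of Fig. 8 and an arbitrary mass normalisation and starting field value.

import numpy as np
from scipy.integrate import solve_ivp

def V(phi, m=1.0, A=1e-3, delta=5e-3, phi0=12.5):
    # Stepped quadratic potential, Eq. (10).
    return 0.5 * m**2 * phi**2 * (1.0 + A * np.tanh((phi - phi0) / delta))

def dV(phi, m=1.0, A=1e-3, delta=5e-3, phi0=12.5):
    th = np.tanh((phi - phi0) / delta)
    return (m**2 * phi * (1.0 + A * th)
            + 0.5 * m**2 * phi**2 * A * (1.0 - th**2) / delta)

def background(N, y):
    # Klein-Gordon + Friedmann in e-folds N (natural units):
    # phi'' + (3 - phi'^2/2) phi' + (3 - phi'^2/2) V'/V = 0,
    # using H^2 = V / (3 - phi'^2/2).
    phi, dphi = y
    eps = 3.0 - 0.5 * dphi**2
    return [dphi, -eps * dphi - eps * dV(phi) / V(phi)]

# Start on the slow-roll attractor well before the step; phi = 16 gives
# roughly 64 e-folds of inflation for a quadratic potential.
sol = solve_ivp(background, [0.0, 60.0], [16.0, -1e-2],
                rtol=1e-10, dense_output=True)
phi_of_N = sol.sol   # background phi(N); H, aH and hence w2_k(N) follow

From phi(N) and H(N) one can tabulate ω_k²(N) on the node grid, classify each interval as linear or exponential, and hand the result to the transfer-matrix evolution sketched in Sec. III A.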
B. Frozen initial conditions

To demonstrate the power of our approach, we now apply the method to a novel class of "frozen initial conditions". In some sense it would be attractive to remove the parameter N_0 from our initial conditions, and set N_0 → -∞ with initial conditions asymptotically deep within the kinetically dominated phase "at the big bang". In this case, to avoid modes growing backward in time (so that the perturbative approximation remains valid [47]), one must select the frozen initial conditions

R_k = const,   Ṙ_k = 0.

These initial conditions are illustrated graphically in Fig. 1, and amount to a white noise pre-primordial power spectrum. A direct consequence of these initial conditions is that the modes are purely real, and therefore exhibit an acoustic oscillation-like effect upon horizon re-entry. The resulting power spectrum is visualised in Fig. 6, which shows heavy oscillations down to zero power. These oscillations in the primordial power spectrum are akin to the quantised primordial power spectra that have recently been examined in [45,47]; however, in this case our initial conditions predict that both the smallest wavevector k_0 and the quantisation spacing ∆k are functions of N_†. [Fig. 7 caption: From [45], showing the quality of fit of a general quantised power spectrum with linear spacing ∆k and starting wavevector k_0. The corresponding approximate multipole spacing ∆ℓ and initial multipole ℓ_0 are also indicated. Frozen initial conditions predict a class of allowed (k_0, ∆k), shown by the dashed line, which independently predicts the best fit point in this class.] Compellingly, Fig. 7 shows that this class of (k_0, ∆k) comprises a curve that slides directly through the best-fit point found in [45]. This model therefore provides the possibility of a significantly improved fit in comparison to ΛCDM with the introduction of only a single additional parameter. These primordial power spectra can only be computed numerically by a solver such as ours, which is capable of navigating the many oscillations between horizon entry and exit. Given the completely independent prediction of this best fit point by these initial conditions, "frozen initial conditions" will form the subject of a future paper which examines the theoretical and full observational implications.

V. CONCLUSION

In this paper, we described a novel method for the numerical calculation of the primordial power spectra of comoving curvature perturbations. The results were shown to agree well with existing numerical solutions of the Mukhanov-Sasaki equation (0.1% errors) while only requiring a fraction of the computational time. With this fast and efficient method for calculating power spectra, further investigations into vacuum initial conditions can be explored and their effects on CMB power spectra can be tested and compared with observations. We plan to incorporate the code presented in [41] as a CLASS extension [48]. Our approach is analogous to the Runge-Kutta-Wentzel-Kramers-Brillouin method [49], differing in its choice of stepping function and error control. As for RKWKB, there is much scope for extensions to our method, including but not limited to higher-order stepping procedures and the integration of coupled oscillators. [Fig. 8 caption: Power spectra of scalar and tensor perturbations computed using the method described in Sec. III. The background variables were computed using a stepped potential (A = 10^-3, ∆ = 5 × 10^-3, φ_0 = 12.5) under kinetic dominance initial conditions with N_† = 7 and N_* = 55; the spectra were computed under Bunch-Davies initial conditions with the vacuum set N_0 = 63.1 e-folds before the end of inflation. The plot shows three regions of the two spectra: a kinetic dominance initial region, slow-roll with a stepped potential targeting k = 10^7 Mpc^-1, and a high-k region illustrating the running of the spectral index. As seen from Fig. 2, the feature in ω_k² for the tensor perturbations is much smaller than that of the scalar perturbations, which is reflected in the magnitude difference of the oscillations caused by the step. The fractional error was below 0.1% in both spectra.] On the inflationary physics side there is also scope to extend this work to multi-field inflation, non-minimally coupled inflation, and spatial curvature.
2018-09-28T15:38:24.000Z
2018-09-28T00:00:00.000
{ "year": 2018, "sha1": "8d165339d0d0729b0be866dc6cb24ad0e54069bc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1809.11095", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "530162b5597235f1eb6004d744e4543a4630a6c1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
254290121
pes2o/s2orc
v3-fos-license
Loss of caspase-2 accelerates age-dependent alterations in mitochondrial production of reactive oxygen species

Mitochondria are known to be a major source and target of oxidative stress. Oxidative stress increases during aging and is suggested to underlie in part the aging process. We have previously documented an increase in endogenous caspase-2 (casp2) activity in hepatocytes obtained from old (28 months) vs. young (5 months) mice. More recently, we have shown that casp2 is activated by oxidative stress and is critical for mitochondrial oxidative stress-induced apoptosis. Since casp2 appears integral to mitochondrial oxidative stress-induced apoptosis, in this study we determined whether loss of casp2 altered the production of mitochondrial reactive oxygen species (mROS) as a function of age in intact living hepatocytes. To stimulate mitochondrial metabolic activity, we added a mixture of pyruvate and glutamate to hepatocytes while continuously monitoring endogenous mROS production in the presence or absence of rotenone and/or antimycin A. Our data demonstrate that mROS production and neutralization are compromised in hepatocytes of old mice. Interestingly, casp2-deficient hepatocytes from middle age mice (12 months) had mROS neutralization kinetics similar to those of hepatocytes from old WT mice. Rotenone had no effect on mROS metabolism, whereas antimycin A significantly altered mROS production and metabolism in an age-dependent fashion. Our results indicate that: (1) hepatocytes from young and old mice respond differently to dysfunction of the mitochondrial electron transport chain; (2) age-dependent alterations in mROS metabolism are likely regulated by complex III; and (3) absence of casp2 accelerates age-dependent changes in terms of pyruvate/glutamate-induced mROS metabolism.

Introduction

With aging, there is a general decrease in liver size and thus in hepatocyte number (Thomas et al. 2002). Rat hepatocytes demonstrate reductions in both proliferative capacity and resistance to oxidative stress as a function of aging (Ikeyama et al. 2003). Previous studies in aged liver have reported oxidant stress-induced damage to mitochondrial macromolecules (Rabek et al. 2003; Lopez-Torres et al. 2002; Bakala et al. 2003). We have previously reported that livers of old mice have increased cysteine oxidation (Zhang et al. 2007), which parallels age-related dysfunction of mitochondria; such dysfunction is thought to be a major determinant of the decline in cell function, since these organelles are both the main sources of reactive oxygen species (ROS) and targets for their damaging effects (Lanza and Nair 2012). Age-associated damage to mitochondria is a consequence of increased oxidant production (Calabrese et al. 2011), probably due to changes in the activity of key components of the respiratory chain (Ralph et al. 2011). Recent studies indicate that mitochondria are one of the major sources of cellular ROS and, in turn, are the most adversely affected organelles during aging (Lee and Wei 2012). The liver shows a progressive increase with age in the activity of caspases that mediate apoptosis induced through the mitochondrial pathway (Zhang et al. 2002). In fact, the extent of liver apoptosis is higher in old mice that have reduced levels of MnSOD than in age-matched WT mice (Kokoszka et al. 2001). In addition, hepatocytes isolated from old rats are more sensitive to oxidant-induced cell death than hepatocytes isolated from young rats (Zhang et al. 2002).
Casp2 is a member of a family of cysteine proteases that are key regulatory components in apoptosis (Bouchier-Hayes and Green 2012). Casp2 has been found to be expressed in many cell types (Krumschnabel et al. 2009) and localized in several subcellular compartments (Susin et al. 1999; Mancini et al. 2000; Guo et al. 2002; Troy and Shelanski 2003). We have recently shown that casp2 is critical for mitochondrial oxidative stress-induced apoptosis because its absence protects cells treated with mitochondrial complex I and III inhibitors, such as rotenone and antimycin A, from apoptosis (Lopez-Cruzan et al. 2005; Tiwari et al. 2011). We also have shown that mitochondria contain oxidative stress-activatable casp2 (Lopez-Cruzan et al. 2005). Interestingly, we found increased casp2 activity in aged rat livers that is associated with age-dependent increases in oxidative stress (Zhang et al. 2002). Therefore, casp2 may play a role in oxidative stress-induced apoptosis in aged animals. However, little is known about the function of casp2 in vivo. We have recently found that old male Casp2-/- mice exhibit several traits commonly observed in prematurely aging animals, including a 10% shortened maximum lifespan and severe age-related osteoporosis (Zhang et al. 2007). Casp2 has been placed as a central player in the mitochondrial pathway of apoptosis (Robertson et al. 2002; Braga et al. 2008; Madesh et al. 2009; Bouchier-Hayes and Green 2012; Jiang et al. 2012). Small interfering RNA (siRNA) to casp2 inhibited expression of casp2 and prevented cytochrome c and Smac release from mitochondria, as well as apoptosis, after treatment with cytotoxic agents that can potentially generate ROS (e.g. etoposide, cisplatin and UV irradiation) (Lassus et al. 2002). In this study, we determined whether loss of casp2 impacted mROS production and metabolism during aging, using mouse hepatocytes as a model. Our data indicate that loss of casp2 accelerates age-dependent alterations in mROS production and metabolism, likely through complex III, and may in part explain the accelerated aging phenotype that we observe in casp2 null mice (Zhang et al. 2007).

Reagents

Rotenone, antimycin A, pyruvate, collagenase type IV, and glutamate were purchased from Sigma. MitoSOX (510 nm excitation, 580 nm emission) was obtained from Molecular Probes.

Mice

Casp2-/- mice were a gift from Dr. Carol Troy. Only male mice were used in these studies. In the present studies, we used mice at 5, 12 and 28 months of age. This represents 12% of the maximum lifespan for 5 month old WT mice, 73.9% for 28 month old WT mice, and 28.7% and 31.7% for 12 month old WT and Casp2-/- mice, respectively.

Hepatocyte isolation

Hepatocytes were isolated using the method of Herman et al. (1988) from 5 and 28 month old male WT mice or 12 month old Casp2-/- and WT mice. In brief, animals were anesthetized and their livers perfused with a basic medium solution containing 0.5 mM EGTA (115 mM NaCl, 5 mM KCl, 1 mM KH2PO4, 25 mM Na-HEPES), followed by a collagenase solution containing 1 mM CaCl2. After washing and dispersing the cells, they were counted. An average of 25 million hepatocytes per liver was obtained.

Mitochondrial production of ROS

Animals were handled according to the University's Institutional Animal Care and Use Committee approved regulations, protocols, and standards. Hepatocytes were isolated from 5 and 28 month old WT mice or 12 month old Casp2-/- and WT mice. After hepatocytes were isolated as explained above, cells were washed, dispersed, and counted.
After determining cell viability (85-95%), hepatocytes were seeded at 250,000 cells per well in two-well glass chambers and incubated for a minimum of 9 h to a maximum of 20 h. Generation of ROS over time was monitored with MitoSOX as previously described (Ramanujan and Herman 2007). Cells were washed once, incubated with 2.5 µM MitoSOX for 10 min at 37°C in the dark, diluted with HBSS containing Ca2+ and Mg2+, and then washed twice with buffer. HEPES was added at a final concentration of 30 mM to avoid fluctuations in pH while the measurements were performed. Readings were performed on an inverted Zeiss LSM510 confocal microscope using a 60× oil immersion objective and an argon laser to detect MitoSOX through a 514 nm excitation/560 nm emission channel. Recordings were performed in live cells using 70 scans over a time period of 13-15 min. After the 20th scan, a mixture of pyruvate and glutamate (PG) was added to the chamber to a final concentration of 5 mM. In some experiments, hepatocytes were treated with 20 µM rotenone, 20 µM antimycin A, or 20 µM of a mixture of both and incubated for 30 min. Cells were then washed with PBS and prepared as described above for mROS measurements. Three regions of interest were independently scanned, and each experiment was repeated from six to nine times. Data were normalized for basal intensity by first dividing the intensity obtained at each time point by the initial intensity. Then, the best-fit curve in nonlinear regression was found and used to normalize each time point. Analysis was performed with GraphPad Prism software, version 5. For specific time points, one-way ANOVA was used to analyze statistical differences, followed by Bonferroni post-tests to discern which groups were significant. At least three animals were used for each sampling group. For the untreated experiments, results were pooled together and statistically analyzed. The results for hepatocytes treated with mitochondrial complex inhibitors are presented as individual graphs to highlight the similar response pattern.

Results

In the present study, we have examined the generation of mROS as a function of age in hepatocytes isolated from WT mice, and also the impact of casp2 deficiency on the generation of mROS. Liver hepatocytes were selected as a model to study the effect that the absence of casp2 exerts on mitochondria, since these cells are enriched in this organelle. We initially investigated the endogenous generation of mROS in hepatocytes isolated from young and old WT mice upon feeding the mitochondrial electron transport chain with PG, using the mitochondrial probe MitoSOX (Fig. 1a). MitoSOX is known to selectively label mitochondrial superoxide radicals. All cells considered in these experiments showed distinct mitochondrial labeling. Hepatocytes are abundant in mitochondria due to their high metabolic activity, and basal ROS levels were almost equal in young and aged hepatocytes; there was no statistical difference between the dye uptake of young and aged hepatocytes. MitoSOX is reported to have a fast response time, so that on the time scales of measurement the probe senses the free radicals in real time. Mitochondria from younger WT hepatocytes tended to produce slightly more mROS than older WT hepatocytes at the peak of PG stimulation (Figs. 1a, 2a).
However, as expected, the mROS neutralization kinetics of younger hepatocytes were more efficient than those of hepatocytes from older mice. Since we are interested in analyzing the role that casp2 plays in mitochondrial oxidative stress-induced apoptosis, we next assayed the response of PG-induced mROS metabolism (we defined the metabolism of mROS as the amount of decrease in the MitoSOX signal from t = 200 s to t = 800 s) in Casp2-/- hepatocytes isolated from middle age mice and compared it to the response of hepatocytes obtained from age-matched WT mice (Fig. 1b). We chose middle age animals because, in parallel with the present studies, our laboratory has demonstrated that livers from middle age Casp2-/- mice exhibit a general increase in protein oxidation identical to that seen in old WT mouse livers (Zhang et al. 2007). Hepatocytes from middle age Casp2-/- mice demonstrated lower maximal mROS production and decreased metabolism of mROS compared to hepatocytes from age-matched WT mice. In fact, the middle age Casp2-/- mROS levels never dropped below the baseline for the duration of the measurement, while the middle age WT mROS level quickly went below the baseline. It is important to note that the hepatocytes isolated and used for these experiments (Fig. 1c) were free from contamination with other cell types, as seen by the presence of binucleate cells and the richness of mitochondria within each cell. This technique has been employed in our laboratory for more than 20 years (Herman et al. 1988). The graphs in Fig. 2 demonstrate differences in mROS intensity at baseline (i.e. before PG was added, t = 200 s), peak mROS levels after PG stimulation (t = 250 s), and mROS decay at 800 s after the recording began. Changes in baseline intensity were relatively small for all ages of hepatocytes, and there were no statistically significant differences in baseline mROS levels (t = 200 s; Fig. 2a; note the minute scale). However, peak mROS intensities obtained after PG stimulation did show substantial and statistically significant differences between ages and casp2 status (Fig. 2b). Hepatocytes from middle age WT mice showed the greatest increase in mROS production. In comparison, hepatocytes obtained from old WT mice showed statistically significantly lower PG-stimulated increases in mROS levels. mROS production in hepatocytes obtained from young WT mice did not show a significant difference compared to those from WT middle age mice. Interestingly, mROS levels seen in old hepatocytes following PG stimulation were identical to those seen in PG-stimulated hepatocytes obtained from middle age Casp2-/- mice. In addition to peak mROS production, we also examined how well hepatocytes metabolized PG-induced mROS as a function of age and in the absence of casp2 (Fig. 2c). We observed an age-dependent impact on the ability of hepatocytes to metabolize mROS. Young hepatocytes were the most efficient, while the least effective were hepatocytes from old WT mice. Interestingly, the most compromised of all hepatocytes in terms of mROS metabolism were those obtained from middle-aged Casp2 null mice. Next we sought to dissect the genesis of the changes in mROS that we observed. Therefore, we investigated mROS metabolism in hepatocytes from young and old WT mice following inhibition of complex I and III by rotenone (Fig. 3a, b, c) or antimycin A (Fig. 4a, b, c), respectively, in a time-dependent manner.
We used the same concentrations of rotenone and antimycin A, and the same incubation times, to pre-treat hepatocytes, and then created a burst of mitochondrial respiration by adding a mixture of PG while scanning images of live cells to detect mROS previously stained with MitoSOX Red. Figures 3 and 4 show graphs of three independent experiments that we present separately, to avoid masking the results with the large standard error that they would generate if pooled, and to show the trend in the rotenone and antimycin A treatments. The mROS kinetics of rotenone-treated hepatocytes failed to demonstrate any differences in terms of the maximum burst of mROS or the kinetics of mROS metabolism between hepatocytes from young versus old WT mice (Fig. 3). On the other hand, pre-treatment of hepatocytes with antimycin A resulted in a striking and unexpected difference in the mROS kinetics curves obtained from hepatocytes isolated from young versus old WT mice (Fig. 4). While the response of the hepatocytes from young mice to antimycin A treatment was similar to that seen in rotenone-treated hepatocytes (i.e. an initial upward tick and subsequent downward slope), hepatocytes from old mice showed the opposite outcome. Addition of PG to antimycin A pre-treated hepatocytes from old mice resulted in a sudden decrease in the generation of mROS that slowly and progressively returned towards pre-treatment levels as a function of time. We also pre-treated hepatocytes isolated from WT mice with a mixture of both rotenone and antimycin A (Fig. 5). After addition of PG, no initial response was seen in either young or old WT hepatocytes. However, young WT hepatocytes progressively developed mROS in an upward linear fashion.

[Fig. 1 legend: Age- and casp2-dependent differences in mROS metabolism. Hepatocytes were isolated from 5 (young), 12 (middle age) and 28 (old) month old WT and 12 month old Casp2-/- mice by perfusing their livers with a buffer solution containing EGTA, followed by collagenase treatment. Dispersed hepatocytes were seeded in glass-bottom well chambers and stained with MitoSOX. Cells were then placed in a confocal microscope and the production of mROS was monitored over time after the addition of a mixture of 5 mM pyruvate and glutamate. Recordings were performed for 13-15 min. (a) Mitochondrial ROS production over time in 5 and 28 month old WT mouse hepatocytes. (b) Mitochondrial ROS production over time in 12 month old Casp2-/- and WT mouse hepatocytes. Data was normalized for basal intensity. Error bars = ± SEM. (c) Representative image of isolated hepatocytes labeled with the mitochondrial ROS probe MitoSOX. No images analyzed or visually tested showed any contamination with other cell types.]

[Fig. 3 legend: Hepatocytes from young and old WT mice show the same mROS response following exposure to rotenone. Hepatocytes were isolated from 5 and 28 month old WT mice by perfusing their livers with a buffer solution containing EGTA, followed by collagenase treatment. Dispersed hepatocytes were seeded in glass-bottom well chambers, treated with 20 µM rotenone for 30 min, and stained with MitoSOX. Cells were then placed in a confocal microscope and the production of mROS was monitored over time after the addition of a mixture of pyruvate and glutamate. Recordings were performed over a period of 13-15 min. The results from three independent experiments are shown in (a), (b) and (c). Data was normalized for basal intensity. Error bars = ± SEM.]
Taken together, our results indicate that: (1) there is a difference between the way hepatocytes from young and old mice respond to dysfunction of the mitochondrial electron transport chain and the subsequent production and neutralization of mROS; (2) this difference is most probably due to age-dependent dysfunction of complex III; and (3) hepatocytes from middle age mice lacking casp2 respond in a similar way to hepatocytes obtained from old WT mice in terms of PG-induced mROS metabolism.

Discussion

Our previous studies have shown a progressive increase in the activity of caspases that mediate apoptosis induced through the mitochondrial pathway as the liver ages (Zhang et al. 2002). This suggests a strong contribution of the intrinsic pathway of apoptosis to liver aging. It is generally agreed that oxidative stress, which is a potent inducer of apoptosis, increases with age. The speculation that mitochondrial oxidative stress may underlie age-associated increases in apoptosis in the liver is supported by the observation that liver hepatocyte apoptosis is higher in old mice with half the amount of MnSOD than in age-matched WT mice (Kokoszka et al. 2001). These results validate previous studies that showed increased numbers of TUNEL-positive hepatocytes in livers of aged Fischer 344 rats (Higami et al. 1997). In addition, we have also shown that hepatocytes isolated from old rats are more sensitive to oxidant-induced cell death than hepatocytes isolated from young rats (Zhang et al. 2002). Casp2 has been implicated in apoptotic and nonapoptotic processes such as cell cycle regulation, tumor suppression and aging (see Bouchier-Hayes and Green 2012 for review). Casp2 has also been identified as a central player in the mitochondrial pathway of apoptosis (Guo et al. 2002; Boatright et al. 2003; Enoksson 2004).

[Fig. 4 legend: Hepatocytes from young and old WT mice show a marked difference in mROS response to antimycin A treatment as a function of time when the electron transport chain is fed with pyruvate and glutamate. Hepatocytes were isolated from 5 and 28 month old WT mice by perfusing their livers with a buffer solution containing EGTA, followed by collagenase treatment. Dispersed hepatocytes were seeded in glass-bottom well chambers, treated with 20 µM antimycin A for 30 min, and stained with MitoSOX. Cells were then placed in a confocal microscope and the production of ROS from the mitochondria was monitored over time after the addition of a mixture of pyruvate and glutamate. Recordings were performed for 13-15 min. The results from three independent experiments are shown in (a), (b) and (c). Data was normalized for basal intensity.]

[Fig. 5 legend: Hepatocytes from young and old WT mice show a marked difference in mROS response to a mixture of rotenone plus antimycin A treatment as a function of time when the electron transport chain is fed with pyruvate and glutamate. Hepatocytes were isolated from 5 and 28 month old WT mice by perfusing their livers with a buffer solution containing EGTA, followed by collagenase treatment. Dispersed hepatocytes were seeded in glass-bottom well chambers, treated with a mixture of 20 µM rotenone plus antimycin A for 30 min, and stained with MitoSOX. Cells were then placed in a confocal microscope and the production of ROS from the mitochondria was monitored over time after the addition of a mixture of pyruvate and glutamate. Recordings were performed for 13-15 min. The results from three independent experiments are shown.]
Furthermore, casp2 has been reported to be localized in the mitochondrial compartment (Susin et al. 1999; Deaciuc et al. 2004; Cheung et al. 2006), although the localization of casp2 in mitochondria is still under debate (Mancini et al. 2000; O'Reilly et al. 2002; van Loo et al. 2002). Our laboratory, as well as other groups, has suggested that casp2 mediates apoptosis induced by the lipid peroxidant tert-butyl hydroperoxide (tBOOH) (Amoroso et al. 2002; Zhang et al. 2002). Mitochondria are a major site of ROS generation, and excess ROS triggers apoptosis mediated by caspases. Consequently, the role of casp2 as an initiator of mitochondrial apoptosis is now commonly accepted. To examine the potential role of casp2 in the relationship between aging and mitochondrial oxidative stress, we examined the implications of removal of casp2 on hepatocyte mROS metabolism. These studies were performed in male mice for two reasons: (1) all our previously published data on casp2-deficient mouse biology were obtained using male mice; and (2) the hepatocytes used in these experiments were cultured in vitro with specific media, thus removing any potential impact of gender on the results. Hepatocytes are cells with a high degree of metabolic activity and are greatly enriched in mitochondria, which in turn leads to production of high levels of ROS. We examined ROS production from mitochondria in hepatocytes isolated from young (5 months), middle age (12 months) and old (28 months) WT mice. Our data demonstrated that hepatocytes of old mice do not produce as much initial mROS as hepatocytes from young mice when mitochondrial respiration is accelerated. In addition, the kinetics of mROS neutralization were not as efficient in hepatocytes isolated from old mice as in hepatocytes isolated from young mice. These results indicate that older hepatocytes lack the metabolic efficiency found in younger hepatocytes. Inhibiting certain complexes of the electron transport chain results in an increase in mROS production. Thus, we examined the level of ROS production following metabolic stimulation in the presence and absence of inhibitors of the electron transport chain. The two main sites responsible for ROS production following inhibition are complex I and complex III. We used rotenone and antimycin A to inhibit these electron transport chain complexes, respectively. Our results demonstrate that while there is no difference in ROS generation between young and old hepatocytes following complex I inhibition, hepatocytes from old WT mice respond very differently than hepatocytes from young mice to inhibition of complex III of the mitochondrial electron transport chain and to the subsequent generation and neutralization of mROS. These results lead us to hypothesize that older hepatocytes are less capable of handling the same or greater levels of mROS produced during inhibition or dysfunction mainly of complex III. Following treatment with both rotenone and antimycin A, the initial rapid increase in mROS production observed after metabolic stimulation was completely inhibited. Subsequently, a steady increase in mROS production was observed only in young hepatocytes. These results suggest, first, that aging is associated with differences in mROS metabolism and, second, that aging affects complex III activity preferentially with respect to mitochondrial oxidative stress.
We also show that hepatocytes from middle age Casp2-/- mice display a greater difficulty in neutralizing mROS than hepatocytes from their age-matched WT counterparts, and in fact resemble hepatocytes obtained from old WT mice in their mROS neutralization characteristics. These data implicate casp2 in the preferential metabolism of mROS generated from complex III. Recent evidence suggests that when complex III is inhibited, complex II may be a source of ROS (Quinlan et al. 2012). We examined mROS production in young and old WT hepatocytes containing non-functional complex I and III (Fig. 5). Our results indicate that complex II is not a source of ROS, in agreement with previous findings (Chen et al. 2003) using glutamate, but not succinate, as a substrate. Since mROS metabolism in middle age Casp2-/- mice is comparable to that in old WT mice, our data suggest that casp2 is an important player in the aging process resulting from oxidative stress. At the present time, the mechanisms by which casp2 may sense and respond to complex III-generated ROS, or its targets, are unknown and are the focus of ongoing studies. However, recent findings suggest a number of feasible mechanisms by which casp2 may regulate cellular ROS levels and responses. Higher levels of ROS can damage enzymes responsible for production of NADPH and calmodulin-dependent kinase II (CaMK II) (Erickson et al. 2008). In addition, scavenging ROS or repairing oxidized macromolecules consumes NADPH. These changes can potentially impair the function of CaMK II. Because CaMK II phosphorylates the cysteine apoptotic protease procaspase-2 at Ser135 and inhibits its activation (Nutt et al. 2005), it is possible that partial loss of CaMK II function reduces the threshold of oxidative stress required for the activation of casp2, resulting in apoptosis. NADPH, a co-factor of several anti-oxidant enzymes such as glutathione reductase and thioredoxin reductase, enhances CaMK II-mediated inhibition of casp2 (Lee et al. 2001). Because NADPH is a critical molecule in redox regulation, casp2 activity may ultimately be regulated by cellular redox. Indeed, casp2 is necessary for mitochondrial oxidative stress-induced apoptosis, because its absence protects cells treated with mitochondrial complex I inhibitors such as rotenone from apoptosis (Lopez-Cruzan et al. 2005; Tiwari et al. 2011). In addition, we find casp2 activity to be increased prior to other caspases in partially MnSOD-deficient mice that have enhanced mitochondrial oxidant stress (data not shown). Because it is the most conserved caspase (sharing >90% homology with human casp2), we aligned the protein sequence of casp2 from several species and found that casp2 contains 17 cysteine residues conserved across species. Thus, casp2 has the highest cysteine content of all the known caspases. Cysteine residues are known to regulate protein function via oxidation and reduction of disulfide bonds as a function of the redox environment. One of these 17 cysteine residues is involved in the dimerization of casp2 monomers via formation of a disulfide bond that stabilizes the molecule (Schweizer et al. 2003), but is apparently not required for casp2 activation. Four of the remaining 16 cysteines are aligned in the dimer in a conformation accessible to sense the redox environment, but do not form disulfide bridges under physiological conditions.
We hypothesize that one or more of these 4 cysteines might serve as sensors of mitochondrial oxidative stress, potentially via oxidation. In earlier studies published by us, employing a unique combination of spatially resolved single-cell chemical kinetics, scaling analysis, and biochemical assays, we observed that young liver cells manifest nonlinear dynamics for efficiently regulating the ROS generation/removal machinery, and that these regulatory correlations in free radical dynamics are diminished in aged cells, suggesting that the aging process modulates chemical dynamics (complexity) in liver cell energy metabolism (Ramanujan and Herman 2007). We speculated that these differences arise from nonlinear network interactions among glycolysis, gluconeogenesis, and the mitochondrial electron transport chain. Pyruvate dehydrogenase is the key enzyme that converts the glycolytic product pyruvate to acetyl-CoA as an input to the mitochondrial pathway, and it is known that pyruvate dehydrogenase is inhibited by excess NADH/acetyl-CoA (product inhibition). If the mitochondrial pathway is defective in aged cells, unmetabolized pyruvate will be converted back to glucose by gluconeogenesis, which then will initiate the glycolytic pathway. It has been reported earlier that aging is associated with ROS-induced chronic dysfunction of the mitochondrial respiratory chain, either at site I or III, and that mitochondria isolated from aged animals show reduced sensitivity to complex I inhibitors (Fosslien 2001). Similar trends were observed in these studies, suggesting that aging in the liver is accompanied by a dysfunctional mitochondrial network that can be due to structural or functional respiratory defects, or both. These findings are consistent with the data reported here, namely that defects in complex III accompany the aging process and loss of casp2. Lastly, a recent publication (Shalini et al. 2012) that repeats and recapitulates our previous findings in bone, livers, mouse embryonic fibroblasts and hepatocytes from Casp2-/- mice (Lopez-Cruzan et al. 2005; Zhang et al. 2007; Tiwari et al. 2011) showed that old Casp2-/- mice have increased cellular levels of oxidized proteins, lipid peroxides and DNA damage. Fibroblasts and neurons from Casp2 null mice generate higher levels of endogenous ROS, which is associated with decreased levels of antioxidant enzymes (Tiwari et al. 2011). Casp2 was found to upregulate antioxidant protein expression by activating transcription factors of the FoxO family (Shalini et al. 2012). In particular, FoxO3 is responsible for expression of the catalase, MnSOD, and sestrin genes; the latter activate the peroxiredoxin family of H2O2 detoxifiers. In the absence of casp2, these antioxidants are decreased, and in the case of MnSOD, this might explain the increased mROS production seen in older WT and middle age casp2 null hepatocytes. Taken together, these observations demonstrate strong mechanistic links between casp2, oxidative stress and aging.
2022-12-07T14:18:37.736Z
2013-03-16T00:00:00.000
{ "year": 2013, "sha1": "c8219af8e771799ecc04c1c0602f198150319f96", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10522-013-9415-x.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "c8219af8e771799ecc04c1c0602f198150319f96", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
260807505
pes2o/s2orc
v3-fos-license
DeSUMOylation of a Verticillium dahliae enolase facilitates virulence by derepressing the expression of the effector VdSCP8

The soil-borne fungus Verticillium dahliae, the most notorious plant pathogen of the Verticillium genus, causes vascular wilts in a wide variety of economically important crops. The molecular mechanism of V. dahliae pathogenesis remains largely elusive. Here, we identify a small ubiquitin-like modifier (SUMO)-specific protease (VdUlpB) from V. dahliae, and find that VdUlpB facilitates V. dahliae virulence by deconjugating SUMO from V. dahliae enolase (VdEno). We identify five lysine residues (K96, K254, K259, K313 and K434) that mediate VdEno SUMOylation, and SUMOylated VdEno preferentially localizes in the nucleus, where it functions as a transcription repressor to inhibit the expression of an effector, VdSCP8. Importantly, VdUlpB-mediated deSUMOylation of VdEno facilitates its cytoplasmic distribution, which allows it to function as a glycolytic enzyme. Our study reveals a sophisticated pathogenic mechanism of VdUlpB-mediated enolase deSUMOylation, which fortifies the glycolytic pathway for growth and contributes to V. dahliae virulence through derepressing the expression of an effector.

To obtain the complementary plasmid pNEO-VdSCP8-HA, the VdSCP8 (VDAG_08085) gene, including the native promoter and terminator together with the HA fragment, was ligated into a Hind III/EcoR I-linearized pNEO binary vector. The primers used above are listed in Supplemental data 1. The pNEO-VdSCP8-HA construct was transformed into the knockout mutant VdΔscp8 to produce the complemented strain VdΔscp8/SCP8. The transformants were selected on PDA medium with 40 μg/mL G418. All the strains used in this study are listed in Supplemental Table 3.

Protein purification

To generate the plasmid pET-VdUlpBCD, the catalytic domain of VdUlpB (VdUlpBCD, 387-780 aa) was cloned and ligated into a BamH I/Xho I-linearized pET28a vector. For the plasmid pET-VdUlpBCDm, the cysteine (C711) in the catalytic domain of VdUlpB was mutated into serine using the Fast Site-Directed Mutagenesis kit (TIANGEN, KM101). All the primers used above are listed in Supplemental data 1. For 2-D protein separation, 800 µg of total protein was loaded onto IPG strips (18 cm, pH 4-7; Cytiva, Cat#17123301). Isoelectric focusing (IEF) was carried out, and the strips were then put on top of an SDS-PAGE gel. After electrophoresis, the protein spots were detected and quantified using Image Master 2D Platinum software (Cytiva, V6.0), and the volume ratios of corresponding spots between V592 and Vd T-DNA were calculated from three biological replicates. Protein spots with a ratio higher than 2 and a p value <0.05 (unpaired Student's t-test) were considered significant and were manually excised for mass spectrometry analysis.

Mass spectrometry analysis

To identify whether VdEno was SUMOylated, Matrix-Assisted Laser Desorption/Ionization (MALDI) mass spectrometry was used. The ions score is -10*Log(P), where P is the probability that the observed match is a random event. Individual ions scores > 26 indicate identity or extensive homology (p < 0.05). Protein scores are derived from ions scores as a non-probabilistic basis for ranking protein hits.

EMSA

The purified conserved DNA-binding domain of VdEno (VdEnoBD) was incubated with specific probes for 1 h at room temperature. The competition experiment was conducted by adding a 50-fold molar excess of cold probes or nonspecific probes into the reaction before adding the specific probes.
The products were analyzed on a 4% native PAGE gel and transferred to a Hybond-N+ membrane (GE, RPN303B). The probes were amplified with the primer pairs listed in Supplemental data 1, and they were labeled with 32P as described above.

Chromatin immunoprecipitation (ChIP)

Conidia and mycelia of V. dahliae were cultured in liquid Czapek-Dox medium for 3 days and treated with 1% formaldehyde for 10 min at room temperature, followed
2023-08-12T06:17:38.265Z
2023-08-10T00:00:00.000
{ "year": 2023, "sha1": "d4f37af0bad2009a5575236ed36239dfd90dda34", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "42c606d35dc90325371782ce994b0e8c8b8865c0", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
19778043
pes2o/s2orc
v3-fos-license
The black-and-white coloring problem on permutation graphs

Given a graph G and integers b and w, the black-and-white coloring problem asks if there exist disjoint sets of vertices B and W with |B| = b and |W| = w such that no vertex in B is adjacent to any vertex in W. In this paper we show that the problem is polynomial when restricted to permutation graphs.

Introduction

Definition 1. Let G = (V, E) be a graph and let b and w be two integers. A black-and-white coloring of G colors b vertices black and w vertices white such that no black vertex is adjacent to any white vertex.

In other words, the black-and-white coloring problem asks for a complete bipartite subgraph M in the complement Ḡ of G with b and w vertices in the two color classes of M. The black-and-white coloring problem is NP-complete for graphs in general [4]. That paper also shows that the problem can be solved for trees in O(n³) time. In a recent paper [2] the worst-case timebound for an algorithm on trees was improved to O(n² log³ n). The paper [2] mentions, among other things, a manuscript by Kobler et al., which shows that the problem can be solved in polynomial time for graphs of bounded treewidth. In this paper we investigate the complexity of the problem for permutation graphs.

An intersection model for permutation graphs is obtained as follows. Consider two horizontal lines L1 and L2, one above the other. Label n distinct points on L1 and on L2 with labels {1, . . . , n}. For each k ∈ {1, . . . , n} connect the point with label k on L1 with the point with label k on L2 by a straight line segment. This is called a permutation diagram. The corresponding permutation graph with vertices {1, . . . , n} is the intersection graph of the line segments. Permutation graphs can be recognized in linear time [7]. A permutation diagram can be obtained in linear time.

Lemma 1. Consider a black-and-white coloring of a permutation graph given with a permutation diagram. There exists a set of pairwise non-crossing scanlines such that the vertices that are not colored are precisely those whose line segments cross one of the scanlines.

Proof. Remove the line segments from the diagram of the vertices that are not colored. Each of the remaining components is colored black or white. Notice that the components form a consecutive sequence in the diagram. Place a scanline between any two consecutive components. The vertices that are not colored are precisely those that cross one of the scanlines.

Theorem 1. There exists a polynomial-time algorithm which checks if a permutation graph can be colored with b black and w white vertices.

Proof. Consider a permutation diagram for a permutation graph G = (V, E). A piece consists of a pair of non-intersecting scanlines. Consider the subgraph of G induced by the line segments with both endpoints between the two scanlines. Using dynamic programming, the algorithm checks if there is a black-and-white coloring of the piece with b′ black and w′ white vertices, for all values b′ and w′. We describe the procedure below.

The smallest pieces consist of two scanlines such that there is exactly one line segment between them. The subgraph induced by this piece has one vertex. There are two possible optimal colorings; either the vertex is black or it is white. Consider an arbitrary piece, say that it is bordered by scanlines s1 and s2. Two possible colorings color the piece completely black or completely white. Otherwise, cut the piece in two by a scanline s which is between s1 and s2. Let S be the set of line segments that cross s. The vertices of S are uncolored.
If there are colorings with b1 black and w1 white vertices in the left piece and with b2 black and w2 white vertices in the right piece, then the piece can be colored with b1 + b2 black vertices and w1 + w2 white vertices. There are O(n⁴) different pieces, namely, there are O(n²) scanlines, and each piece is bordered by two of them. To process a piece, we try the O(n²) scanlines that lie between the two bordering scanlines. By [3,5].
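The following is a minimal Python sketch of the dynamic programme from the proof of Theorem 1, written for this summary rather than taken from the paper; the representation of a scanline as a pair of gap indices and all function names are our own assumptions. A piece is bordered by two non-crossing scanlines; its feasible (black, white) count pairs are either the two monochromatic colorings or the componentwise sums over a splitting scanline, whose crossing segments remain uncolored. The memoisation is over the O(n⁴) pieces, with O(n²) candidate splitting scanlines per piece, mirroring the counting at the end of the proof.

from functools import lru_cache

def black_white_colorable(pi, b, w):
    # pi is a permutation: segment k joins position k on the top line to
    # position pi[k-1] on the bottom line.  Returns True if the permutation
    # graph admits a coloring with exactly b black and w white vertices.
    n = len(pi)
    top = {k: k for k in range(1, n + 1)}
    bot = {k: pi[k - 1] for k in range(1, n + 1)}

    def inside(k, s1, s2):
        # Scanline (t, u) passes through the gap after the t-th top point and
        # the u-th bottom point; segment k lies strictly between s1 <= s2.
        return s1[0] < top[k] <= s2[0] and s1[1] < bot[k] <= s2[1]

    @lru_cache(maxsize=None)
    def feasible(s1, s2):
        segs = [k for k in range(1, n + 1) if inside(k, s1, s2)]
        m = len(segs)
        if m == 0:
            return frozenset({(0, 0)})
        pairs = {(m, 0), (0, m)}            # the piece all black / all white
        for t in range(s1[0], s2[0] + 1):   # try every splitting scanline s
            for u in range(s1[1], s2[1] + 1):
                s = (t, u)
                if s == s1 or s == s2:
                    continue                # s must differ from both borders
                pairs |= {(b1 + b2, w1 + w2)
                          for (b1, w1) in feasible(s1, s)
                          for (b2, w2) in feasible(s, s2)}
        return frozenset(pairs)

    return (b, w) in feasible((0, 0), (n, n))

# A crossing pair (one edge) cannot be colored (1, 1); two parallel segments can.
assert not black_white_colorable([2, 1], 1, 1)
assert black_white_colorable([1, 2], 1, 1)

Segments in the left piece end at or before the splitting scanline on both lines, while segments in the right piece start after it on both lines, so no segment of one piece crosses a segment of the other; the two sub-colorings therefore always combine into a valid coloring, which is the additivity used in the proof, and completeness follows from Lemma 1.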
2012-01-30T18:40:54.000Z
2012-01-30T00:00:00.000
{ "year": 2012, "sha1": "1fa85126fdd48d462815ce8712381a9107ad4386", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "1fa85126fdd48d462815ce8712381a9107ad4386", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
264097213
pes2o/s2orc
v3-fos-license
Primary Hepatic Neuroendocrine Carcinoma with Metastasis to the Mesentery: A Case Report

Abstract

Primary hepatic neuroendocrine carcinomas (PHNECs) are extremely rare, with only about 90 cases having been reported in the English-language literature. Among all neuroendocrine neoplasms, primary hepatic neuroendocrine tumors (NETs) and neuroendocrine carcinomas (NECs) are extremely rare, accounting for 0.3% of NETs and 0.28-0.46% of malignant liver tumors. Additionally, primary hepatic NECs occur infrequently. The clinical diagnosis of primary hepatic NEC remains challenging because of its rarity and the lack of information about its characteristic appearance on images. Consequently, pathological examination through the performance of a preoperative liver tumor biopsy is essential for diagnosis. Due to the lack of substantial high-quality data, there is no standard therapy for primary hepatic NEC. We present the first case of PHNEC metastasized to the mesentery reported in the English-language literature.

Introduction

Neuroendocrine neoplasms are tumors made exclusively of cells with a neuroendocrine phenotype. High-grade neuroendocrine neoplasms (NENs) of the gastrointestinal (GI) tract and pancreas are a heterogeneous group of aggressive malignancies [1,2]. The 2010 and 2019 World Health Organization (WHO) classifications of tumors of the digestive system consider all neuroendocrine tumors (NETs) (e.g., gastroenteropancreatic) as malignant and classify them by cellular proliferation and degree of differentiation [2,3].

The current WHO classification defines neuroendocrine carcinomas (NECs) as high-grade (grade 3), poorly differentiated neoplasms, with >20 mitoses/10 high-power fields and a Ki-67 proliferation index >20%, which may be of the large- or small-cell type [3,4]. In general, all high-grade, poorly differentiated gastroenteropancreatic NECs have an aggressive natural history that is frequently characterized by early, widespread metastases [5].

The incidence rate of liver NEC was estimated to be 0.01 per 100,000 inhabitants. In liver NEC, there is a male predominance (0.02 per 100,000). Primary hepatic NEC is most frequent in American Indians/Alaska Natives. The median age at diagnosis was 65 years [6][7][8][9].

The clinical diagnosis of primary hepatic NEC remains challenging because of its rarity and the lack of information about its characteristic appearance on images. When an NEC of uncertain origin is diagnosed by liver tumor biopsy, it is extremely important to perform a preoperative workup, including gastroscopy, colonoscopy, and Gallium-68 DOTATATE positron emission tomography-computed tomography (PET-CT) examinations, because NENs of the liver are usually metastatic from other organs, such as the GI tract and pancreas [9,10].

We present a 22-year-old female patient with a primary hepatic neuroendocrine carcinoma (PHNEC) with mesenteric metastases. As this is an exceptionally unusual presentation, it represents a diagnostic challenge.
Clinical Case

A 22-year-old Mexican woman presented to our medical oncology department. Her family history and personal history were noncontributory. The patient presented with abdominal pain, abdominal discomfort, constipation, asthenia, adynamia, and a weight loss of 9 kg over 6 months. Abdominal ultrasound reported an expansive-looking mass in the right hepatic lobe of 93 mm × 73 mm × 74 mm, without cholelithiasis. Unenhanced and contrast-enhanced CT of the abdomen and pelvis reported a mesenteric tumor of 126 × 96 mm, hepatomegaly, and a liver tumor of approximately 20 cm involving both lobes (Fig. 1).

A percutaneous biopsy was performed, which showed poorly differentiated neoplastic cells. Immunohistochemistry reported: positive AE1/AE3, positive chromogranin, positive synaptophysin (Syn), positive CD99, negative CD56, positive β-catenin, negative progesterone. A high-grade NEC of the small-cell type of the liver was reported (Fig. 2).

Gastroscopy and colonoscopy were performed; however, no evidence of tumor was found. One year ago, palliative chemotherapy based on etoposide and cisplatin was started, given every 21 days for 6 cycles. A control CT scan was performed, which reported stable disease. Subsequently, she presented with headache and bitemporal visual field loss, and contrast-enhanced CT of the skull reported a pituitary macroadenoma, confirmed by magnetic resonance imaging (MRI). A hormonal profile was requested to evaluate the hypothalamic-pituitary axis, which showed elevated prolactin. Ten months ago, she started cabergoline 0.5 mg orally every week and lanreotide 20 mg every month.

She went to the emergency room with asthenia, adynamia, vomiting, diarrhea, and acute abdominal pain. Eight months ago, an exploratory laparotomy was performed, with complete removal of the mesenteric tumor (Fig. 3). The histopathological report was high-grade NEC of the small-cell type, metastatic, 12.5 cm × 9.4 cm × 6.3 cm, with necrosis in 40%, a mitotic count of 18 mitoses/10 high-power fields, and extensive lymphovascular invasion; no residual tissue was identified. Subsequently, she continued with cabergoline 0.5 mg orally every week and lanreotide 60 mg every month, and surveillance. A control CT was performed, which reported stable disease of the liver tumor and complete response of the mesenteric tumor.

During her follow-up, Gallium-68 DOTATATE PET-CT scans were performed every 6 months. The last report, 1 month ago, showed primary liver tumor activity, with a tumor in the right lobe measuring 12 cm × 18.9 cm × 28.6 cm, SUVmax of 20.7, and another tumor in the left lobe of 6.2 cm × 0.5 cm, SUVmax of 14.2, but no evidence of tumor in the mesentery. Stable disease was concluded based on the Response Evaluation Criteria in Solid Tumors, version 1.1 (RECIST 1.1) (Fig. 4).

Currently, the patient is asymptomatic with stable liver tumor disease, 14 months after diagnosis, and without recurrence of the mesenteric tumor, 8 months after its complete removal. She is scheduled to start external beam radiation therapy (EBRT) as a bridging treatment for liver transplant next month.
Discussion

Primary hepatic neuroendocrine tumors (PHNETs) are a rarity and represent about 0.3% of all NETs, with only about 180 cases having been reported in the English-language literature. This number includes both primary hepatic NETs and NECs. Primary hepatic neuroendocrine carcinomas (PHNECs) are extremely rare, with only about 90 cases having been reported in the English-language literature (Table 1). During 2000-2012, the incidence rates of high-grade NENs increased over time for all sites except the lung. Liver NEC is most frequent in American Indians/Alaska Natives compared with other races (white, black, Asian/Pacific Islander). The median age at diagnosis is 65 years. The incidence rate of liver NEC is 0.01 per 100,000 inhabitants. In most series, incidence rates are similar in males and females. However, in liver NEC, there is a male predominance of 0.02 versus 0.01 [1,6,7]. Our case involves a 22-year-old Mexican woman who initially presented with a liver tumor under study.

In a separate analysis of over 162,000 cases of NEC reported to the Surveillance, Epidemiology, and End Results (SEER) program between 1973 and 2012, the upper GI tract and the pancreas accounted for 23 and 20 percent, respectively, of the NECs. About 3% were coded as liver primary, although it is likely that the majority of these were metastases. The form of presentation is localized or intrahepatic (29.9%), regional or nodal (33.1%), and metastatic (37.1%) [6,10,11]. According to the characteristics of the imaging studies performed on our patient, a tumor was reported in both hepatic lobes that was metastatic to the mesentery.

NENs are a heterogeneous group of tumors originating from enterochromaffin cells throughout the body, which most commonly develop in the GI tract, lungs, pancreas, gallbladder, thymus, and ovaries. The liver is the most common metastatic site of NENs but a rare site of tumor origin [5,8,11].

Presently, the origin of PHNECs is controversial. There are three hypotheses about the origin of PHNEC: (1) it is transformed from neuroendocrine cells of the intrahepatic bile duct epithelium; (2) it originates from multifunctional stem cells in the liver; (3) it originates from ectopic adrenal and pancreatic tissues in the liver [50].

Clinical diagnosis: patients are usually asymptomatic (13%) at the early stages, and the tumor is often discovered incidentally during physical examination. At the middle and late stages, patients may present symptoms such as abdominal discomfort or abdominal pain (44%), bloating, loss of appetite, weight loss, and obstructive jaundice (5%) as the tumor grows, and very few patients show signs of carcinoid syndrome, such as flushing, diarrhea, asthma, fever, and palpitations. Most patients have a single lesion (76.6%), commonly located in the right liver (48.4%). In our case, the patient presented with a functional digestive disorder accompanied by abdominal discomfort, which led to an imaging study in which the liver tumor was an incidental finding [11,51,52].

Imaging characteristics: PHNECs have a rich blood supply from the hepatic artery and therefore exhibit hyperenhancement in the arterial phase and a washout appearance in the portal venous phase of dynamic CT and MRI, resembling hepatocellular carcinomas (HCCs). Both HCCs and PHNECs show a peripheral rim of smooth hyperenhancement in the portal venous or delayed phase, pathologically correlating with a tumor capsule [8,9].
On CT, primary hepatic NEC appears as a low-density mass with an enhanced margin, and the center of the mass is not enhanced due to necrosis. MRI shows a low-intensity mass on fat-saturation T1-weighted images and a high-intensity area on fat-saturation T2-weighted images. Based on the abovementioned clinical-imaging findings, it is difficult to distinguish hepatic NECs from other hepatic carcinomas, such as HCC and cholangiocarcinoma. Consequently, pathological examination through the performance of a preoperative liver tumor biopsy is essential for diagnosis. In our case, the patient underwent a biopsy of the liver lesion for histological diagnosis, since the characteristics in the imaging studies were not sufficient to make the diagnosis conclusively.

PHNECs are a diagnosis of exclusion, since they are far less common than hepatic metastases of NECs; hence an extrahepatic primary NEC must always be excluded first through studies including endoscopy and colonoscopy (to rule out GI origin), CT, MRI, and somatostatin PET-CT (to determine the extent of the disease and its origin), as was done in our patient [51,52]. Since the radiological and laboratory findings of PHNECs are not specific, definitive diagnosis of PHNECs requires pathologic evaluation of a surgically resected specimen.

Pathology: on hematoxylin and eosin staining, NECs demonstrate a solid "sheetlike" proliferation of tumor cells with irregular nuclei, high mitotic activity, and fewer cytoplasmic secretory granules. Small-cell NEC has tightly packed fusiform nuclei with finely granular chromatin, whereas large-cell NEC has more rounded, markedly atypical nuclei and, sometimes, prominent nucleoli. Immunocytochemical (IHC) staining patterns for neuroendocrine markers are more limited: diffuse expression of Syn and faint or focal staining for chromogranin A (CgA). Up to 40% of NECs contain elements of nonneuroendocrine histology; by definition, the neuroendocrine component has to exceed 30% for the tumor to be called an NEC; otherwise, it is classified as a mixed adeno-NEC. Although IHC markers effectively identify primary hepatic NENs, there is no specific IHC stain for hepatic NEC. IHC markers for NEC remain similar to those for common NENs, including CgA (89.1%) and Syn (48.9%), as previously reported. Commonly measured tumor markers in NENs include serum CgA and 5-hydroxyindoleacetic acid (5-HIAA), the final secreted product of serotonin, measured in a 24-h urine sample [11,53]. Biopsy not only establishes the diagnosis but also the tumor grading, based on the mitotic rate and the Ki-67 proliferation index, which is essential for treatment and prognosis [11].

Presentation: there is no staging system for NENs of the liver, so they are classified according to the location of the disease: localized (located within the primary organ, in this case the liver), locally advanced or regional (nodal disease), and advanced or metastatic (disease in distant nodes and organs) [9]. The size of the tumor ranges from 1.5 to 27 cm, presenting as single in 76.3% and multiple in 23.7% of cases, located mainly in the right lobe in 48.4% of cases, and bilobular in 18.5%. Knox et al.
[54] reported extrahepatic involvement in 18.6% of cases, involving bone (60%), lymph nodes (60%), and lung (40%). In our patient, the tumor presented as two liver masses of approximately 20 cm occupying both lobes, with metastasis to the mesentery.

Treatment: there are multiple treatment options for PHNEC; however, there is no standard therapy. In early-stage PHNEC, surgical resection of the liver tumor tissue or partial hepatectomy is the most common treatment, with a 5-year survival rate after surgery of 75-80% and a recurrence rate of 18% [8,16].

Other therapeutic options include liver transplantation, trans-arterial chemoembolization (TACE), and radiofrequency ablation (RFA). RFA is another treatment method for PHNETs. The introduction of RFA has allowed physicians to surgically address a larger population of patients with curative intent. RFA may be performed alone or in combination with resection. To date, most reports on RFA management are single-institution retrospective series. Indications for RFA are the presence of three or fewer tumors and a tumor diameter of ≤5 cm. Tumors located near the major branches of the portal and hepatic veins have a higher potential for incomplete ablation [10,55,56]. Approximately 20-37% of patients are diagnosed at the metastatic or advanced stage, where platinum-based chemotherapy (etoposide plus cisplatin [EP] or etoposide plus carboplatin [EC]) is the first-line treatment according to the European Society for Medical Oncology (ESMO) guideline [57].

Li et al. [58], evaluating the efficacy of platinum-based chemotherapy versus TACE, observed a median overall survival of 14.8 months versus 12.2 months, respectively (p = 0.040). Furthermore, patients with Ki-67 ≥ 55% who received EP/EC had a significantly longer progression-free survival than those who received TACE (5.0 vs. 2.8 months, p = 0.001). This result is consistent with the observation of Sorbye et al. [59] that NENs with Ki-67 ≥ 55% generally display a better response to platinum-based chemotherapy. Therefore, in this patient, it was decided to use EP for 6 cycles, obtaining stable disease. Debulking operations are recommended for patients with distant metastatic NETs because debulking improves symptomatic control of hormone hypersecretion and survival [8,52,54].

There are two possible explanations for the prognosis-prolonging effect of primary tumor resection. First, reduction of the immunosuppressive tumor burden may extend the prognosis, potentially minimizing the chance that the tumor will lead to disease progression and further metastases. Second, it is suggested that chemotherapy compliance may be improved by primary tumor resection in symptomatic patients [48,53].

Knox et al. [54] and Iwao et al. [60] conducted studies on the survival of patients with PHNEC and showed that the 5-year survival rate of patients undergoing surgical therapy was >50% [61]. Moreover, case series have reported that tumorectomy prolongs recurrence-free survival; therefore, this patient underwent resection of the metastatic disease in the mesentery, achieving complete metastatic resection as demonstrated by imaging.
In patients with unresectable disease, other palliative options are available, including systemic chemotherapy using fluorouracil, hepatic artery embolization, octreotide therapy, and liver transplantation. Notably, however, peptide receptor radionuclide therapy (PRRT) is mainly used for well-differentiated, somatostatin receptor-positive NETs. PRRT is less effective for poorly differentiated NECs, and for multiple NETs with a large tumor burden PRRT is ineffective and may cause liver toxicity [56].

Two somatostatin analogs (octreotide, which is short-acting, and lanreotide, which is long-acting) are currently used for this purpose. Although this treatment effectively controls the symptoms of carcinoid syndrome, it is largely ineffective for tumor regression, as shown radiologically [45,57]. Because the Gallium-68 DOTATATE PET-CT reported avidity for somatostatin receptors at the tumor level, and because during follow-up she presented bitemporal hemianopsia and headache with red flags, for which a contrast-enhanced head CT and a hormonal profile were requested, concluding in a prolactinoma corroborated by MRI of the skull, it was decided to start lanreotide 20-60 mg i.m. each month.

Trans-arterial chemoembolization (TACE) is one of the most commonly used methods in the management of patients with intrahepatic metastases of extrahepatic NETs. TACE is normally performed for advanced primary hepatic NECs that are poor candidates for resection. TACE treatment of patients with PHNET has only been described in a few case reports. In one study, TACE was performed to treat 20 patients with hepatic metastases, and the radiological response and symptom improvement rates were 90% [9,10,56,60]. Our patient was evaluated by the interventional imaging service, which determined that she was not a candidate for TACE due to the large liver tumor volume and the high probability of liver toxicity. The results obtained so far show that the median survival time of patients with PHNEC after TACE is 39.6 months and that the 5-year survival rate is 35.5%, which significantly prolongs the survival time of the patients [47].

Complete removal of liver metastases with curative intention may be accomplished by liver resection or, if hepatic disease is disseminated, by total hepatectomy and transplantation. The latter provides immediate and complete relief of hormonal symptoms and pain and has also been performed in palliative circumstances. Still, treatment of neuroendocrine liver metastases by transplantation is performed only for exceptional patients. Only 4 of 300 liver transplantations in Munich and 1 of 415 in Berlin were performed in patients with liver metastases from NETs [61,62].

It has also been shown that chemoembolization and EBRT can serve as bridging treatment options for liver transplantation in those patients who are candidates for it [62]. Currently, she is scheduled to start EBRT as a bridging treatment for the liver transplant next month.
Prognosis: currently, the overall prognosis of PHNEC is better than that of other types of liver cancer. Median survival is 16.5 months (range, 0.7-41.7 months) based on a review of 12 PHNEC patients. The 5-year survival following surgery for all three differentiation subtypes of PHNEC is about 75%. After surgical resection, PHNEC can recur or metastasize within one to 10 years. By contrast, the prognosis of poorly differentiated primary hepatic NEC is extremely poor: for metastatic poorly differentiated NEC, the 5-year survival rate is only 5.8% and the 1-year survival rate is 23.5% [10, 26]. Our patient has had stable disease for 14 months and is asymptomatic.

Conclusion

PHNECs are extremely rare. There are multiple treatment options for PHNEC, and an adequate, individualized approach is necessary. We present the first case of PHNEC metastasized to the mesentery in the English-language literature. More evidence is needed to establish specific recommendations for management that could improve the prognosis of this group of patients. The CARE Checklist has been completed by the authors for this case report, attached as online supplementary material (for all online suppl. material, see https://doi.org/10.1159/000533199).

Fig. 2. a The microscopic study showed small, blue, round tumor cells disposed in cords, festoons, and glandular formations. b, c The tumor cells are small, with uniform, round/oval nuclei, salt-and-pepper chromatin, and inconspicuous nucleoli. Immunohistochemistry revealed positive staining for CKAE1/AE3 in the cellular membrane (d), chromogranin with a granular cytoplasmic pattern (e), and CD99 in the cellular membrane (f).

Fig. 3. Tumor resection. a A tumor is observed at the level of the mesentery. b Complete removal of the 130 × 100 mm mesentery tumor. c Macroscopically without evidence of residual disease at the level of the mesentery.

Table 1. Summary of case reports of PHNEC
2023-10-14T15:40:34.428Z
2023-08-22T00:00:00.000
{ "year": 2023, "sha1": "24b3e8983b2effbc3de2a42263aaf6f17d255601", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1159/000533199", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0d994f0c78e23ad00781024a4ac87cc7461ff04e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
236508867
pes2o/s2orc
v3-fos-license
Farmers’ adaptation in dealing with limited water (A case study on Wonogiri Regency) The vulnerability of farmers to drought is determined by the interaction of the potential impacts of climate change. The impact of climate change is felt by residents of Paranggupito District, among them those of Ketos Village, in the agricultural sector, especially on rainfed land, which is very vulnerable to long drought conditions. Farmers' losses to drought are determined by the interaction of climate change impacts and farmers' adaptation. This research aims to find out the adaptation strategies the farmers take in dealing with water limitation. The subjects of the research were farmers in Ketos Village, Paranggupito Sub-district, Wonogiri Regency. The research design was a descriptive qualitative one with purposive and snowball sampling techniques. Data collection was carried out using observation, in-depth interviews, focus group discussion, and documentation. Data were analyzed using the triangulation technique. The results show that, to adapt to their condition, the farmers adopt short- and medium-term adaptation behaviours, including: determining the crop calendar, selecting plant varieties, applying an intercropping system, determining an appropriate planting pattern, controlling pests and disease, determining planting techniques, and monitoring continuously.

Introduction

Indonesia occupies a strategic position: located in the tropics, between the Asian and Australian continents and between the Pacific and Indian Oceans, and crossed by the equator, it is vulnerable to climate change. Climate is closely related to weather change [1]. Weather changes and global warming can reduce farming production by 5-20 percent. Data from BMKG (the Meteorology, Climatology and Geophysics Agency) [2] indicate that the phenomenon of weather variability can be observed in the change of average rainfall patterns in Indonesia. Among the contributors to Central Java's agricultural GDP (Gross Domestic Product) in 2010-2014, the highest was Wonogiri Regency at 35.01%, according to data processed by the regency's BPS (Central Bureau of Statistics) [3] for the period 2010-2014. One of the sub-districts most affected by drought is Paranggupito. Paranggupito sub-district is about 55 km from the downtown of Wonogiri Regency. This sub-district is the southernmost one in Wonogiri, between two provinces: East Java and Daerah Istimewa Yogyakarta. Paranggupito sub-district consists of 8 villages, all of which are affected by drought. Three villages were designated DESTANA, standing for Desa Tangguh Bencana (Disaster Resilient Village), in the Pratama classification. Those villages are Ketos, Johunut, and Gendayaan. As suggested [4], this status assignment by the Wonogiri Regional Agency for Disaster Management is because the three villages have high vulnerability to drought disaster. Climate change also strongly affects the farming sector of Paranggupito sub-district, particularly the rain-fed land, which is very vulnerable to a long dry season; one affected village is Ketos Village. Water supply on rain-fed land is determined by rainfall conditions. The farmers' vulnerability to drought is determined by the interaction between the potential effects of climate change and the farmers' adaptive capacity. The potential effect, or impact, is the resultant of farmers' sensitivity and exposure to drought or a very inadequate water supply. Considering this background, drought is one of the problems the farmers often encounter in Ketos Village, Wonogiri.
Monographic data of Ketos Village for 2019 [5] indicate that the village's farming lands are all dry land, 640.42 ha in extent, consisting of moorland (587.02 ha) and yards (53.4 ha). According to the same 2019 monographic data, the basic livelihood of the majority (77.76%) of Ketos villagers is farming. The adaptation attempts the farmers have taken are their way of resisting drought, and each adaptation measure taken by the farmers shapes their subsequent adaptation behaviour. The topographic condition of the region and its natural resources, particularly the insufficient water supply, together with the people's strong involvement in farming as the majority livelihood in Ketos Village, make this an interesting case to study. The problem thus generates adaptation behaviour among farmers in the attempt to maintain their farming. This research aims to analyze the adaptation strategies the farmers of Ketos Village take in dealing with drought.

Method

This research employed a qualitative research method. Informants were selected using certain criteria: people whose profession is farming and who have worked as farmers for at least 10 years, as in Table 1. The criteria were chosen on the basis of adaptation and mitigation in the farming sector as the unit of analysis, and on the assumption that people who have worked in farming for 10 years have experienced the droughts occurring in their region. The assumption is based on the argument [6] that drought is a creeping disaster, i.e., one that does not occur suddenly. Informants were selected using purposive and snowball sampling techniques with the criterion that the farmers have been affected by drought in Ketos Village, Paranggupito Sub-district, Wonogiri Regency (see Table 1). The strategy used was a case study. A case study can be used to develop critical thinking ability and to find new solutions to the topic studied [7]. Primary data were obtained through focus group discussion (FGD), in-depth interviews, and participatory observation. A focus group discussion (FGD) is a systematic process of collecting data and information on a specific problem through group discussion [8]. This research employed an FGD with representatives of the farmer groups existing in Ketos Village to obtain primary data for the analysis of the effects of limited water and of the adaptation strategies taken by farmers in dealing with it. Data validation was conducted using source and method triangulation. Triangulation validates data by using something other than the data itself as a comparator. In this research, data source triangulation was conducted by cross-checking the data obtained through several sources. Method triangulation was conducted by cross-checking the data collection methods, including interviews, participatory observation, and documentation study, simultaneously on the same source. The value of triangulation lies in the provision of evidence, whether convergent, inconsistent, or contradictory [10]. The technique of validating data using triangulation is illustrated in Figure 1.

Effect of water restriction on farming in Ketos Village

A long dry season affects plant growth adversely because all physiological activities, such as photosynthesis, respiration, transpiration, and growth rate, and ultimately crop production, are disrupted [11]. The impacts of water scarcity felt by farmers in Ketos Village can be classified into two groups: biophysical and socio-economic impacts.
The biophysical impacts on farming include an increase in pests, especially uret (beetle grubs), an increase in the number of weeds, and changes in both quantity and quality that affect the effectiveness of production. The socio-economic impacts include decreased production due to the erratic rainy season, which forces planting to chase the rains; fluctuations in paddy and other agricultural products; and rural youth leaving to work in the city during the dry season. There is also a change in farmer household assets (the development of animal husbandry as a form of investment for when the dry season arrives, and the cultivation of the perennial crops sengon albasia and teak). The "Asset Vulnerability Framework" of [12] encompasses various forms of asset management that can be used to carry out or develop strategies that sustain viability.

Short-term adaptation strategies taken by farmers to deal with water limitation in Ketos Village

Various measures have been taken in the agricultural sector in response to climate change, including drought, in Ketos Village. They are generally aimed at minimizing climate-related risks, in the sense of increasing resilience and reducing vulnerability to unfavourable climatic conditions [13]. This is because the increase in drought frequency has a negative impact on local production, especially in areas prone to water shortages. The strategy of managing the planting environment is carried out through the farmers' planning of, and adaptation to, their farming, using the following short-term approaches.

Planting time planning

Local weather forecasting, or "Pranatamangsa", is used to determine the date on which the planting season starts and has become the farmers' method of calculating the coming of the planting period in Ketos Village; once one farmer has started planting, the other farmers join in. Its characteristics can be seen in natural signs: if the sound of the srigunting bird comes from the east, the rainy season will begin, and the Ketos Village farmers will therefore start planting simultaneously.

Selection of planting varieties: local superior seeds tolerant of drought conditions

Paddy is a plant very sensitive to water availability, yet it remains a popular commodity for the people of Ketos Village. Prior to 2000, Ketos Village farmers cultivated upland paddy, a type of paddy often grown on dry land. However, as farmers' knowledge progressed, in the 2000s Ketos villagers began to adopt a new paddy variety, "Segreng Handayani". Segreng is a red paddy of a new local superior variety (VUB) which is very adaptive to water limitations. The choice of this red paddy variety is quite profitable: in addition to not requiring a high watering intensity, it is resistant to drought and has a short lifespan of 100 days. The sloping topography of Ketos Village is very suitable for the Segreng variety. Segreng is the farmers' favourite paddy variety because it is more resistant to pests than other paddy varieties, which are not suitable for planting in the Ketos Village area. As Segreng paddy also has a high selling value, it is not surprising that farmers often exchange their Segreng harvest for cheaper white paddy for family consumption. Segreng paddy can grow in various agroecologies and soil types; the main requirements for its growth are suitable soil and climatic conditions.
Climatic factors, especially rainfall, determine the success of cultivation, because the water requirement of Segreng paddy relies on rainfall only; Segreng paddy is indeed resistant to drought stress.

Application of the intercropping planting system

Over the last few decades, the community planting system in Ketos Village has changed: departing from a monoculture cropping system, it changed to intercropping. The intercropping system is an anticipatory measure against crop failure due to drought or a delayed rainy season. The application of intercropping systems based on the water needs of each plant has a very significant impact for farmers. The intercropping applied is between paddy, corn, and cassava, as illustrated in Figure 2. Intercropping and different crop rotations increase the diversity of flora, which is followed by an increase in the diversity of fauna, whether pollinators, natural enemies of pests, or other useful fauna [14].

Cropping pattern

Through a cropping pattern that anticipates the water supply, a number of farmers are able to achieve fairly high productivity, so that their farming income can increase considerably. In the month when the rainy season starts, namely October, farmers begin planting paddy using the "ngawu-awu" technique, intercropped with corn and cassava. Paddy and maize are harvested in January/February. This is followed by planting peanuts in February and harvesting in April. In a year with an expected long rainy period, peanuts can be planted twice: the harvest of the first 3-month planting period is used as seed for the next 3-month planting period, with the rest sold, while during the second planting period the whole produce is sold. In July-August, farmers await the cassava harvest. This pattern includes changing the planting time in order to obtain optimal growth [15].

Drought-tolerant cultivation techniques

"Wana" is what Wonogiri farmers call their agricultural land in the form of dry (rain-fed) moor, on which typical paddy planting techniques embody the local wisdom of the southern, dry-land farmers. The hereditary tradition of "ngawu-awu" is the way farmers in Ketos Village welcome the rain. Generally, paddy is planted by broadcasting, even spreading the seeds before the rainy season arrives; this method is called "ngawu-awu" and saves time. In the "ngawu-awu" activity, farmers sow paddy seeds on dry land that has been processed manually or using farming tools, namely rakes. The soil is processed by turning the dry soil so that no traces remain of the cracks caused by drought. The "ngawu-awu" process is considered successful if the rain comes on time; according to the Javanese calendar, there is a right moment for the farmers to carry out the "ngawu-awu" process.

Land intensification and continuous monitoring

Land intensification is carried out because of the farmers' limited land. Farmers regularly apply organic fertilizer/manure from their respective household livestock as basic fertilizer. Ketos Village farmers also buy sacked organic fertilizer, on the principle that even though it is expensive, if the results are good the fertilizer will be purchased.
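As a compact way to read the rotation described under "Cropping pattern" above, the following sketch (ours) lays the Ketos calendar out month by month; the months of the optional second peanut period are not stated in the text and are approximated here.

# Illustrative month-by-month reading of the Ketos Village cropping
# pattern described above; boundaries are approximate.
CROP_CALENDAR = [
    ("plant paddy ('ngawu-awu'), intercropped with corn and cassava", ["Oct"]),
    ("harvest paddy and maize", ["Jan", "Feb"]),
    ("plant peanuts (first period)", ["Feb"]),
    ("harvest peanuts; part kept as seed if the rains look long", ["Apr"]),
    ("optional second peanut period in a long rainy year (approx.)", ["May", "Jun", "Jul"]),
    ("harvest cassava", ["Jul", "Aug"]),
]

for activity, months in CROP_CALENDAR:
    print(f"{'/'.join(months):>11}: {activity}")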
Chemical fertilizers (urea, NPK, phosphate) are also widely used because of information, offers, or promotions from agents who often come to farmer group meetings. Good soil cultivation aims to form a good root system, which ensures optimal absorption of water and nutrients in the soil and, in turn, high plant quality. Soil processing is carried out routinely; the local terms for tillage in Ketos Village are "diwalik/diluku/dibrujul". After the first rain falls, just before planting the first crop, the soil is dibbled or paliran (furrows) are made; for the next crop, the farmers no longer need to plough the soil. The choice of tools for soil cultivation depends on the thickness of the soil solum: a hoe alone is enough on land with a thin solum. Such technology is compatible with the technology recommended for sustainable agricultural systems, which is also recommended for dry-land farming in India [16]. Maintenance is also carried out by continuously monitoring attacks by pests, diseases, and nuisance plants, an activity often referred to as "ndangir", as illustrated in Figure 3.

Medium-term adaptation strategies taken by farmers to face water limitation in Ketos Village

The strategy of managing the planting environment can be carried out through various planning and adjustment efforts in agricultural activities, resource management, and the application of agricultural technology to overcome the impacts of climate change and anomalies, namely drought [17]. The farmers take the following medium-term adaptation strategies.

Economic adaptation

Several measures have been taken by farming communities affected by water shortages in relation to the economy: farmers borrow money from the Gapoktan (Farmer Group Association) as capital loans for running their farms. Another adaptation strategy is the "Tandon Pangan" (food reserve), whereby many people keep stocks of foodstuffs and therefore do not worry about food shortages; they rely on past agricultural savings as a form of anticipation of the dry season.

Empowerment of farmers through capacity building and facilitation

Representatives of Ketos Village farmers have participated in PMD (Empowerment of Village Communities) training on TTG (Appropriate Technology). On that occasion, they discussed the cultivation of herbal chilli plants that can grow on rocks (location-specific varieties) in Ketos Village; the long time to harvest, however, makes farmers less interested in cultivating them. The Department of Agriculture provides facilitation through consultation with the Field Agricultural Extension Officer, but this is limited to farmers' problems in their farming.

Use of slopes

Ketos Village has a variety of topographical conditions. According to the Ketos Village profile data of 2019, the topography of the village consists of lowland (269.00 ha), hills (107.00 ha), highland or mountain (120.00 ha), and slopes (0.20 ha), as illustrated in Figure 4. On sloping to bumpy land, ridge terraces are constructed. On the embankment or terrace lip, grasses are planted as animal feed, while on the sloped terraces other drought-tolerant commodities are planted, such as rhizomes (turmeric, ginger, etc.), gudai beans, and herbal chilli, as illustrated in Figure 5.

Conclusion

Climate change strongly affects the farming sector of Paranggupito sub-district, particularly the rain-fed land, which is very vulnerable to a long dry season; one affected village is Ketos Village. The impacts are divided into biophysical and socio-economic ones.
The adaptation attempts the farmers have taken are their form of resistance to drought. The short-term adaptation strategies include: (1) planning the planting time, (2) selecting plant varieties, i.e., local superior seeds tolerant of drought conditions, (3) applying the intercropping system, (4) cropping patterns, (5) drought-tolerant cultivation techniques, and (6) land intensification and continuous monitoring. Meanwhile, the medium-term adaptation strategies include (1) economic adaptation, (2) farmers' empowerment through capacity building and facilitation, and (3) utilization of slope land.
2021-07-30T20:05:29.827Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "5de69652ff3bc6069890a28259e2c143852d4481", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/824/1/012077", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "5de69652ff3bc6069890a28259e2c143852d4481", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Physics" ] }
102341765
pes2o/s2orc
v3-fos-license
Compact hyper-K\"ahler categories We define and study the notion of hyper-K\"ahler category. On the theoretical side, we focus on construction techniques and deformation theory of such categories. We also study in details some examples : non-commutative Hilbert schemes of points on a K3 surface and a categorical resolution of a relative compactified Prymian constructed by Markushevich and Tikhomirov. Background An irreducible holomorphically symplectic manifold (or compact hyper-Kähler manifold) is a smooth compact simply connected Kähler manifold with a unique (up to scalar) nowhere degenerate holomorphic 2-form. Together with complex tori and Calabi-Yau manifolds, such varieties are building blocks for the decomposition of Ricci-flat manifolds (see [Bog74,Bea83]). However, contrary to the former two, it is quite hard to produce many different examples (or to classify them all) of compact hyper-Kähler manifolds. Up to deformation, the following are the only known examples of such manifolds : -S [n] , the Hilbert-Douady scheme of n points on a K3 surface (see [Bea83]), -K n (A), the generalized Kümmer variety of level n associated to an abelian surface (see [Bea83]), -A crepant resolution of M S (2, 0, 4), the moduli space of semi-stable rank-2 torsion free sheaves with c 1 = 0 and c 2 = 4 on a K3 surface (see [O'G99]), -A construction similar to the previous one, but where the K3 surface is replaced by an abelian surface (see [O'G03]). On the other hand, the derived categories of compact hyper-Kähler manifolds form an extremely interesting playground to test Kontsevich's Homological Mirror Symmetry conjecture. Indeed, one expects that such categories have a lot of autoequivalences which do not come from automorphisms of the complex structure but are instead related to Dehn twists along lagrangian projective spaces in the mirror manifold (see [HT06]). 1 Hence, it seems of high importance to have more examples of derived categories of compact hyper-Kähler manifolds. Or perhaps not so much derived categories of compact hyper-Kähler manifolds at such, but we definitively need more examples of triangulated categories which closely look like these derived categories. The purpose of this paper is to introduce the notion of compact hyper-Kähler categories, to study some of their basic properties and to provide some interesting new examples. Roughly speaking, a compact hyper-Kähler category of dimension 2n is a smooth compact simply connected category with a Serre functor given by the translation by [2n] and endowed with a unique (modulo scalar) nondegenerate categorical 2-form (all these notions will be made precise in the sections 2 and 3 of this paper). One easily shows that if X is an algebraic variety then D b (X) is a compact hyper-Kähler category if and only if X is a compact hyper-Kähler manifold. Our main technique to construct new examples of such categories is based on Kuznetsov's theory of categorical crepant resolution of singularities (see [Kuz08b]). It is well known that one can produce a lot of singular holomorphically symplectic varieties (see [Muk84]) and that crepant resolutions of such varieties are holomorphically symplectic manifolds. Unfortunately, experience tells us that it is almost always impossible to find crepant resolutions of interesting singular holomorphically symplectic varieties (see [CK07,CK06,KL07,LS06,KLS06,MT07,Sac13]). 
It is however not too difficult to produce categorical strongly crepant resolutions of singularities of some nice singular holomorphically symplectic varieties. For instance, if X is a hyper-Kähler manifold and G is a finite group of automorphisms of X preserving the symplectic form, then X/G admits a categorical strongly crepant resolution of singularities (see Theorem 2.2.1 below for a more general statement). The existence of categorical crepant resolutions for all quotient singularities is in clear contrast with the known results in the commutative world. It is indeed a notoriously difficult problem to decide when a quotient singularity of dimension bigger than 4 admits a crepant resolution [Kal02, BKR01]. Hence, it seems that examples of compact hyper-Kähler spaces are far easier to construct in the non-commutative setting. Given the scarcity of examples of commutative hyper-Kähler manifolds, the ease with which one can construct non-commutative incarnations of such varieties plainly justifies, in my opinion, the detailed study of such hyper-Kähler categories. Furthermore, unexpected properties of such categories will probably be discovered in the near future and they will certainly shed new light on the algebraic study of compact hyper-Kähler manifolds.

Overview of the paper

Let me give a quick overview of the theory of compact hyper-Kähler categories developed in this paper. First of all, one would like to give a definition of compact hyper-Kähler categories which is invariant by equivalence. The work of Huybrechts and Nieper-Wisskirchen [HNW11] suggests that it is possible in some geometric cases. Indeed, they prove that if X_1 and X_2 are derived-equivalent smooth projective varieties, then X_1 is hyper-Kähler if and only if X_2 is hyper-Kähler. A complete definition of compact hyper-Kähler categories will be given in section 3 of this paper. For now, let me focus on an important special case:

Definition 1.2.1 Let X be a smooth projective variety and T ⊂ D^b(X) be a full admissible subcategory, and assume furthermore that O_X ∈ T. The category T is said to be compact hyper-Kähler of dimension 2m (with respect to its embedding in D^b(X)) if the Serre functor of T is the shift by 2m and Hom^•(O_X, O_X) ≃ C[t]/(t^{m+1}), with t homogeneous of degree 2.

This definition would be independent of the embedding if one could prove that for all smooth projective Y such that T is a full admissible subcategory of D^b(Y) with O_Y ∈ T, one has H^•(O_X) ≃ H^•(O_Y) (as graded algebras). This does not seem to be easy. Indeed, even in the case where T ≃ D^b(X) ≃ D^b(Y) and X is hyper-Kähler, the proof given in [HNW11] that H^•(O_Y) ≃ H^•(O_X) relies on deep structural results for the Hochschild cohomology of compact hyper-Kähler manifolds. Nevertheless, one would expect that these two graded algebras are isomorphic whenever X and Y are derived equivalent. I discuss this invariance problem in more detail in [Abu16]. It would certainly be desirable to know if this result can be generalized to higher-dimensional cases. It would also be very interesting to discover which structural results known for the Hochschild cohomology rings of compact hyper-Kähler manifolds are still valid in the categorical context. For instance, if X is a compact hyper-Kähler manifold of dimension 2m and HH̄^2(X) is the subalgebra of HH^*(X) generated by HH^2(X), Verbitsky [Ver96] proved the isomorphism HH̄^2(X) ≃ Sym^• HH^2(X) / ⟨ α^{m+1} : q(α) = 0 ⟩, where q is the Beauville-Bogomolov quadratic form. This result is heavily used in [HNW11] to prove the derived invariance of the compact hyper-Kähler property for projective varieties.
In my opinion, it would be fascinating to have a similar statement for the Hochschild cohomology of a compact hyper-Kähler category. The last two sections of the paper are dedicated to the study of specific examples. In section 4, we focus on a question that was asked to me by Misha Verbitsky.

Question 1.2.6 Let S be a K3 surface. For which G ⊂ S_n does the quotient S × · · · × S/G admit a non-commutative crepant resolution which is hyper-Kähler?

The answer to this question in the commutative world is due to Verbitsky himself, and he proves that such a resolution exists only for G = S_n (and the resolution is the Hilbert scheme of n points on S). In the non-commutative world, there are quite a few more examples and we have the:

Theorem 1.2.7 Let S be a K3 surface and n ≥ 1. Let G ⊂ S_n act on S × · · · × S by permutations. The quotient S × · · · × S/G admits a categorical crepant resolution which is a hyper-Kähler category if and only if G is one of the following:
- G = S_n,
- G = A_n (the alternating group),
- n = 5 and G = F_5^*,
- n = 6 and G = PGL_2(F_5),
- n = 9 and G = PGL_2(F_8),
- n = 9 and G = PGL_2(F_8) ⋊ Gal(F_8/F_2).
Furthermore, this categorical resolution is always non-commutative in the sense of Van den Bergh.

The study of the Betti cohomology ring of Hilbert schemes of points on a K3 surface is a classical subject where hyper-Kähler geometry, number theory and representation theory interact fruitfully with one another. We expect that the study of the Hochschild cohomology rings of the above hyper-Kähler categories should reveal interesting new connections between these three topics. In the final section of this paper, we describe in some detail a categorical strongly crepant resolution of the relative compactified Prymian constructed by Markushevich and Tikhomirov in [MT07]. It is known that this fourfold is a singular irreducible symplectic variety of dimension 4 which has no crepant resolution of singularities. Our main result in section 5 is the:

Theorem 1.2.8 The relative compactified Prymian of Markushevich and Tikhomirov admits a categorical strongly crepant resolution which is a hyper-Kähler category of dimension 4. We also compute the Hochschild cohomology numbers of this category.

This result is based in an essential way on the computation of the Hodge numbers of some resolution of singularities of the Prymian, carried out in the appendix by Grégoire Menet. Using the deformation theory developed in section 3, we prove the:

Proposition 1.2.9 A small deformation of the categorical strongly crepant resolution of the relative compactified Prymian of Markushevich-Tikhomirov can never be equivalent to the derived category of a projective variety (and it thus provides a counter-example to Conjecture 5.8 in [Kuz16]).

We expect in fact that no deformation of this category is ever equivalent to the derived category of a projective variety. If our expectation is correct, then the moduli space of hyper-Kähler categories of dimension 4 (if such an object exists) contains a connected component which is purely non-commutative!

Acknowledgements. I am very thankful to Chris Brav, Victor Ginzburg, Daniel Huybrechts, Sasha Kuznetsov, Richard Thomas and Matt Young for very interesting discussions about the various possible definitions of compact hyper-Kähler categories and the properties one would expect them to enjoy. I would also like to thank my former PhD advisor Laurent Manivel for many helpful comments on a preliminary version of this work.
I am especially grateful to Grégoire Menet for supplying me with the Hodge numbers I was looking for and to Misha Verbitsky for asking the question studied in section 4 of this paper.

Categorical crepant resolutions of singularities

As mentioned in the introduction, our examples of compact hyper-Kähler categories are based on the theory of categorical crepant resolutions of singularities. This notion has been developed in [Kuz08b] and was further explored in [Abu13a, Abu15].

Definition and motivations

Let us recall that a crepant resolution of a normal Gorenstein algebraic variety Y is a resolution of singularities π : X → Y such that π^* ω_Y = ω_X, where ω_Y is the dualizing line bundle of Y. Crepant resolutions are often considered to be minimal resolutions of singularities (see the first part of [Abu13a] for an extended discussion about minimality for resolutions of singularities). Unfortunately, crepant resolutions of singularities are quite rare. The following example is very classical: let Y be the cone over the Veronese embedding v_2(P^3) ⊂ P(S^2 C^4). The variety Y is analytically equivalent to C^4/{1, −1}. Hence, it is locally analytically Q-factorial (see [KM98], Chapter 5), so that it has no small resolution of singularities. Furthermore, the blow-up of Y along the vertex gives a resolution of singularities where the coefficient of the exceptional divisor in the dualizing bundle formula is 1 (this is an obvious computation, spelled out below). As a consequence, the variety Y has terminal singularities. Since it admits no small resolution, we find that Y has no crepant resolution of singularities.

Given a singularity which does not admit any crepant resolution, one would still like to know if it is possible to produce minimal resolutions from the point of view of category theory. Kuznetsov's insight is that such categorical "minimal" resolutions should be constructed as categorical crepant resolutions (see [Kuz08b], section 4).

Definition 2.1.2 Let Y be an algebraic variety and 𝒳 be a smooth Deligne-Mumford stack. We say that 𝒳 homologically dominates Y if there exists a proper morphism p : 𝒳 → Y such that Rp_* O_𝒳 ≃ O_Y.

Typical examples of this phenomenon include resolutions of singularities of a variety with rational singularities and the canonical projection from a smooth Deligne-Mumford stack to its coarse moduli space.

Definition 2.1.3 Let Y be an algebraic variety with Gorenstein rational singularities. Let p : 𝒳 → Y be a smooth Deligne-Mumford stack which homologically dominates Y. A categorical resolution of Y is a full admissible subcategory T ⊂ D^b(𝒳) such that Lp^* D^perf(Y) ⊂ T.

In [Kuz08b], the definition of categorical resolution was restricted to the case where 𝒳 is a variety. A much more general notion of categorical resolution has been defined and studied by Kuznetsov and Lunts in [KL12]. The main advantage of their definition is that one can prove the existence of a categorical resolution for any scheme (!) of finite type over C. With Definition 2.1.3, we lie in the middle. The possibility to work with Deligne-Mumford stacks allows us to produce interesting examples of non-commutative resolutions of singularities (see [Abu15]). On the other hand, many elementary techniques and results from [Kuz08b] are still valid when 𝒳 is a smooth Deligne-Mumford stack, with proofs being exactly the same.
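For the record, here is the "obvious computation" mentioned in the classical example above (our addition). Blowing up the vertex of the cone over v_2(P^3) gives an exceptional divisor E ≅ P^3 with normal bundle O_E(−2), and adjunction determines the discrepancy a in K_X = π^* K_Y + aE (note that π^* K_Y restricts trivially to E):

% Discrepancy of the blow-up of the vertex (our computation).
\[
  \mathcal{O}_E(-4) \;=\; K_E \;=\; (K_X + E)\big|_E
  \;=\; (a+1)\, E\big|_E \;=\; (a+1)\,\mathcal{O}_E(-2)
  \quad\Longrightarrow\quad a = 1 > 0,
\]
% so the resolution is discrepant and the singularity is terminal,
% as claimed in the text.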
Definition 2.1.4 (Categorical crepancy, [Kuz08b]) Let Y be an algebraic variety with Gorenstein rational singularities and p : 𝒳 → Y be a Deligne-Mumford stack homologically dominating Y. Let δ : T ↪ D^b(𝒳) be a categorical resolution of Y and let p_{T*} : T → D^b(Y) be the composition of Rp_* with δ.
- We say that p_{T*} : T → D^b(Y) is a weakly crepant resolution of Y if for all F ∈ D^perf(Y) we have p_T^* F ≃ p_T^! F, where p_T^* and p_T^! are the left and right adjoints to p_{T*}.
- We say that p_{T*} : T → D^b(Y) is a strongly crepant resolution of Y if the following two conditions hold:
1. the category T is a module category over D^perf(Y);
2. the identity functor is a relative Serre functor for T with respect to the map p_{T*}.

Let us make a few comments on this definition. The first requirement in the definition of a categorical strongly crepant resolution is that T has a module structure over D^perf(Y) (see [Kuz08b], section 3). Assume that T is a categorical strongly crepant resolution of a projective variety Y. Then the (absolute) Serre functor of T is given by the tensor product with p^* ω_Y [dim Y]. Note also that a categorical strongly crepant resolution of a Gorenstein rational singularity is automatically a categorical weakly crepant resolution of this singularity, but the converse is not true (see [Kuz08b], section 8). However, in the purely geometric setting (that is, when T ≃ D^b(X) for some algebraic variety X), all these notions coincide [Abu13a]. I refer to [Kuz08b, Kuz08a, Abu13b, SVdB] for the existence of categorical crepant resolutions of determinantal varieties.

Categorical crepant resolutions and quotient singularities

In this sub-section, I will recall the main result of [Abu15]. It will be useful to construct new compact hyper-Kähler categories starting from a compact hyper-Kähler variety endowed with a finite group of symplectic automorphisms.

Theorem 2.2.1 ([Abu15]) Let X be a quasi-projective variety with normal Gorenstein quotient singularities and let 𝒳 be a smooth separated Deligne-Mumford stack whose coarse moduli space is X. Assume that the dualizing line bundle of 𝒳 is the pull-back of the dualizing line bundle on X; then D^b(𝒳) is a strongly crepant resolution of X. Furthermore, there exists a sheaf of algebras A on X such that D^b(𝒳) ≃ D^b(X, A). Hence, the pair (X, A) is a non-commutative crepant resolution of X in the sense of Van den Bergh.

Note that if X is a normal quasi-projective variety with quotient singularities, there is always a smooth separated Deligne-Mumford stack associated to it as in the above statement (see Proposition 2.8 of [Vis89]). The non-trivial hypothesis (which cannot be removed) is that the dualizing bundle of the Deligne-Mumford stack associated to X is the pull-back of the dualizing bundle on X. This amounts to checking that on an étale atlas of 𝒳, the line bundle ω_𝒳 is equivariantly locally trivial (for the isotropy groups of the fixed points of the étale atlas of 𝒳). This finally boils down to checking that for any x ∈ X, there exists an étale neighborhood U_x of x ∈ X such that U_x = V/G, where V is a vector space and G is a subgroup of SL(V). This holds in particular for a variety whose singularities are isolated points locally analytically equivalent to a cone over v_2(P^3) ⊂ P(S^2 C^4). In the local case, the above result has been known for a long time (see [vdB04], for instance). The heart of Theorem 2.2.1 is the existence of categorical crepant resolutions for quotient singularities in the global setting. Indeed, there is a priori no reason for the local resolutions constructed in [vdB04] to glue globally. The main point of [Abu15] is to exhibit a sheaf of non-commutative algebras which provides such a gluing of the local resolutions.
Compact hyper-Kähler categories

Recall that a holomorphically symplectic variety of dimension 2m is (in the projective case) a smooth projective variety X having trivial canonical bundle and endowed with a 2-form σ ∈ H^0(X, Ω^2_X) such that σ^{∧m} ≠ 0. One says that X is compact hyper-Kähler if X is simply connected and σ generates H^0(X, Ω^2_X). Since σ defines an isomorphism σ : Ω_X → T_X, one can equivalently say that X is holomorphically symplectic if X is a smooth, simply connected, projective variety with trivial canonical bundle and there exists a Poisson bracket θ ∈ H^0(X, Λ^2 T_X) such that θ^{∧m} ≠ 0. Hence, one could be tempted to give the following definition:

Definition 3.0.1 (naive definition of holomorphically symplectic categories) Let T be a smooth compact triangulated category. We say that T is holomorphically symplectic of dimension 2m if the shift by 2m is a Serre functor for T and there exists θ ∈ HH^2(T) such that θ^{•m} ≠ 0.

Such a definition has the advantage of being invariant by equivalences. Its (non-negligible) drawback is that the derived categories of many non-holomorphically-symplectic varieties would then have to be considered as holomorphically symplectic categories. Indeed, the Hochschild-Kostant-Rosenberg isomorphism [Mar09] shows that for X smooth projective, there is a decomposition (compatible with products on both sides [CRVdB12, HNW11]) HH^k(X) ≃ ⊕_{p+q=k} H^q(X, Λ^p T_X). Hence, with Definition 3.0.1, the derived category of an abelian surface would be considered as a holomorphically symplectic category, which is something we want to avoid. The main problem here is to define categorically one of the algebras H^0(X, Λ^• T_X) or H^0(X, Λ^• Ω_X) (the latter being isomorphic, by Hodge duality, to the algebra H^•(X, O_X)). We will give such a definition in the first subsection below.

Homological units

Let X be an algebraic variety and let F ∈ D^b(X) be an object whose rank is not zero. Then the trace map Tr : Hom^•(F, F) → H^•(O_X) splits and gives a splitting Hom^•(F, F) ≃ H^•(O_X) ⊕ Hom^•(F, F)_0, where Hom^•(F, F)_0 is the graded vector space of trace-less endomorphisms. Hence, the algebra H^•(O_X) appears as a maximal direct factor of the endomorphism algebra of any object in D^b(X) whose rank is not vanishing. We will see below that this algebra is an important categorical invariant.

Definition 3.1.1 Let C be an abelian category with a non-trivial rank function and T be a full admissible subcategory in D^b(C). A graded algebra T^• is called a homological unit for T (with respect to C) if T^• is maximal for the following properties:
1. for any object F ∈ T, there exists a pair of morphisms i_F : T^• → Hom^•(F, F) and t_F : Hom^•(F, F) → T^•, such that:
- the morphism i_F : T^• → Hom^•(F, F) is a graded algebra morphism which is functorial in the following sense: for F, G ∈ T, a ∈ T^k for some k, and any morphism ψ : F → G, there is a commutative diagram expressing that i_G(a) ∘ ψ = ψ ∘ i_F(a) in Hom^k(F, G);
- the morphism t_F : Hom^•(F, F) → T^• is a graded vector space morphism which satisfies the dual functoriality property of i_F;
2. for any F ∈ T whose rank (seen as an object in D^b(C)) is not vanishing, the morphism t_F splits i_F as a morphism of graded vector spaces.

With hypotheses as above, an object F ∈ T such that i_F is an isomorphism will be called a unitary object.
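The mechanism behind the splitting requirement in Definition 3.1.1 is the standard trace computation; we record it here (our addition, with notation as above):

% Why non-vanishing rank yields the splitting (our computation).
% For a in H^k(O_X), put i_F(a) = id_F \otimes a. Then
\[
  \operatorname{Tr}\bigl(\operatorname{id}_{\mathcal{F}} \otimes a\bigr)
  \;=\; \operatorname{Tr}(\operatorname{id}_{\mathcal{F}})\cdot a
  \;=\; \operatorname{rk}(\mathcal{F})\, a ,
\]
% so t_F := rk(F)^{-1} Tr : Hom^\bullet(F, F) -> H^\bullet(O_X)
% satisfies t_F o i_F = id, which is exactly the splitting of
% condition 2 whenever rk(F) is not zero.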
Of course, one cannot expect that all examples of homological units as defined above will be significant. In the main applications of the present paper, one will look at C = Coh(X), Coh_G(X) or Coh(X, α), where X is a smooth projective variety, G a reductive algebraic group acting linearly on X, α a Brauer class on X, and the rank function will be the obvious one. However, it is quite possible that many new examples of homological units coming from representation theory will be discovered, so that it seems sensible to give a general definition that does not restrict to purely geometrical examples. Note also that the hypothesis of non-vanishing rank for the splitting is a technical hypothesis which is important. It would be very interesting to know if there are some non-trivial examples where the splitting occurs whatever the rank of the object.

1. Let X be a smooth algebraic variety and α ∈ Br(X) a class in the Brauer group of X. Consider C = Coh(X, α), the category of coherent α-twisted sheaves on X. One can define a rank function on C as the rank of F when seen as an O_X-module. Then for any F ∈ D^b(C), we have a trace map Tr : RHom(F, F) → O_X, which splits when the rank of F is not zero. As a consequence, for all F ∈ D^perf(C), we have a graded algebra morphism i_F : H^•(X, O_X) → Hom^•(F, F), which is split (as a morphism of vector spaces) when the rank of F is not zero. The algebra H^•(X, O_X) is maximal for the properties required in Definition 3.1.1 and it is a homological unit for C.

2. Let X be a smooth algebraic variety and G be a reductive algebraic group acting linearly on X. For any F ∈ D^b(Coh_G(X)), the trace map Tr : RHom(F, F) → O_X is G-equivariant and it is split if the rank of F is non-zero. Hence, for all F ∈ D^b(Coh_G(X)), we have a graded algebra morphism i_F : H^•(X, O_X)^G → Hom^•(F, F), which is split (as a morphism of vector spaces) when the rank of F is not zero. The algebra H^•(X, O_X)^G is maximal for the properties required in Definition 3.1.1 and it is a homological unit for C.

This readily generalizes to any smooth Deligne-Mumford stack. Namely, if 𝒳 is a smooth Deligne-Mumford stack, then H^•(O_𝒳) is a homological unit for D^b(𝒳). Note that all line bundles on 𝒳 are unitary objects. One would like to know when the homological unit is unique and independent of the embedding in the derived category of an abelian category. This question seems to be interesting in itself and does not have an obvious answer. I discuss this invariance problem in [Abu16], where I give some applications to the conjectural derived invariance of Hodge numbers. The following result is a slight generalization of the fifth assertion of Theorem 2.0.10 in [Abu16]; under the hypotheses of that statement, we have an isomorphism of graded algebras H^•(O_X) ≃ H^•(O_Y).

Proof: ◮ The proof follows closely the lines of the proof of Theorem 2.0.10 in [Abu16]. First we will prove that there exists L ∈ Pic(X) such that the rank of Φ(L) is non-zero. We proceed by contradiction: assume that for all L ∈ Pic(X), the rank of Φ(L) is zero. First of all, using Orlov's representability theorem, we can see Φ as a Fourier-Mukai functor with some kernel G ∈ D^b(X × Y), where p and q are the natural projections in the diagram:

Since Y is smooth, the vanishing of the rank of Φ(L) for all L ∈ Pic(X) implies the vanishing: for generic y ∈ Y and for all L ∈ Pic(X). This means that for all L ∈ Pic(X) and generic y ∈ Y. Using the projection formula for Rp_* and the Leray spectral sequence for Rp_* and RΓ, we find that it is equivalent to: for all L ∈ Pic(X) and generic y ∈ Y. As Lp^* C(y) = O_{X×y}, we find that: for all L ∈ Pic(X), generic y ∈ Y, and where j_y : X × y ↪ X × Y is the natural inclusion. This can be rewritten as: for all L_1, ..., L_p ∈ Pic(X), all k_1, ..., k_p ∈ N and generic y ∈ Y. Using the Grothendieck-Riemann-Roch theorem, we find: for all L_1, ..., L_p ∈ Pic(X), all k_1, ..., k_p ∈ Z and generic y ∈ Y.
As a consequence, we get: for all L_1, ..., L_p ∈ Pic(X), all 0 ≤ k ≤ 4, and all k_1, ..., k_p ∈ N such that k_1 + ... + k_p = k. The Chern character is taken here with values in H^•(X, C). Since numerical equivalence and homological equivalence coincide for curves and divisors, we deduce that:

Let us prove that ch(j_y^* G)·td(X)_4 also vanishes. By equation (1), we know that ch(j_y^* G)·td(X)_4 is in the primitive cohomology of X. Assume that ch(j_y^* G)·td(X)_4 ≠ 0; then the Hodge-Riemann bilinear relations imply that: ch(j_y^* G)·td(X)^{-1} has non-vanishing components only in degrees 4, 6, 8 and its degree-4 component is ch(j_y^* G)·td(X)_4. Hence, we find that ch(j_y^* G^∨) has non-vanishing components only in degrees 4, 6, 8 and that its degree-4 component is ch(j_y^* G)·td(X)_4 (here G^∨ is the derived dual of G). We deduce that: But the Grothendieck-Riemann-Roch theorem implies: On the other hand, the left adjoint to Φ is an equivalence. We conclude that: This contradicts equation (2), and this proves that ch(j_y^* G)·td(X)_4 = 0 for generic y ∈ Y. We deduce that ch(j_y^* G) = 0 and then that ch(j_y^* G^∨) = 0 for generic y ∈ Y. This translates as ch(Φ^*(C(y))) = 0. But this is impossible. Indeed, Φ^* : Φ(D^b(X)) → D^b(X) being an equivalence, we know that it induces a bijection between the image of the Chern character from Φ(D^b(X)) to H^•(Y, C) and H^•(X, C). Since C(y) ∈ Φ(D^b(X)) and its class in H^•(Y, C) is non-zero, we know that the class of Φ^*(C(y)) must also be non-zero. We conclude that our starting hypothesis is absurd. Thus, there exists L ∈ Pic(X) such that the rank of Φ(L) is non-zero. Using the trace map, we find an injection of graded algebras: As a consequence, we have an injection of graded algebras:

Now we consider the functor Φ^*. The very same proof as above shows that there exists L ∈ Pic(Y) such that the rank of Φ^*(L) is non-zero. The trace map again yields an injection of graded algebras: On the other hand, given a map a_k (where Ψ^! is the right adjoint to Ψ), using axiom TR3 of the definition of a triangulated category, we deduce that the natural map between the corresponding Hom spaces is compatible with the injections above, which concludes the proof. ◭

Definition and construction techniques

We first recall the definitions of smoothness, compactness and regularity for triangulated categories.

Definition 3.2.1 ([Orl14]) Let T be the derived category of DG-modules over some DG-algebra (A, d) (over C). The category T is said to be:
- smooth, if A is perfect as a DG-module over A ⊗ A^op,
- compact, if ⊕_k H^k(A, d) is finite-dimensional,
- regular, if it has a strong generator,
- Calabi-Yau of dimension p, if the shift by p is a Serre functor for T.

Consider T = D^b(X), where X is an algebraic variety over C. It is easily shown that X is smooth and proper over C if and only if T is smooth and compact (see [Kon09]). Note also that if T is a semi-orthogonal component of the derived category of a smooth proper scheme over C, then T is smooth, compact and regular (see [Orl14]). With these definitions in hand, we can introduce the main notion of this paper:

Definition 3.2.2 (compact hyper-Kähler categories) Let T be a smooth, compact and regular triangulated category which is closed under direct summands. Assume that T is a semi-orthogonal component of D^b(C), where C is an abelian category with a rank function. We say that T is a compact hyper-Kähler category (with respect to its embedding in D^b(C)) if T is Calabi-Yau of dimension 2m and there is a unique homological unit for T (with respect to its embedding in D^b(C)), which is isomorphic to C[t]/(t^{m+1}) with t homogeneous of degree 2.
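As a sanity check on Definition 3.2.2, here is the simplest commutative instance, m = 1 (a worked example we add for illustration):

% The K3 case of Definition 3.2.2 (our worked example).
\[
  T = D^b(S),\ S \text{ a K3 surface}: \quad
  \operatorname{Hom}^\bullet(\mathcal{O}_S, \mathcal{O}_S)
  = H^\bullet(S, \mathcal{O}_S)
  = \mathbb{C}\cdot 1 \oplus \mathbb{C}\cdot \bar\sigma,
  \quad \bar\sigma \in H^2(\mathcal{O}_S),\ \bar\sigma^2 = 0,
\]
\[
  \text{so } H^\bullet(\mathcal{O}_S) \simeq \mathbb{C}[t]/(t^2)
  \text{ with } \deg t = 2,
  \qquad S_{D^b(S)} = (-)\otimes\omega_S[2] = [2],
\]
% hence D^b(S) is compact hyper-Kähler of dimension 2, consistent
% with Proposition 3.2.3 below.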
Proposition A.1 of [HNW11] implies the following:

Proposition 3.2.3 Let X be an algebraic variety. The category D^b(X) is compact hyper-Kähler (with respect to its embedding in D^b(X)) if and only if the variety X is compact hyper-Kähler.

We will see that one can construct many examples of compact hyper-Kähler categories which are non-commutative. It seems extremely hard to find new examples of commutative compact hyper-Kähler manifolds. Actually, one can produce a lot of compact singular holomorphically symplectic varieties [Muk84], but almost all of them do not admit any geometric crepant resolution of singularities. Hence, I believe that the following result opens the door to a new world of compact hyper-Kähler spaces.

Theorem 3.2.4 Let Y be a projective Gorenstein variety of dimension 2m with rational singularities, trivial dualizing bundle, and H^•(O_Y) ≃ C[t]/(t^{m+1}), with t homogeneous of degree 2. Any categorical strongly crepant resolution of Y is a compact hyper-Kähler category.

The above statement is slightly ambiguous, as we haven't proved that the notion of compact hyper-Kähler category is independent of the embedding inside the derived category of an abelian category with a rank function. However, our definition of categorical resolution always refers to a Deligne-Mumford stack which homologically dominates Y. In the above statement, we implicitly refer to the embedding of T inside the derived category of this Deligne-Mumford stack.

Proof: ◮ Let p : 𝒳 → Y be a projective Deligne-Mumford stack which homologically dominates Y and let T ⊂ D^b(𝒳) be an admissible full subcategory such that the induced map Rp_* : T → D^b(Y) is a strongly crepant resolution. Since T is an admissible subcategory of the derived category of a smooth projective Deligne-Mumford stack, we know that T is smooth, compact and regular. Furthermore, it is a strongly crepant resolution of a Gorenstein projective variety whose dualizing bundle is trivial; hence T is Calabi-Yau of dimension dim Y = 2m. We are only left to prove that there is a unique homological unit for T (with respect to its embedding inside D^b(𝒳)), which is isomorphic to C[t]/(t^{m+1}) with t homogeneous of degree 2. Since 𝒳 homologically dominates Y, we have H^•(O_𝒳) ≃ H^•(O_Y) ≃ C[t]/(t^{m+1}), and H^•(O_𝒳) is a homological unit for D^b(𝒳). Hence, for all F ∈ T, we have a graded algebra morphism i_F : H^•(O_𝒳) → Hom^•(F, F) given by a ↦ id_F ⊗ a. As a consequence, this morphism satisfies the functoriality condition stated in Definition 3.1.1. Furthermore, this morphism is split when the rank of F is not zero. But O_𝒳 ∈ T, so that there is a unique homological unit for T (with respect to its embedding in D^b(𝒳)), which is isomorphic to C[t]/(t^{m+1}) with t homogeneous of degree 2. ◭

Corollary 3.2.5 Let X be a compact hyper-Kähler variety and G be a finite group of symplectic automorphisms of X. The category D^b(Coh_G(X)) is a compact hyper-Kähler category.

Proof: ◮ One can show directly that D^b(Coh_G(X)) is a compact hyper-Kähler category, but I think it is interesting to show that it is a consequence of Theorem 3.2.4. Indeed, if G is a finite group of symplectic automorphisms of X, then X/G is a projective Gorenstein variety with rational singularities. The generator of H^2(O_X) being G-equivariant, it descends to X/G and its top wedge-product remains non-zero on X/G. As a consequence, we have H^•(O_{X/G}) ≃ C[t]/(t^{m+1}) and Theorem 3.2.4 applies. ◭

Remark 3.2.6 The notion of (holomorphically) symplectic stack has been defined by Pantev, Toën, Vaquié and Vezzosi [PTVV13] and by Zhang [Zha11]. It would of course be desirable to know if one can define the notion of irreducible holomorphically symplectic stack and if the derived categories of such stacks are related to compact hyper-Kähler categories.
Deformation theory for compact hyper-Kähler categories

In this subsection, I will prove some basic results for the deformation theory of compact hyper-Kähler categories. They will be used in the last section of this paper to prove that there exist compact hyper-Kähler categories of dimension 4 whose deformations are never equivalent to the derived category of a projective variety. I will focus on a specific type of deformation of triangulated categories: deformation inside the derived category of an algebraic variety (all results proven below should carry over without any problem to deformations inside the derived category of a Deligne-Mumford stack). Let T ⊂ D^b(X) be a full admissible subcategory. Given a smooth algebraic variety B, one wants to define the deformation of T inside D^b(X) over B.

Definition 3.3.1 Let X be a smooth projective variety, let T ⊂ D^b(X) be a full admissible subcategory and B a smooth connected algebraic variety with a marked point 0 ∈ B. A smooth deformation of T inside X over B is the data of:
- a smooth projective morphism π : 𝒳 → B such that 𝒳_0 = X,
- a full admissible B-linear subcategory D ⊂ D^b(𝒳) with D_0 = T, where E ∈ D^b(𝒳 ×_B 𝒳) is the kernel representing the projection functor D^b(𝒳) → D.

The existence of the kernels in the above definition has been proved by Kuznetsov in [Kuz11]. We have a semi-orthogonal decomposition D^b(𝒳) = ⟨D, ⊥D⟩ and I denote by ⊥E ∈ D^b(𝒳 ×_B 𝒳) the kernel of the projection D^b(𝒳) → ⊥D. Let us display a Cartesian diagram which will be important to study the deformation of T over B.

Proposition 3.3.2 With hypotheses and notation as above, for all b ∈ B, there exists a semi-orthogonal decomposition:

This proposition allows one to think of the D_b for b ∈ B as the deformations of D_0 = T over B.

Proof: here the first equality is the identity Lq_b^* Li_b^* = Lj_b^* Lq^*, the second is the flat base change Rp_{b*} Lj_b^* = Li_b^* Rp_*, the third is adjunction with respect to i_b and the fourth is the projection formula with respect to i_b. By flat base change for the morphism π: As a consequence, we deduce the vanishing: , and the above vanishing finally proves that Hom(F, G) = 0 for all G ∈ D_b and F ∈ ⊥D_b. We are left to show that for all H ∈ D^b(𝒳_b), there exists an exact triangle: with F ∈ ⊥D_b and G ∈ D_b. But on 𝒳 ×_B 𝒳, we have an exact triangle: Hence, for all F ∈ D^b(𝒳_b), we have an exact triangle:

Proof: ◮ Using exactly the same identities as in the proof of Proposition 3.3.2, one shows that

Note that we do not need to assume that the kernel of the projection D^b(𝒳) → D is flat over B.

Proof: ◮ Let 𝒳 → B be a smooth projective morphism such that D is a full admissible subcategory of D^b(𝒳). As a consequence of Theorem 4.5 in [Kuz09], we have an equality: Let us prove that the dimensions of the cohomology vector spaces are upper semi-continuous with respect to b ∈ B for all i. By flat base change for the diagram: we have the equality: Since B is a smooth variety, we can represent Rπ_*(E ⊗ E^T) ⊗ C(b) by a bounded complex of vector bundles on B, say E^•. Thus, we only have to show the following: the cohomology sheaves of E^• ⊗ C(b) are upper semi-continuous for b ∈ B. This is now clear, as the dimension of the image of the differential is lower semi-continuous with respect to B. We have proved that the dimension of HH^i(D_b) is upper semi-continuous with respect to B, for all i. This also holds true for the dimension of HH^i(⊥D_b).
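The semi-continuity step just used can be made completely explicit (our expansion of the argument). With d_b^i the differentials of the complex of vector bundles E^• specialized at b:

% Upper semi-continuity of the cohomology of E^\bullet \otimes C(b)
% (our expansion).
\[
  \dim H^i\bigl(E^\bullet \otimes \mathbb{C}(b)\bigr)
  \;=\; \operatorname{rk} E^i
        \;-\; \operatorname{rk}\, d^i_b
        \;-\; \operatorname{rk}\, d^{i-1}_b ,
\]
% and the rank of a morphism of vector bundles is lower semi-continuous
% in b, so each dim H^i is upper semi-continuous, as claimed.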
By Corollary 7.5 of [Kuz09], we have: But the morphism 𝒳 → B is smooth projective, so that the Hodge numbers of 𝒳_b are constant with respect to B. By the Hochschild-Kostant-Rosenberg decomposition, this implies that the Hochschild numbers of 𝒳_b are constant. Hence the sum of the dimensions of the cohomology vector spaces HH^i(D_b) and HH^i(⊥D_b) is constant with respect to B. But each dimension is upper semi-continuous with respect to B, so that they are in fact both constant with respect to B. ◭

Before turning to deformation results for compact hyper-Kähler categories, I want to comment on the level of generality of the deformation theory used above. In order to define the notion of deformation of an admissible subcategory T ⊂ D^b(X), one could be tempted to work with a seemingly more general definition, as follows. A deformation of T over B is the data of a (not necessarily flat) morphism π : 𝒳 → B and a B-linear admissible subcategory D ⊂ D^b(𝒳 ×_B 𝒳), such that D_0 = T and the flat base change formula holds for D with respect to the diagram: But we have a commutative diagram: In particular, we have Tor_1^B(O_𝒳, C(b)) = 0. By Theorem 22.3 of [Mat89], the morphism 𝒳 → B is flat. Hence, a strictly more general setting than the one developed above for the deformation of triangulated categories cannot be obtained if one requires the following three conditions:
- the total space of the deformation is a full admissible subcategory of the derived category of an algebraic variety,
- the base change formula holds for the total space of the deformation,
- O_𝒳 ∈ D.
The first two conditions seem essential if one wants to get some significant homological results while working with admissible subcategories of derived categories of algebraic varieties. As far as the third condition is concerned, it is satisfied in many examples (for instance, in the setting of non-commutative resolutions of singularities). I will now focus on the deformation theory of compact hyper-Kähler categories. We start with the following:

Lemma 3.3.5 Let X be a smooth projective variety, let T ⊂ D^b(X) be a full admissible subcategory and B a smooth algebraic variety. Let D be a smooth deformation of T over B with respect to π : 𝒳 → B. If O_X ∈ T, then there exists an open neighborhood 0 ∈ U ⊂ B such that O_{𝒳_b} ∈ D_b for all b ∈ U.

Proof: ◮ The condition O_X ∈ T can be translated as: By semi-continuity, there exists an open neighborhood 0 ∈ U ⊂ B such that: ◭

The very same proof also yields:

Lemma 3.3.6 Let X be a smooth projective variety, let T ⊂ D^b(X) be a full admissible subcategory and B a smooth algebraic variety. Let D be a smooth deformation of T over B with respect to π : 𝒳 → B.

We now state our first result on the deformation theory of hyper-Kähler categories. It shows that a "small deformation" of a hyper-Kähler category is still hyper-Kähler.

Proposition 3.3.7 Let X be a smooth projective variety, let T ⊂ D^b(X) be a full admissible subcategory and B a smooth algebraic variety. Let D be a smooth deformation of T over B with respect to π : 𝒳 → B. Assume that O_X ∈ T and that T is a compact hyper-Kähler category (with respect to its embedding in D^b(X)). Then there exists a neighborhood 0 ∈ U ⊂ B such that D_b is compact hyper-Kähler (with respect to its embedding in D^b(𝒳_b)) for all b ∈ U.

Proof: ◮ Let π : 𝒳 → B be the smooth projective morphism in which the deformation D is embedded. We know that D_0 = T is compact hyper-Kähler of dimension 2m (with respect to its embedding in D^b(X)). In particular, the category T is Calabi-Yau of dimension 2m. Hence there exists a quasi-isomorphism: Hence, by Nakayama's lemma, there exists a neighborhood 0 ∈ U ⊂ B such that θ_0 can be lifted to a quasi-isomorphism: This proves that the categories D_b, b ∈ U, are Calabi-Yau of dimension 2m.
Since X_b is smooth projective for all b ∈ B, the categories D_b are also smooth, compact and regular for all b ∈ B. It remains to prove (up to shrinking U) that C[t]/(t^{m+1}) (with t homogeneous of degree 2) is a homological unit for D_b, for all b ∈ U. We know by hypothesis that T contains O_X. Hence, by Lemma 3.3.5, there exists an open 0 ∈ U′ ⊂ U such that the categories D_b contain O_{X_b} for all b ∈ U′. The open U″ is the neighborhood of 0 ∈ B we are looking for. ◭

The above statement shows that being compact hyper-Kähler is an open condition (if one assumes that O_X ∈ D). I also expect it to be a closed condition. Namely:

Conjecture 3.3.8 Let X be a smooth projective variety, let T ⊂ D^b(X) be a full admissible subcategory and B a smooth algebraic variety. Let D be a deformation of T over B. Assume that D_b is compact hyper-Kähler for all b ≠ 0. Then the category D_0 = T is compact hyper-Kähler.

The commutative specialization of this result is well known. Namely, let π: X → B be a smooth projective morphism with B smooth. If X_b is compact hyper-Kähler for all b ≠ 0, then X_0 is also compact hyper-Kähler. It is usually proved using the holonomy principle and the invariance of holonomy groups in smooth families (see [Huy99], section 1). As far as I am aware, there is no algebraic proof of this result. Hence, a proof of Conjecture 3.3.8 would certainly require the design of interesting new categorical techniques. Two key results are to be proved in order to demonstrate Conjecture 3.3.8: the invariance of the Calabi-Yau condition and of the homological unit under smooth deformations.

Conjecture 3.3.9 Let X be a smooth projective variety, let T ⊂ D^b(X) be a full admissible subcategory and B a smooth algebraic variety. Let D be a deformation of T over B. Assume that D_b is Calabi-Yau of dimension r for all b ≠ 0. Then the category D_0 = T is Calabi-Yau of dimension r.

Note that it is very unlikely that this conjecture can be proved by abstract algebraic arguments. Indeed, the work of Keller ([Kel11]) suggests that strong additional hypotheses are usually needed in order to prove that a deformation of a Calabi-Yau algebra is again Calabi-Yau. Hence, the fact that the categories appearing in Conjecture 3.3.9 are subcategories of derived categories of algebraic varieties will certainly play an important role in a potential proof.

Let us conclude this section with a "long-time" deformation result for hyper-Kähler categories. It gives a partial answer to Conjecture 3.3.8 in the four-dimensional case.

Proposition 3.3.10 Let X be a smooth projective variety, let T ⊂ D^b(X) be a full admissible subcategory which is Calabi-Yau of dimension 4 and let B be a smooth algebraic variety. Let D be a smooth deformation of T over B with respect to π: X → B. Assume that O_X ∈ T and that for all b ≠ 0, the category D_b is compact hyper-Kähler of dimension 4 (with respect to its embedding in D^b(X_b)). Then the category D_0 = T is compact hyper-Kähler of dimension 4 (with respect to its embedding in D^b(X_0)).

Proof: ◮ We already know that T is smooth, compact, regular and Calabi-Yau of dimension 4. Since O_X ∈ T, Lemma 3.3.5 implies that there exists an open subset 0 ∈ U ⊂ B such that O_{X_b} ∈ D_b for all b ∈ U. Hence, for all b ∈ U, the algebra H^•(O_{X_b}) is a homological unit for D_b. By hypothesis, for all b ≠ 0 ∈ B, there exists a unique homological unit for D_b (with respect to its embedding in D^b(X_b)), which is C[t]/(t^3) with t in degree 2. As a consequence, for all b ≠ 0 ∈ U, we have:

H^•(O_{X_b}) ≃ C[t]/(t^3),

with t in degree 2.
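At this point of the argument it may help to display what is actually being verified for each D_b. The following is a paraphrase of how "compact hyper-Kähler of dimension 2m" is used in this section, not a restatement of the paper's official definition (which appears in an earlier part of the paper):

```latex
\[
\left\{
\begin{array}{l}
\mathcal{D}_b \ \text{smooth, compact and regular},\\[2pt]
\mathcal{D}_b \ \text{Calabi--Yau of dimension } 2m \ (S_{\mathcal{D}_b} \simeq [2m]),\\[2pt]
\mathbf{C}[t]/(t^{m+1}) \ (\deg t = 2) \ \text{a homological unit for } \mathcal{D}_b.
\end{array}
\right.
\]
```

For m = 2 the homological unit is C[t]/(t^3), which is the case relevant to Proposition 3.3.10 and to section 5.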
Hodge numbers are invariant in smooth families, so that H^•(O_{X_0}) ≃ C[t]/(t^3) as a graded vector space. But the category D_0 is Calabi-Yau of dimension 4, hence the pairing H^2(O_{X_0}) × H^2(O_{X_0}) → H^4(O_{X_0}) given by the Yoneda product coincides with the Serre-duality pairing: it is non-degenerate. As dim H^2(O_{X_0}) = 1, we find an isomorphism of graded algebras H^•(O_{X_0}) ≃ C[t]/(t^3), with t in degree 2. This proves that D_0 is compact hyper-Kähler (with respect to its embedding in D^b(X_0)). ◭

One would obviously like to generalize this result to higher dimensions. However, the non-degeneracy of the Serre-duality pairing does not have such strong consequences in higher dimensions.

Non-commutative Hilbert schemes

In this section, we will be interested in the following question, which was asked to me by Misha Verbitsky:

Question 4.0.1 Let S be a K3 surface and n ≥ 1. For which subgroups G ⊂ S_n does the quotient S × ··· × S/G have a categorical crepant resolution of singularities which is a hyper-Kähler category?

The commutative version of this question has been solved by Verbitsky himself. Indeed, in [Ver00] he proves the following:

Theorem 4.0.2 Let S be a K3 surface and n ≥ 1. Let G be a subgroup of S_n such that S × ··· × S/G has a crepant resolution which is hyper-Kähler. Then G = S_n and the resolution is the Hilbert scheme of n points on S.

Note that the result actually proved by Verbitsky in [Ver00] is more general. He shows that if V is a symplectic vector space and G ⊂ Sp(V) is such that V//G admits a symplectic resolution, then G is generated by symplectic reflections. In the setting of Theorem 4.0.2, symplectic reflections are immediately seen to be transpositions. Furthermore, for the crepant resolution of S × ··· × S/G to be irreducible symplectic, we need each factor of S × ··· × S to be acted on non-trivially by an element of G. This easily shows that G = S_n.

We now state the answer to Question 4.0.1:

Theorem 4.0.3 Let S be a K3 surface and n ≥ 1. Let G ⊂ S_n act on S × ··· × S by permutations. The quotient S × ··· × S/G admits a categorical crepant resolution which is a hyper-Kähler category if and only if G is one of the following:
- G = S_n,
- G = A_n (the alternating group),
- n = 5 and G = F_5 ⋊ F_5^*,
- n = 6 and G = PGL_2(F_5),
- n = 9 and G = PGL_2(F_8),
- n = 9 and G = PGL_2(F_8) ⋊ Gal(F_8/F_2).
Furthermore, this categorical resolution is always non-commutative in the sense of Van den Bergh.

The categorical McKay correspondence [BKR01] implies that: Hence, in the case G = S_n, our result does not produce any new hyper-Kähler category. On the other hand, for G = A_n, F_5 ⋊ F_5^*, PGL_2(F_5), PGL_2(F_8), PGL_2(F_8) ⋊ Gal(F_8/F_2), it seems that Theorem 4.0.3 is the first instance of a result which connects hyper-Kähler geometry with the quotient spaces S × ··· × S/G.

Proof: ◮ We denote by Y the quotient S × ··· × S/G. By [Abu15], we know that D^b(Coh_G(S × ··· × S)) is a categorical strongly crepant resolution of singularities of Y. Furthermore, the Serre functor of the category D^b(Coh_G(S × ··· × S)) is the shift by 2n, and its homological unit is the algebra of G-invariants in H^•(O_{S × ··· × S}). By the Künneth formula, we have H^•(O_{S × ··· × S}) ≃ H^•(O_S) ⊗ ··· ⊗ H^•(O_S). Hence, the homological unit of D^b(Coh_G(S × ··· × S)) is C[t]/(t^{n+1}) if and only if, for all 1 ≤ k ≤ n, the symmetrized tensor Σ_{|I|=k} t_I (where t_I carries a factor t in each slot indexed by I and 1 elsewhere) is the only element (up to scalar) of degree 2k in H^•(O_S) ⊗ ··· ⊗ H^•(O_S) which is invariant under the action of G. This condition can be rephrased as saying that G acts transitively on P_k({1, ..., n}) for all 1 ≤ k ≤ n.
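The transitivity condition just stated is easy to verify by machine for the exceptional groups appearing in Theorem 4.0.3. The sketch below is my own illustration, not part of the paper: it checks the condition for PGL_2(F_5) acting on the 6 points of P^1(F_5). The choice of generators is an assumption; any generating set of Möbius transformations would do.

```python
from itertools import combinations

# Points of P^1(F_5): the five field elements plus the point at infinity.
INF = "inf"
POINTS = [0, 1, 2, 3, 4, INF]

def translate(x):              # x -> x + 1
    return INF if x == INF else (x + 1) % 5

def scale(x):                  # x -> 2x; 2 generates F_5^* and is a non-square,
    return INF if x == INF else (2 * x) % 5   # so this map lies outside PSL_2(F_5)

def invert(x):                 # x -> 1/x, exchanging 0 and infinity
    if x == INF:
        return 0
    if x == 0:
        return INF
    return pow(x, 3, 5)        # x^{-1} = x^3 in F_5^*, since x^4 = 1

GENERATORS = [translate, scale, invert]   # together they generate PGL_2(F_5)

def transitive_on_k_subsets(k):
    """Breadth-first search over the orbit of one k-subset of P^1(F_5)."""
    start = frozenset(POINTS[:k])
    orbit, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for g in GENERATORS:
            image = frozenset(g(x) for x in s)
            if image not in orbit:
                orbit.add(image)
                frontier.append(image)
    return len(orbit) == sum(1 for _ in combinations(POINTS, k))

for k in (1, 2, 3):
    print(f"transitive on {k}-subsets:", transitive_on_k_subsets(k))  # all True
```

By complementation, transitivity on k-subsets for k ≤ 3 settles all 1 ≤ k ≤ 5, matching the entry n = 6, G = PGL_2(F_5) of Theorem 4.0.3.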
Such groups have been classified by Beaumont and Peterson [BP55, LW65], and they are the following:
- G = S_n,
- G = A_n (the alternating group),
- n = 5 and G = F_5 ⋊ F_5^*,
- n = 6 and G = PGL_2(F_5),
- n = 9 and G = PGL_2(F_8),
- n = 9 and G = PGL_2(F_8) ⋊ Gal(F_8/F_2).

In [Abu15], we showed that if X is a smooth projective variety and G a reductive algebraic group acting linearly on X with finite stabilizers, then there exists a sheaf of algebras B on X//G such that D^b(Coh_G(X)) ≃ D^b(X//G, B). This result applies in the present setting and shows that D^b(Coh_G(S × ··· × S)) is a non-commutative crepant resolution of S × ··· × S/G in the sense of Van den Bergh which is a hyper-Kähler category of dimension 2n. ◭

If X is a smooth projective holomorphically symplectic variety, then the twisted Hochschild-Kostant-Rosenberg isomorphism shows that the Betti cohomology ring of X is isomorphic (as a ring!) to the Hochschild cohomology ring of D^b(X). On the other hand, the Betti cohomology ring of Hilb^[n](S) has been extensively studied, and many fascinating connections between hyper-Kähler geometry, representation theory and number theory have been discovered this way. We refer to the ICM talk of Göttsche for a nice overview of these connections [G02]. It would not be surprising if the study of the Hochschild cohomology rings of the categories appearing in Theorem 4.0.3 yielded new connections between these three topics. In particular, we feel it is worth asking the following questions:

We feel that the first question should not be too hard: it should just be a matter of checking that Ginzburg's definition [Gin] of symplectic algebras matches ours when the triangulated category under study is the derived category of coherent modules over a sheaf of finitely generated algebras. See [ACH]. The compatibility of this isomorphism (or a twisted version of it) with the cup product on both sides is still conjectural. Nonetheless, one can be confident that it will be proven soon. Hence, Question 4.0.5 basically boils down to computing the orbifold cohomology of the Deligne-Mumford stack [S × ··· × S//G] for all G appearing in Theorem 4.0.3. In the last two questions, e(D^b(Coh_{A_n}(S × ··· × S))) is the Euler number (that is, the alternating sum of the Hochschild numbers) of D^b(Coh_{A_n}(S × ··· × S)) and HH^•(T) is the Hochschild cohomology of T. If A_n is replaced by S_n, the answer to Questions 4.0.6 and 4.0.7 is known to be "yes". Furthermore, it is known that hyper-Kähler geometry plays an important role in the proof of these results in the S_n case [G02].

Non-commutative relative compactified Prymian

In this section, we study in detail a categorical strongly crepant resolution of a singular compactified Prymian. This singular compactified Prymian first appeared in the work of Markushevich and Tikhomirov [MT07]. We briefly recall their construction in the first subsection.

Markushevich-Tikhomirov's construction

Let X be a Del Pezzo surface of degree 2 obtained as a double cover of P^2 branched in a generic quartic curve B_0, let µ: X → P^2 be the double cover map and B = µ^{-1}(B_0) the ramification curve. Let ∆_0 be a generic curve from the linear system |−2K_X|, ρ: S → X the double cover branched in ∆_0 and ∆ = ρ^{-1}(∆_0). Then S is a K3 surface, and H = ρ^*(−K_X) is a degree 4 ample divisor on S. We will denote by ι (resp. τ) the Galois involution of the double cover µ (resp. ρ).
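A quick sanity check on the numerology of this setup, as a sketch of my own using only the projection formula and deg ρ = 2: since X is a Del Pezzo surface of degree 2, K_X^2 = 2, so

```latex
\[
H^{2} \;=\; \rho^{*}(-K_X)\cdot\rho^{*}(-K_X)
\;=\; \deg(\rho)\,K_X^{2} \;=\; 2\times 2 \;=\; 4,
\]
```

confirming that H is a degree 4 divisor on S. The same computation applied to the lines l_i introduced in the next paragraph gives ρ^*l_i · H = deg(ρ)(l_i · (−K_X)) = 2 × 1 = 2, which is why their preimages on S are conics.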
The plane quartic B_0 has 28 bitangent lines m_1, ..., m_28, and µ^{-1}(m_i) is the union of two rational curves l_i ∪ l′_i meeting in 2 points. The 56 curves l_i, l′_i are all the lines on X, that is, curves of degree 1 with respect to −K_X. Further, the curves C_i = ρ^{-1}(l_i) and C′_i = ρ^{-1}(l′_i) are conics on S, that is, curves of degree 2 with respect to H. Each pair C_i, C′_i meets in 4 points, thus forming a reducible curve of arithmetic genus 3 belonging to the linear system |H|. We assume furthermore that B_0 and ∆_0 are sufficiently generic. This implies that each line l_i meets only one of the two lines l_j, l′_j for j ≠ i. The following is Lemma 1 of [MT07]:

The moduli space M_{2m} is a singular irreducible holomorphically symplectic variety of dimension 6. It is singular in exactly 28 points, corresponding to the strictly semi-stable sheaves. Around each of these 28 singular points, the moduli space M_{2m} is locally analytically equivalent to the contraction of the zero section of Ω_{P^3} → P^3. By varying the polarization, one gets symplectic desingularizations of M_{2m} which are deformation equivalent to Hilb^3(S).

The idea of Markushevich and Tikhomirov is to study the fixed locus of a specific symplectic involution on M_{2m}, in the hope that it may provide a new hyper-Kähler manifold. Let j be the involution of M_{2m} defined as: We consider the involution κ = τ ∘ j. The involution κ is symplectic and its fixed locus is made of one four-dimensional component and 64 zero-dimensional components. The four-dimensional component, denoted by P_{2m}, is called the relative compactified Prymian of S by Tikhomirov and Markushevich, and they prove the following:

Theorem 5.1.3 (Theorem 3.4 of [MT07]) The variety P_{2m} is a singular irreducible holomorphically symplectic variety of dimension 4. It has exactly 28 singular points, corresponding to the sheaves. Around each of these 28 singular points, the Prymian P_{2m} is locally analytically equivalent to C^4/{±1}. The topological Euler number of P_{2m} is 268.

Since P_{2m} is locally equivalent to C^4/{±1} around its 28 isolated singular points, it has no crepant resolution. As a consequence, it is not possible to construct a hyper-Kähler manifold starting from P_{2m}. The singular variety P_{2m} is nevertheless studied in detail by Markushevich and Tikhomirov, and they prove, among other things, that it is birational to the quotient of Hilb^2(S) by a symplectic involution.

Strongly crepant resolution of the relative compactified Prymian of Markushevich-Tikhomirov

In this section, we construct a strongly crepant categorical resolution of P_{2m}. We study its Hochschild numbers and show that they satisfy Salamon's relation for Betti numbers of hyper-Kähler manifolds [Sal96]. We finally prove that this category cannot be a deformation of the derived category of a projective variety, giving a counter-example to a conjecture of Kuznetsov (Conjecture 5.8 of [Kuz16]).

Theorem 5.2.1 The Prymian P_{2m} admits a categorical strongly crepant resolution of singularities A_{P_{2m}} ⊂ D^b(P̃_{2m}) which is a hyper-Kähler category of dimension 4; its Hochschild cohomology numbers are hh^0 = hh^8 = 1, hh^2 = hh^6 = 16, hh^4 = 206, all the others being zero.

Proof: ◮ Let P̃_{2m} be the blow-up of P_{2m} along its 28 singular points. We denote by E_1, ..., E_28 the exceptional divisors of the blow-up. Example 7.1 of [Kuz08b] shows that there exists a semi-orthogonal decomposition: where A_{P_{2m}} is a categorical strongly crepant resolution of P_{2m}. Markushevich and Tikhomirov prove that P_{2m} is a singular irreducible holomorphically symplectic variety, so that ω_{P_{2m}} = O_{P_{2m}} and H^•(O_{P_{2m}}) = C[t]/(t^3), with t homogeneous of degree 2. By Theorem 3.2.4, we deduce that the category A_{P_{2m}} is hyper-Kähler of dimension 4 (with respect to its embedding in D^b(P̃_{2m})).
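As an aside on the local geometry invoked above, the claim that P_{2m} has no crepant resolution follows from a standard discrepancy computation, sketched here in my own words: the involution −id on C^4 has all four eigenvalues equal to −1 = e^{2πi·(1/2)}, so

```latex
\[
\mathrm{age}(-\mathrm{id}) \;=\; \tfrac12+\tfrac12+\tfrac12+\tfrac12 \;=\; 2 \;>\; 1,
\]
```

and by the Reid-Tai criterion the quotient singularity C^4/{±1} is terminal. Every exceptional divisor over a terminal point has strictly positive discrepancy, so no resolution of P_{2m} can be crepant.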
The Hodge numbers of P̃_{2m} are computed by Grégoire Menet in the appendix, and they are: By the Hochschild-Kostant-Rosenberg isomorphism, we have: We deduce that the Hochschild homology numbers of D^b(P̃_{2m}) are:

hh_0 = 262, hh_2 = hh_{−2} = 16, hh_4 = hh_{−4} = 1,

and the others are zero. By Corollary 7.5 of [Kuz09], we have a graded direct sum decomposition: We deduce that the Hochschild homology numbers of A_{P_{2m}} are:

hh_0 = 206, hh_2 = hh_{−2} = 16, hh_4 = hh_{−4} = 1,

and the others are zero. But the Serre functor of the category A_{P_{2m}} is the shift by 4, hence we get an isomorphism of graded vector spaces HH^•(A_{P_{2m}}) ≃ HH_{•−4}(A_{P_{2m}}). We then find that the Hochschild cohomology numbers of A_{P_{2m}} are as stated. ◭

We notice that the Hochschild cohomology numbers of A_{P_{2m}} satisfy the following relation:

hh^4 = 46 + 10 hh^2 − hh^3.

This relation is the four-dimensional case of the Salamon relation for Betti numbers of hyper-Kähler manifolds [Sal96]. It is very tempting to believe that this relation holds for all hyper-Kähler categories. Of course, it would be interesting to first prove that this formula holds for the Hochschild numbers of the hyper-Kähler categories exhibited in Theorem 4.0.3. Note that the construction of relative compactified Prymians has recently been generalized to arbitrary Enriques surfaces in [AFS15]. It is very likely that their construction will provide new examples of hyper-Kähler categories and that the Hochschild cohomology numbers of these categories will satisfy Conjecture 5.2.2.

We close this section with a discussion of a conjecture made by Kuznetsov in [Kuz16]:

Conjecture 5.2.3 ([Kuz16], Conjecture 5.8) Let X be a smooth projective variety of dimension n and A ⊂ D^b(X) a full admissible subcategory which is Calabi-Yau of dimension n. Then there exists a birational morphism X → X′ such that A ≃ D^b(X′).

We will prove that this conjecture is far from being true:

Proposition 5.2.4 The category A_{P_{2m}} is not equivalent to the derived category of any projective variety. In fact, a small deformation of A_{P_{2m}} is never equivalent to the derived category of a projective variety.

The deformation theory we use here is the one developed in section 3 of this paper.

Proof: ◮ Let D be a deformation of A_{P_{2m}}. This is the data of a smooth connected algebraic variety B and a smooth projective morphism p: X → B such that:
- X_0 = P̃_{2m},
- the category D is full admissible in D^b(X) and it is B-linear, with the property that E_0 := E ⊗_{O_{X ×_B X}} O_{X_0 × X_0} is the kernel representing the projection functor D^b(X_0) → A_{P_{2m}}.
As in section 3, for any b ∈ B, we denote by D_b the full admissible subcategory of D^b(X_b) whose projection functor is given by the kernel E ⊗_{O_{X ×_B X}} O_{X_b × X_b}. Since O_{X_0} ∈ A_{P_{2m}} and C(x_0) ∈ A_{P_{2m}} for generic x_0 ∈ X_0, we know by Lemmas 3.3.5 and 3.3.6 that there exists an open subset 0 ∈ U ⊂ B such that:
- for all b ∈ U, we have O_{X_b} ∈ D_b,
- for all b ∈ U, we have C(x_b) ∈ D_b for generic x_b ∈ X_b.
Furthermore, up to shrinking U, Proposition 3.3.7 shows that for all b ∈ U, the category D_b is hyper-Kähler of dimension 4 (with respect to its embedding in D^b(X_b)). Assume that there exists b_0 ∈ B such that D_{b_0} ≃ D^b(Y) for some projective Y. We immediately see that Y is smooth projective of dimension 4 with trivial canonical bundle. By hypothesis, the homological unit of D_{b_0} with respect to its embedding in D^b(X_{b_0}) is C[t]/(t^3). Since O_{X_{b_0}} ∈ D_{b_0}, we have:

H^•(O_{X_{b_0}}) ≃ C[t]/(t^3),

with t in degree 2.
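Stepping back for a moment to the Salamon relation displayed earlier in this section: it can be checked by direct arithmetic against the Hochschild numbers computed above, translating homology to cohomology through the Serre-functor isomorphism HH^• ≃ HH_{•−4}:

```latex
\[
\mathrm{hh}^4 = \mathrm{hh}_0 = 206, \qquad
\mathrm{hh}^2 = \mathrm{hh}_{-2} = 16, \qquad
\mathrm{hh}^3 = 0,
\]
\[
46 + 10\,\mathrm{hh}^2 - \mathrm{hh}^3 \;=\; 46 + 160 - 0 \;=\; 206 \;=\; \mathrm{hh}^4 .
\]
```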
By Theorem 3.1.3, we have H^•(O_Y) ≃ C[t]/(t^3), with t in degree 2. By Proposition 3.2.3, we deduce that Y is hyper-Kähler of dimension 4. Theorem 3.3.4 and Theorem 5.2.1 imply that the Hochschild numbers of Y are:

hh_0 = 206, hh_2 = hh_{−2} = 16, hh_4 = hh_{−4} = 1.

But Y being holomorphically symplectic, the Hochschild-Kostant-Rosenberg isomorphism implies that the Betti numbers of Y are b_0 = b_8 = 1, b_2 = b_6 = 16, b_4 = 206, the odd ones being zero. This is a contradiction. Indeed, it is proved in [Gua01] that the second Betti number of a hyper-Kähler fourfold is either less than 8 or equal to 23, whereas here b_2(Y) = 16. This concludes the proof that a small deformation of A_{P_{2m}} cannot be equivalent to the derived category of a projective variety. ◭

If one assumes that the deformation of A_{P_{2m}} is Calabi-Yau of dimension 4 and contains the structure sheaf and the skyscraper sheaf of a generic point, then one has a stronger statement than Proposition 5.2.4:

Proposition 5.2.5 Let D be a deformation of A_{P_{2m}} inside D^b(X) over B, for some smooth projective p: X → B. Assume that O_{X_b} ∈ D_b, that C(x_b) ∈ D_b for generic x_b ∈ X_b, and that D_b is Calabi-Yau of dimension 4, for all b ∈ B. Then, for all b ∈ B, the category D_b is never equivalent to the derived category of a projective variety.

Proof: ◮ Assume that there exists b_0 ∈ B such that D_{b_0} ≃ D^b(Y), where Y is projective. By hypothesis, this immediately implies that Y is smooth projective of dimension 4 with trivial canonical bundle. By Proposition 3.3.7, the set of b ∈ B such that D_b is hyper-Kähler of dimension 4 (with respect to its embedding in D^b(X_b)) is open (and non-empty). Hence, up to shrinking B, one can assume that for all b ≠ b_0, the category D_b is hyper-Kähler of dimension 4 (with respect to its embedding in D^b(X_b)). Since D_{b_0} is Calabi-Yau of dimension 4 and contains O_{X_{b_0}}, we can apply Proposition 3.3.10, and we find that D_{b_0} is hyper-Kähler of dimension 4 (with respect to its embedding in D^b(X_{b_0})). One finishes the proof exactly as in the proof of Proposition 5.2.4 above. ◭

Of course, in view of Conjecture 3.3.8, one would expect that the hypotheses O_{X_b} ∈ D_b, C(x_b) ∈ D_b for generic x_b ∈ X_b, and D_b Calabi-Yau of dimension 4, for all b ∈ B, are superfluous in the statement of Proposition 5.2.5. If this expectation is correct, then any deformation of A_{P_{2m}} would never be equivalent to the derived category of a projective variety. In particular, this would imply that the moduli space of hyper-Kähler categories of dimension 4 (if such an object exists) contains a component which is purely non-commutative!
Enzyme Kinetics and Interaction Studies for Human JNK1β1 and Substrates Activating Transcription Factor 2 (ATF2) and c-Jun N-terminal kinase (c-Jun)*

Background: JNK1β1 has a role in diabetes, and unique substrate interactions could affect function.
Results: JNK1β1 showed lower affinity and capacity to phosphorylate ATF2 than other variants. ATP binding and JNK1β1 activation affected substrate interaction rates and affinities.
Conclusion: JNK1β1 activation and ATP binding affect interactions with substrates.
Significance: First kinetic and biochemical characterization of a β JNK splice variant.

c-Jun N-terminal kinase (JNK) is a stress signal transducer linked to cell death and survival. JNK1 has been implicated in obesity, glucose intolerance, and insulin resistance. In this study we report the kinetic mechanism for JNK1β1 with transcription factors ATF2 and c-Jun, along with interaction kinetics for these substrates. JNK1β1 followed a random sequential mechanism, forming a ternary complex between JNK, substrate, and ATP. The Km for ATF2 and c-Jun was 1.1 and 2.8 μM, respectively. Inhibition studies using adenosine 5′-(β,γ-methylenetriphosphate) and a peptide derived from JNK-interacting protein 1 (JIP1) supported the proposed kinetic mechanism. Biolayer interferometry studies showed that unphosphorylated JNK1β1 bound to ATF2 with affinity similar to that for c-Jun (KD = 2.60 ± 0.34 versus 1.00 ± 0.35 μM, respectively). The presence of ATP increased the affinity of unphosphorylated JNK1β1 for ATF2 and c-Jun to 0.80 ± 0.04 and 0.65 ± 0.07 μM, respectively. Phosphorylation of JNK1β1 decreased the affinity of the kinase for ATF2 to 11.0 ± 1.1 μM and for c-Jun to 17.0 ± 7.5 μM in the absence of ATP. The presence of ATP caused a shift in the KD of the active kinase to 1.70 ± 0.25 μM for ATF2 and 3.50 ± 0.95 μM for c-Jun. These results are the first kinetic and biochemical characterization of JNK1β1; they uncover some of the differences in the enzymatic activity of JNK1β1 compared with other variants and suggest that ATP binding or JNK phosphorylation could induce changes in the interactions with substrates, activators, and regulatory proteins.

c-Jun N-terminal kinase (JNK) is a member of the mitogen-activated protein kinase (MAPK) signaling cascade. It is activated by environmental and cellular stresses such as UV light (1,2), γ-irradiation (3), osmotic shock, cytokines, and oxidative stress (4,5). Phosphorylation of the Thr183 and Tyr185 residues of the Thr-Pro-Tyr motif in the activation loop of JNK activates this kinase, triggering responses like cell death, among others (6). There are three genes that encode JNK isoforms: jnk1 (2), jnk2 (7,8), and jnk3 (9). These genes can undergo alternative splicing to produce 10 variants that differ in subdomain IX (α and β) and in the C-terminal end, which is shorter in the type 1 variants and has an additional 43 amino acids in the type 2 variants. JNK1 and JNK2 can each be spliced into four variants, whereas JNK3 is spliced into two variants (10). Studies using knock-out or knockdown cells and organisms, as well as the use of JNK-specific inhibitors, have suggested that JNK isoforms display both redundant and non-redundant functions. For example, phosphorylation of the most well known substrate of JNK, c-Jun, seems to be equally accomplished by all isoforms upon activation by certain stimuli (11); JNK1 is directly involved in insulin resistance by phosphorylating Ser307 of insulin receptor substrate 1 (IRS1) (12,13) and is required for normal brain cytoarchitecture (14) and T cell receptor-mediated TH cell proliferation, apoptosis, and differentiation (15).
JNK1, and not JNK2, is preferentially activated during TNFα-induced cell death (16). JNK2 is a negative regulator of cell proliferation, whereas JNK1 seems to be a positive regulator (17). Finally, JNK3 deficiency protects against 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine-induced dopaminergic cell loss (18), kainic acid-induced seizures (19), stroke (20), oxidative stress (21,22), nerve growth factor deprivation (21), and β-amyloid-induced neuronal death (23). Conversely, very little is known regarding the biochemical and functional differences of JNK splice variants; hence the need to address this question. First, Gupta et al. (10) qualitatively demonstrated differences in the capacities of JNK variants to bind and activate the transcription factors c-Jun, ATF2, and Elk-1 using an immune complex kinase assay. Among JNK1 variants, the β forms (JNK1β1 and JNK1β2) showed the highest level of c-Jun and ATF2 binding. All JNK2 variants were capable of binding c-Jun and ATF2, but the −1 variants, JNK2α1 and JNK2β1, showed the highest c-Jun and ATF2 binding, respectively. α forms of JNK2 were better at binding c-Jun, whereas β forms showed higher affinity for ATF2. Both JNK3 variants showed a comparably low capacity to bind the transcription factors assayed by the authors. RNA and protein analysis of various human cell lines has shown that the 54- and 46-kDa bands observed in Western blots correspond mostly to JNK2 and JNK1, respectively (24). Interestingly, the most abundant splice variant of the JNK1 isoform is JNK1β1, whereas JNK2α2 is the most prevalent of the JNK2 splice variants (24). Lastly, Yang et al. (25) showed that higher expression levels of the scaffold protein JIP1 in tissues and cell lines correlated with higher levels and increased stability toward degradation of the short (46-kDa) but not the long (54-kDa) variants of JNK. Moreover, using Western blots, these authors showed that JIP1-inducible HEK-293 cells had 4-fold higher protein expression levels of JNK1β1 than of any other splice variant. Couple this with the finding that a missense mutation in JIP1 (S59N), which reduced its ability to inhibit JNK, segregated with type II diabetes mellitus in one French family (26), and there is genetic and biochemical evidence that JNK1, and perhaps JNK1β1, plays a significant role in type II diabetes mellitus. Hence the relevance of an exhaustive biochemical characterization of JNK1β1, which would not only contribute to the understanding of the differences between splice variants but may also facilitate the design of specific small molecule inhibitors for therapeutic purposes. To date, only three of the 10 JNK splice variants have been biochemically characterized: JNK1α1 (27), JNK3α1 (28), and JNK2α2 (29). This study presents an in vitro characterization of the JNK1β1 kinetic mechanism and the kinetics of the interactions with the substrates c-Jun and ATF2; it includes the first exhaustive kinetic characterization of JNK-mediated phosphorylation of the most well known substrate of JNK, c-Jun. JNK1β1 phosphorylated ATF2 and c-Jun via the formation of a ternary complex with noninteracting binding sites for the protein substrate and ATP. JNK1β1 showed similar affinity for ATF2 and c-Jun. Both the phosphorylation state of the kinase and the presence of ATP significantly affected the kinetics of association and dissociation of JNK1β1 and its substrates, and therefore the KD for the interactions. Our results reveal some distinctive features of JNK1β1 and support the idea of different roles for JNK splice variants.
EXPERIMENTAL PROCEDURES

Reagents and Supplies-All reagents, chemical compounds, and materials used were purchased from Fisher Scientific, Sigma, Invitrogen, or Pierce, unless a different company is specified.

Protein Production and Purification-Expression and purification of the Bioease-FLAG tag ATF2 construct (2-115, B-F-ATF2) has been previously reported (27,28). cDNA of full-length human JNK1β1(1-384) (EC 2.7.11.24) was cloned into a modified pET19 vector using BamHI and EcoRI restriction enzymes. This plasmid was used to generate N-terminal His10-tagged full-length JNK1β1. The N-terminal portion of c-Jun(1-89) was cloned using the Gateway technology into the vector pDEST544 developed by D. Esposito (Addgene). The latter plasmid allowed the bacterial expression of the His6-tagged NusA c-Jun fusion protein (1-89, H-N-cJun). Another plasmid, pEL124-544, for expression of His6-tagged NusA (H-N) followed by the 8-amino acid attB1 peptide and a stop codon (a kind gift of Dr. Esposito), was used to produce a negative control substrate for JNK activity and binding assays. Escherichia coli strain BL21(DE3) was used for protein production in Luria broth, and induction of expression was done during log phase with 0.25 mM isopropyl β-D-thiogalactopyranoside for 3-5 h at 30°C. Cells were collected by centrifugation and lysed on ice with B-Per reagent containing lysozyme and benzonase (Novagen). Lysates were clarified by centrifugation at 20,000 × g for 60 min at 4°C. Supernatants were filtered and loaded onto a 15-ml HisPur cobalt column. The column equilibration buffer was 50 mM sodium phosphate, pH 8, 300 mM NaCl, 5 mM imidazole, and the elution buffer was 50 mM sodium phosphate, pH 8, 300 mM NaCl, 250 mM imidazole. Quality and purity of the proteins were assessed by SDS-PAGE and Coomassie Blue stain; only samples with estimated purity higher than 90% were used. Collected proteins were concentrated in 25 mM HEPES, pH 7.4, 150 mM NaCl, 0.1 mM EGTA, 6.25% glycerol, 1 mM DTT (dithiothreitol), 0.03% Brij-35 buffer using Amicon Ultra-15 centrifugal filter units, aliquoted, and stored at −80°C.

JNK1β1 Activation-Active MKK4 and MKK7 (Millipore) were used for in vitro activation of JNK1β1 following the method of Lisnock et al. (30) with slight modifications: the final concentration of JNK1β1 was 500 nM, whereas MKK7 was added at 100 nM 1 h prior to the addition of 1 nM MKK4. Total reaction incubation time was 2 h at 30°C. Activation achieved ranged from 70 to 90% based on MS analysis.

JNK1β1 Kinetic Assays-JNK activity was determined as incorporation of 33P from [γ-33P]ATP into the recombinant proteins B-F-ATF2 or H-N-cJun. Reactions were carried out in a volume of 120 μl containing 25 mM HEPES, pH 7.4, 10 mM MgCl2, 1 mM DTT, 20 mM β-glycerophosphate, 0.1 mM Na3VO4, 0.5 mg/ml of bovine serum albumin (BSA), 1.5 μCi of [γ-33P]ATP (3000 Ci/mmol), 0.1-32 μM ATP, 0.05-4 μM recombinant substrate, and 2 nM active JNK1β1. Reactions were carried out at 30°C in 96-well plates and stopped by adding an equal volume of 100 mM H3PO4 after 45-60 min. Fifty microliters of each reaction were transferred to pre-wet Immobilon-P MultiScreen HTS IP plates (Millipore) in triplicate or quadruplicate. Vacuum was applied to filter the samples, and the wells were washed exhaustively with 10 mM HEPES, pH 7.4, 100 mM NaCl, 25 mM EDTA. Finally, a wash with 50% ethanol was done to minimize drying time prior to the addition of 100 μl of Ultima Gold XR liquid scintillation counting mixture (PerkinElmer Life Sciences).
Plates were read in a TopCount microplate scintillation counter (Packard Instrument Co.). Initial velocities of the reactions were fitted with two-substrate equations using GraFit version 5.0.13 (31) as published before (27,28).

JNK1β1 Inhibition Assays-Inhibition of JNK1β1 was done with the non-hydrolyzable ATP analog AMP-PCP (5-400 μM) and the JIP1 δ-domain peptide 153-RPKRPTTLNLF-163, JIP1 pep (32) (5-400 nM, NeoPeptide). These assays were performed keeping the concentration of one JNK substrate constant while the concentration of the other substrate and the inhibitor were varied. When ATP was kept constant, its concentration was 10-20 μM; 1 μM was chosen for ATF2 and H-N-cJun. Data were analyzed as previously published (27,28).

c-Jun in Vitro Biotinylation-H-N-cJun (500 μg/ml) was biotinylated using EZ-Link NHS-LC-LC-biotin in 1× PBS using a 5:1 molar ratio of biotin reagent:protein for 30 min at RT, following the fortéBIO suggested protocol. Biotinylated H-N-cJun was separated from the biotinylation reaction reagents by D-Salt dextran desalting columns. Biotin incorporation was confirmed by Western blot analysis of the samples using IRDye 680 streptavidin (Licor) to visualize the labeled protein.

Biolayer Interferometry (BLI)-A fortéBIO Octet RED instrument was used to study the kinetics of JNK1β1 binding to its substrates B-F-ATF2 and H-N-cJun. All the assays were performed with agitation set to 1000 rpm in fortéBIO 10× kinetic buffer to minimize nonspecific interactions. To this assay buffer, 10 mM MgCl2 and 2 mM DTT were added to stabilize the JNK-substrate interactions. The final volume for all the solutions was 200 μl/well. Assays were performed at 30°C in solid black 96-well plates (Greiner Bio-One). 5-25 μg/ml of biotinylated ligand (B-F-ATF2 or H-N-cJun) in 10× kinetic buffer was used to load the ligand on the surface of streptavidin (SA) biosensors for 200-300 s. Typical capture levels were between 0.5 and 3 nm, and variability within a row of eight tips did not exceed 0.1 nm. A 500-800-s biosensor washing step was applied prior to the analysis of the association of the ligand on the biosensor to the analyte in solution (0.15-10 μM JNK1β1, unphosphorylated or phosphorylated) for 100-1000 s. Finally, the dissociation of the interaction was followed for 100-1000 s. Dissociation wells were used only once to ensure buffer potency. Correction of any systematic baseline drift was done by subtracting the shift recorded for a sensor loaded with ligand but incubated with no analyte. Data analysis and curve fitting were done using Octet software version 7.0. Experimental data were fitted with the binding equations available for 1:1 interaction, 2:1 heterogeneous ligand (HL), 1:2 bivalent analyte, and mass transport binding models (Fig. 1, a-d, respectively). Global analyses of the complete data sets assuming binding was reversible (full dissociation) were done using nonlinear least squares fitting. Hence, a single set of binding parameters was obtained simultaneously for all the analyte concentrations in every experiment. Additionally, steady-state kinetic analyses were done for every data set to calculate the KD using the estimated response at equilibrium for each analyte concentration rather than the kon and koff values. All the experiments were repeated at least twice, and the results obtained were similar. The data reported here correspond to the results of one representative experiment.
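The two-substrate fitting mentioned above (initial velocities fitted with two-substrate equations) amounts to nonlinear regression against the sequential, ternary-complex rate law. Below is a minimal sketch in Python, assuming the standard rapid-equilibrium sequential model; the concentration grids and "observed" velocities are synthetic placeholders, not data from this study, and the actual analysis in the paper used GraFit:

```python
import numpy as np
from scipy.optimize import curve_fit

def sequential_rate(X, Vmax, Ka, Kb, Kia):
    """Rapid-equilibrium sequential (ternary-complex) bi-substrate rate law:
    v = Vmax*A*B / (Kia*Kb + Kb*A + Ka*B + A*B)."""
    A, B = X
    return Vmax * A * B / (Kia * Kb + Kb * A + Ka * B + A * B)

# Synthetic two-substrate grid: ATP (A) crossed with protein substrate (B), in uM.
A = np.tile([0.5, 1, 2, 4, 8, 16, 32], 4).astype(float)
B = np.repeat([0.25, 0.5, 1.0, 2.0], 7)
rng = np.random.default_rng(0)
v_obs = sequential_rate((A, B), 2.2, 4.0, 1.1, 4.0) * (1 + 0.05 * rng.standard_normal(A.size))

popt, _ = curve_fit(sequential_rate, (A, B), v_obs, p0=[1, 1, 1, 1])
print(dict(zip(["Vmax", "Ka", "Kb", "Kia"], np.round(popt, 2))))
```

For noninteracting substrate sites, the fitted Kia should come out close to the corresponding Km (Ka in this parameterization), which is the diagnostic applied to the real data in the kinetic analyses reported below.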
Statistical Analyses-All the enzyme kinetic and inhibition data were evaluated using the F test of the GraFit version 5.0.13 software to compare the goodness of the different fitting analyses; χ2 values were below 2 in all the experiments. Goodness of fit for the interferometry data was assessed by evaluation of the residual plots and of the χ2 and R2 values generated for all the fitting analyses.

RESULTS

High Level of c-Jun Fusion Protein Expression in E. coli-To date, no detailed kinetic characterization of any JNK isoform with c-Jun has been published, and this may be because of the lack of reports of high level expression and purification of c-Jun. To address this shortcoming, we attempted the expression and purification of numerous c-Jun constructs in E. coli. High levels of expression and increased solubility of c-Jun(1-89) were achieved by the construction of the N-terminal His-tagged NusA recombinant fusion protein (Fig. 2a). The yield of purified protein was typically near 10 mg/liter at >90% purity as assessed by SDS-PAGE (Fig. 2b). It was decided not to proteolytically cleave the NusA solubility tag off c-Jun based on the fact that, in our hands, numerous attempts to express high levels of soluble c-Jun or truncations of this protein with no tags have been only partly successful due to low yields, the formation of insoluble aggregates, or protein sticking to membranes during concentration.

Kinetic Analyses of JNK1β1 Activity-The kinetic mechanisms of JNK1β1 phosphorylation of B-F-ATF2 and H-N-cJun were assessed by varying the concentrations of the recombinant substrates and ATP. Nonlinear regression analysis of the data using the two-substrate equations presented in Ember et al. (27,28) showed that the mechanism of phosphorylation of the substrates involved ternary complex formation with noninteracting substrate binding sites. This also suggested a sequential kinetic mechanism, i.e. all the substrates bind to the enzyme before products are formed. A representative example of the graphs showing the two-substrate profiles and double-reciprocal plots for B-F-ATF2 and H-N-cJun phosphorylation by JNK1β1 is given in Fig. 3, a-d. Table 1 presents the steady-state kinetic parameters for the substrate phosphorylation reactions studied. To determine whether the kinetic mechanism was ordered or random with respect to the substrates, the inhibition of B-F-ATF2 and H-N-cJun phosphorylation was studied using the ATP-competitive inhibitor AMP-PCP and an 11-mer peptide consisting of the JIP1 δ-domain (32). Table 2 presents the inhibition parameters determined for the reactions studied, and supplemental Figs. S1 and S2 show representative data for all the inhibition assays. As expected, AMP-PCP was a competitive inhibitor against ATP and a pure noncompetitive inhibitor against B-F-ATF2 and H-N-cJun. On the other hand, we found that JIP1 pep was a competitive inhibitor for B-F-ATF2 and H-N-cJun, and a pure noncompetitive inhibitor against ATP. Best fits for the experimental data were assessed by comparison of the mean ± S.E., nonlinear regression analysis, and F-tests for competitive, pure noncompetitive, mixed noncompetitive, and uncompetitive inhibition fits. The inhibition experiment results support a kinetic mechanism with random sequential addition of substrates and either random or ordered release of products (Fig. 3e).
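Assigning the inhibition modes reported in Table 2 comes down to fitting each candidate rate law and asking whether extra parameters are statistically justified; the paper does this in GraFit with F-tests. Below is a sketch of the three classical single-inhibitor models, with a crude residual-sum-of-squares comparison standing in for the proper F test; all concentrations and constants in the demonstration are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def competitive(X, Vmax, Km, Kis):
    """v = Vmax*S / (Km*(1 + I/Kis) + S)"""
    S, I = X
    return Vmax * S / (Km * (1 + I / Kis) + S)

def pure_noncompetitive(X, Vmax, Km, Ki):
    """v = Vmax*S / ((Km + S)*(1 + I/Ki))"""
    S, I = X
    return Vmax * S / ((Km + S) * (1 + I / Ki))

def uncompetitive(X, Vmax, Km, Kii):
    """v = Vmax*S / (Km + S*(1 + I/Kii))"""
    S, I = X
    return Vmax * S / (Km + S * (1 + I / Kii))

MODELS = {"competitive": competitive,
          "pure noncompetitive": pure_noncompetitive,
          "uncompetitive": uncompetitive}

def best_model(S, I, v):
    """Fit all three models; return the name with the lowest residual
    sum of squares (a real analysis would use an F test)."""
    scores = {}
    for name, f in MODELS.items():
        p, _ = curve_fit(f, (S, I), v, p0=[1.0, 1.0, 1.0], maxfev=20000)
        scores[name] = np.sum((v - f((S, I), *p)) ** 2)
    return min(scores, key=scores.get)

# Synthetic demonstration: competitive inhibition, Km = 1.1 uM, Kis = 0.078 uM.
S = np.tile([0.25, 0.5, 1.0, 2.0], 3)    # substrate, uM
I = np.repeat([0.0, 0.05, 0.2], 4)       # inhibitor, uM
v = competitive((S, I), 2.2, 1.1, 0.078)
print(best_model(S, I, v))               # -> "competitive"
```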
Studies of the enzymatic reaction in the reverse direction are not feasible for JNK, and a comprehensive product inhibition analysis could only be carried out if phosphorylated ATF2 and c-Jun were available. Nonetheless, we performed a partial characterization of product inhibition kinetics by analyzing the effects of ADP on the phosphorylation of B-F-ATF2 at constant or variable concentrations of ATP, and we observed that ADP was competitive versus ATP, as expected, and had minimal inhibition of B-F-ATF2, also as expected, suggesting that the release of the products could be either random or ordered. To ensure that phosphorylation of the H-N-cJun construct was occurring in the c-Jun portion of the fusion protein, we performed a Western blot analysis of H-N-cJun samples treated with active JNK1β1 in the presence of ATP using an anti-c-Jun phospho-Ser73 antibody (Cell Signaling). A band for phosphorylated c-Jun was only observed in those samples containing the protein substrate, active JNK1β1, and ATP (supplemental Fig. S3a). Additionally, we showed that His-tagged NusA (H-N), a construct lacking c-Jun(1-89), does not behave as a substrate for JNK1β1 in enzymatic assays (supplemental Fig. S3b). These experiments allowed us to confirm that H-N-cJun was effectively used as a substrate by JNK1β1. Interestingly, activity assays carried out with ATF2 or c-Jun concentrations above 2 μM showed a partial inhibition of JNK1β1 activity; consequently, it was not possible to obtain good quality data using substrate concentrations higher than 2 μM.

Rates of Interactions between Substrates and Active or Inactive JNK1β1-Kinetic analyses of the interactions between JNK1β1 and B-F-ATF2 or H-N-cJun were accomplished using BLI. The objective of these studies was to determine the impact of the presence of ATP and of the phosphorylation state of the kinase on the association/dissociation rate constants and the equilibrium dissociation constants of the JNK1β1 interaction with B-F-ATF2 or H-N-cJun. All the binding curves in Figs. 4 and 5 showed initial very short and fast association/dissociation steps, followed by slower and much longer association/dissociation steps. Fig. 4 shows representative data for both the association and dissociation phases of the curves obtained for JNK1β1 binding to immobilized B-F-ATF2 in the absence and presence of ATP for both the inactive and the active forms of the kinase. Similarly, Fig. 5 shows the same type of data for the interaction of JNK1β1 and immobilized H-N-cJun (experimental data are represented by black lines and curve fitting by gray lines). Comparison of the interaction of B-F-ATF2 or H-N-cJun with inactive (unphosphorylated) JNK1β1 in the absence (Figs. 4a(i) and 5a(i)) and the presence of 200 μM ATP (Figs. 4b(i) and 5b(i)) showed that the association and dissociation were much more rapid in the presence of ATP, and the interaction came to equilibrium much more quickly than in the absence of ATP. Phosphorylation of JNK1β1 seemed to have an effect similar to the presence of ATP, based on the kinetics of the interaction between active (phosphorylated) JNK1β1 and B-F-ATF2 or H-N-cJun (Figs. 4c(i) and 5c(i)) as compared with the inactive (unphosphorylated) kinase (Figs. 4a(i) and 5a(i)). ATP-dependent effects were also observed in the binding of active JNK1β1 and B-F-ATF2 (Fig. 4d(i)) as compared with the binding of the same ligand-analyte pair in the absence of ATP (Fig. 4c(i)).
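The biphasic, fast-then-slow curves just described are what the 2:1 heterogeneous-ligand (HL) model used for the fits reported below is built to capture: the sensorgram is treated as the sum of two independent 1:1 binding events. The sketch below is my own rendering of that standard model, not Octet's internal implementation, with illustrative rate constants chosen only to reproduce the shape:

```python
import numpy as np

def hl_association(t, C, kon1, koff1, Rmax1, kon2, koff2, Rmax2):
    """2:1 heterogeneous-ligand association phase: sum of two independent
    1:1 exponentials, one per immobilized-ligand population."""
    kobs1 = kon1 * C + koff1
    kobs2 = kon2 * C + koff2
    req1 = Rmax1 * kon1 * C / kobs1   # equilibrium response, population 1
    req2 = Rmax2 * kon2 * C / kobs2   # equilibrium response, population 2
    return req1 * (1 - np.exp(-kobs1 * t)) + req2 * (1 - np.exp(-kobs2 * t))

def hl_dissociation(t, r1_end, koff1, r2_end, koff2):
    """Dissociation phase, starting from the per-population responses
    reached at the end of association."""
    return r1_end * np.exp(-koff1 * t) + r2_end * np.exp(-koff2 * t)

# Well-separated rate pairs give the fast-then-slow, biphasic shape
# described in the text (units arbitrary: C in uM, t in s).
t = np.linspace(0, 600, 601)
sensorgram = hl_association(t, C=1.0, kon1=0.5, koff1=0.05, Rmax1=0.3,
                            kon2=0.005, koff2=0.0005, Rmax2=0.7)
```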
On the other hand, the presence of ATP seemed to have little effect on the kinetics of the interaction between active JNK1β1 and H-N-cJun (Fig. 5, c(i) and d(i)). Interestingly, the signal intensity during the dissociation phase of the interaction of phosphorylated JNK1β1 and immobilized B-F-ATF2 or H-N-cJun approached baseline much more rapidly than in the case of the unphosphorylated kinase, indicating an almost complete or full dissociation of the active kinase from the substrate during the experiment (compare Figs. 4a(i) and 5a(i) with 4c(i) and 5c(i)). Nonlinear least squares fitting of the experimental data generated best fits using a 2:1 HL model, which assumes the existence of two independent ligand binding sites (Fig. 1b); hence the two values shown in Tables 3 and 4 for each kinetic parameter describing the interactions studied. Residual plots confirmed the goodness of the fits (supplemental Figs. S4 and S5). Only a minor deviation from the experimental data was observed for the initial fast step of both the association and dissociation phases; however, R2 values were above 0.9 and χ2 values below 2.0 for all the fits. Additionally, equilibrium dissociation constants were obtained from the steady-state analyses (response at equilibrium plotted as a function of analyte concentration) presented in Figs. 4, a(ii)-d(ii), and 5, a(ii)-d(ii). A single steady-state equilibrium dissociation constant, KD, is shown in Tables 3 and 4 for every JNK1β1 and B-F-ATF2 or H-N-cJun interaction studied. It is interesting to note that the steady-state KD for B-F-ATF2 was 4.2-fold higher for active JNK1β1 as compared with inactive JNK1β1 in the absence of ATP, whereas the steady-state KD for the active enzyme plus ATP was 6.5-fold lower than for active JNK1β1 without ATP (Table 3). Showing a similar trend, the affinity for H-N-cJun was 17-fold lower for active JNK1β1 than for inactive JNK1β1 in the absence of ATP, whereas the steady-state KD value for active JNK1β1 plus ATP was 4.8-fold lower than for active JNK1β1 without ATP (Table 4). To investigate whether JNK1β1 bound specifically to the c-Jun portion of the fusion protein H-N-cJun, we biotinylated and immobilized the His6-tagged NusA construct lacking c-Jun(1-89) and tested its binding to the kinase (supplemental Fig. S3c). No binding was observed between His6-tagged NusA and inactive JNK1β1 at concentrations up to 2.5 μM. Finally, because the active JNK1β1 used in these binding assays was prepared by treatment with active MKK4 and MKK7, and the two upstream activators were not separated from the reaction mixture prior to the binding studies, a control experiment using the same activation reaction conditions but in the absence of JNK1β1 was done. Serial dilutions of the active MKK4 and MKK7 solution showed no significant interaction with immobilized B-F-ATF2 (data not shown), indicating that the observed alterations of the binding curves upon activation of JNK1β1 are likely due to the phosphorylation state of the kinase and not to the presence of the upstream activators.

DISCUSSION

Enzyme Kinetics Characterization of JNK1β1-Both in vitro and in vivo studies have shown that JNK1 is associated with insulin resistance (12,13), normal brain cytoarchitecture (14), TNFα-induced cell death (16), and T cell receptor-mediated TH cell proliferation, apoptosis, and differentiation (15). An exhaustive kinetic analysis of the substrate preference and activity of JNK1 would undoubtedly contribute to the understanding of the differences between JNK variants and contribute to the design of specific isoform-selective inhibitors of JNK, which ultimately is one of our goals.
Here we present the first study of a β variant of JNK, which could contribute to establishing differences and similarities between variants and the roles of the different subdomains of the protein. The initial velocity studies reported here suggested a random sequential kinetic mechanism for JNK1β1, forming a ternary complex with its substrates, i.e. JNK1β1-ATF2-ATP or JNK1β1-cJun-ATP. This was deduced from the analysis of the converging lines obtained for the double-reciprocal Lineweaver-Burk plots of the two-substrate profiles (Fig. 3, b and d), which corresponded to a ternary and sequential rather than a ping-pong mechanism, the latter being characterized by a plot with parallel lines (27,33). Additionally, the values of Km and Kia were virtually identical for the reactions done varying either the concentration of ATP or the concentration of protein substrate (B-F-ATF2 or H-N-cJun). These results indicated similar affinities of JNK1β1 for one substrate whether the second substrate was bound to its corresponding binding site or absent (27,33). Based on these results, it is possible to suggest that JNK1β1 has noninteracting substrate binding sites, which is in agreement with previously published data for JNK1α1 (27), JNK3α1 (28), and JNK2α2 (29). We report here a Km value of 1.1 ± 0.4 μM for B-F-ATF2, which is about 4.6-fold higher than the value reported for JNK1α1 (0.24 ± 0.01 μM) (27), almost identical to the value reported for JNK2α2 (0.8 ± 0.3 μM) (29), and ~6.9-fold higher than the Km value published for JNK3α1 (0.16 ± 0.01 μM) (28). Because Km roughly reflects the affinity of an enzyme for its substrate under certain conditions (33), it can be suggested that JNK1β1 binds slightly less tightly to ATF2 than JNK1α1 and JNK3α1, whereas it practically matches the JNK2α2 affinity for the substrate. To date, only one report has been published comparing the binding capacities of all JNK splice variants toward the transcription factors c-Jun, ATF2, and Elk-1 (10). Based on the results of an in vitro assay where 35S-labeled JNK binding to immobilized GST-tagged truncations of ATF2, c-Jun, or Elk-1 was estimated by autoradiography, the authors suggested that, among JNK1α1, JNK1β1, JNK2α2, and JNK3α1, the highest ATF2 binding capacity corresponds to JNK2α2, closely followed by JNK1β1, whereas JNK3α1 and JNK1α1 showed the lowest binding to the substrate. Differences in the experimental procedures employed here and in the report in question could account for the divergence in the binding capacities measured for the different JNK splice variants, especially because the quantitation by Gupta et al. (10) was done from estimation of signal intensity in autoradiographs. Furthermore, the kcat for the B-F-ATF2 phosphorylation reaction by JNK1β1 was 2.2 ± 0.7 min⁻¹ (Table 1), which is slightly lower than the reported values for other JNK splice variants. The JNK1β1 kcat was approximately half of what that parameter is for JNK3α1 (5.5 min⁻¹) (28), around 3.5-fold lower than for JNK1α1 (7.7 min⁻¹) (27), and more than 13-fold lower than for JNK2α2 (30.6 min⁻¹) (29). This suggests that JNK1β1 has a slightly lower capacity to convert the substrate ATF2 into the product phospho-ATF2 as compared with the other splice variants characterized so far (27,28). Another possibility for the differences in kcat and Km for JNK1β1 compared with the other splice variants could be heterogeneous activation (i.e. less than 100% activation of our sample) in this study, or of the splice variants in the other studies. Lisnock et al.
(30) showed that mono- and bisphosphorylated JNK3α1 had nearly identical activities, so this possibility appears unlikely but cannot be strictly ruled out.

TABLE 4. Kinetics of interaction between inactive and active JNK1β1 with H-N-cJun in the absence or presence of ATP

On the other hand, we report here a Km value for H-N-cJun(1-89) of 2.8 ± 0.9 μM (Table 1). Although the kcat value obtained for H-N-cJun (16.8 ± 5.1 min⁻¹) was within the range of the kcat values reported for the alternative JNK substrate ATF2, it is important to point out that the JNK1β1 turnover for c-Jun seemed to be about 7.5-fold higher than for ATF2. Additionally, a comparison of the catalytic efficiencies (kcat/Km) of JNK1β1 for B-F-ATF2 and H-N-cJun presented in Table 1 (3.2 ± 1.3 and 6.2 ± 2.1 μM⁻¹ min⁻¹, respectively) suggested no significant preference of the kinase for one substrate over the other. This corresponds to the first comprehensive characterization of the c-Jun kinetic mechanism for any splice variant of JNK. Interestingly, if the concentration of the protein substrate ATF2 or c-Jun used in the kinetic assays exceeded 2 μM, JNK1β1 activity was partly inhibited. This does not seem to be the case for the other JNK splice variants studied thus far. We believe this could be due to either substrate (ATF2 or c-Jun) or product (phospho-ATF2 or phospho-c-Jun) inhibition. The latter possibility seems less likely because during the assays no more than 15% of substrate was converted to product; therefore, presumably not enough product was generated to inhibit the enzyme activity. Because the inhibition in question was observed for both ATF2 and c-Jun, it did not seem to depend on the nature of the fusion protein used as substrate. It is possible that the nonproductive complexes formed at high substrate concentrations represent a splice variant-specific mechanism of enzyme activity regulation in vivo at physiological concentrations of the substrate. We are currently exploring this hypothesis.

Inhibition of JNK1β1 by ATP Analog and δ Domain of JIP1-To confirm the nature of the kinetic mechanism followed by JNK1β1 during the catalytic phosphorylation of substrates B-F-ATF2 and H-N-cJun, inhibition studies were conducted with AMP-PCP and JIP1 pep. Table 2 shows the kinetic parameters obtained for the inhibition of JNK1β1 substrate phosphorylation by AMP-PCP or JIP1 pep. The observed modes of inhibition are consistent with the random and sequential mechanism for the substrates, which bind to noninteracting binding sites (27,33). Moreover, this kinetic mechanism coincides with the mechanisms reported by Ember et al. (27,28) and Niu et al. (29) for JNK1α1, JNK3α1, and JNK2α2. Interestingly, the inhibition of JNK1β1 activity by JIP1 pep when the concentration of ATP was varied and the concentration of protein substrate was held constant showed a pure noncompetitive inhibition mechanism rather than a mixed noncompetitive mechanism, which was the case for JNK1α1 and JNK3α1 (27,28). Although the fit of the experimental data to the equation for mixed noncompetitive inhibition was almost as good as the fit done using the equation for pure noncompetitive inhibition, the similarity of the values obtained for Kis and Kii in the mixed noncompetitive fit (between 0.5- and 2-fold difference) did not allow us to justify the additional parameter included in the analysis of the inhibition.
JNK1β1 activity was inhibited by AMP-PCP in a manner similar to JNK1α1 and JNK3α1 (27,28), based on the less than 2.5-fold difference between the Ki values for the JNK variants mentioned. A comparison of the Ki values for the inhibition of B-F-ATF2 (78 ± 15 nM) and H-N-cJun phosphorylation (173 ± 39 nM) by JIP1 pep (Table 2) suggested a similar ability of the peptide to block the phosphorylation of both substrates (only a 2.2-fold difference in Ki values). Similar Ki values have been reported for the inhibition of B-F-ATF2 phosphorylation by JIP1 pep in the case of JNK1α1 (55 ± 4 nM) and JNK3α1 (25 ± 6 nM) (27,28). On the other hand, Niu et al. (29) reported that JIP1 pep inhibited JNK2α2 phosphorylation of GST-ATF2 (amino acids 19-96) with a Ki of 1.1 μM, which could reflect kinetic differences in the substrate phosphorylation due to the nature of the fusion protein used in their studies. Taken together, these results suggest that the kinetic mechanism for the phosphorylation of ATF2 is the same for all the splice variants of JNK characterized so far. On the other hand, the kinetic parameters (Km and kcat) did show differences for JNK1β1 when compared with other variants. These differences could be due to the variability in primary structure (98% sequence identity between JNK1α1 and JNK1β1, 91% identity between JNK3α1 and JNK1β1, and 85% identity between JNK2α2 and JNK1β1). It is tempting to speculate that the differences in kinetic parameters are due to subtle but unique protein-protein interactions between JNK variants and substrates, perhaps associated with features of JNK subdomains like the α and β segments of the splice variants.

The Kinetics of JNK1β1 Interaction with B-F-ATF2 or H-N-cJun Are Affected by ATP Presence or Phosphorylation of Kinase-BLI is a methodology that can be employed to study JNK interaction with substrates, activators, and scaffolding proteins, as long as the affinity of the interaction is within the detection limits of the instrument (KD 10⁻³-10⁻¹² M for the Octet RED instrument used here) and the size of the analyte is large enough (>250 Da) to be detected upon binding to the immobilized ligand. To date, we have employed this technology to study the interactions of JNK1β1, JNK3α1, JNK1β2, and JNK1α1 with B-F-ATF2 and of JNK3α1 with c-Jun.3 The real-time binding data shown in Figs. 4 and 5 for JNK1β1/B-F-ATF2 and JNK1β1/H-N-cJun, respectively, suggested that both the association and dissociation processes were described by two steps: an initial quick event followed by a slower and much longer event. Both the presence of ATP and the phosphorylation of JNK1β1 seemed to increase the biphasic character of the association and dissociation phases of the binding curves, and as a consequence equilibrium was reached faster. Inactive JNK1β1 bound to B-F-ATF2 with affinity similar to that for H-N-cJun (Tables 3 and 4). ATP caused a slight decrease in the KD of inactive JNK1β1 for B-F-ATF2 and H-N-cJun, whereas phosphorylation of the kinase increased the KD for both substrates. Finally, in the presence of ATP, active JNK1β1 showed higher affinity for B-F-ATF2 and H-N-cJun when compared with the active kinase in the absence of ATP (steady-state KD values in Tables 3 and 4). The lower affinity observed for the interaction of active JNK1β1 and B-F-ATF2 or H-N-cJun compared with the interaction of inactive JNK1β1 seemed to be mainly driven by an increase in the dissociation rate constants. Conversely, ATP-induced changes in KD were mainly driven by faster on rates (Tables 3 and 4).
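The steady-state KD values just discussed come from fitting the equilibrium response against analyte concentration, independently of kon and koff. A minimal sketch, assuming a simple hyperbolic binding isotherm; the response values below are hypothetical placeholders chosen only to show the shape of the analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def r_eq(C, Rmax, Kd):
    """Equilibrium BLI response for a 1:1 interaction: R = Rmax*C/(Kd + C)."""
    return Rmax * C / (Kd + C)

conc = np.array([0.15, 0.3, 0.6, 1.25, 2.5, 5.0, 10.0])      # JNK1beta1, uM
resp = np.array([0.09, 0.17, 0.29, 0.45, 0.62, 0.76, 0.85])  # plateau shift, nm (hypothetical)

(Rmax, Kd), _ = curve_fit(r_eq, conc, resp, p0=[1.0, 1.0])
print(f"Rmax = {Rmax:.2f} nm, steady-state KD = {Kd:.2f} uM")
```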
Binding curve fitting was done using the 2:1 HL equations, which assume that there are at least two populations of immobilized ligand that differ in their ability to bind to the analyte, so that the binding curves are described by two reactions with different rates. Based on the structural information and binding analyses done by other authors for several JNK variants (35-42), we expected Langmuirian kinetics with 1:1 stoichiometry for the studied interactions. Some of the factors that can cause deviations from the pseudo-first order approximation of binding data include mass transfer effects, immobilized ligand density, inhomogeneity of the immobilized ligand or soluble analyte, immobilization chemistry, and rebinding of dissociated analyte (43). To eliminate some of the possibilities that cause non-Langmuirian kinetics, we attempted the following: 1) fitting data with equations for mass transfer; 2) immobilization of ligand at low density (between 0.5 and 3 nm signal shift during the ligand loading step); 3) use of 10× kinetic buffer to minimize nonspecific interactions; 4) analysis of the interactions in the reverse orientation; and 5) use of a "sink" (125 nM JIP1 pep in the dissociation solution) to prevent rebinding of dissociated JNK1β1 to the immobilized substrate and to achieve reliable kinetic data (44). None of these changes significantly altered the shape of the binding curves and, more importantly, did not drastically improve the data fitting process with binding models different from the 2:1 HL model. Hence, we believe that the observed mode of interaction between JNK1β1 and its substrates reflects both the actual kinetics of the protein-protein interaction in vitro and a certain degree of ligand heterogeneity due to the biotinylation process, in particular for H-N-cJun, which was biotinylated in an in vitro reaction that attaches biotin randomly to solvent-exposed primary amines (45,46). The 2:1 binding model for the interaction of JNK1β1 and its substrates does not seem to be a unique feature of this particular splice variant, based on the fact that we have seen similar behaviors for JNK1α1, JNK1β2, and JNK3α1 by BLI analysis.3 It is possible that both active and inactive JNK1β1 have the ability to form two different complexes with each substrate: a productive complex, formed by binding an accessible or properly oriented ligand immobilized on the surface of the biosensor and likely represented by kon2, and a nonproductive complex, perhaps reflective of poorly oriented or less accessible substrate, represented by kon1 (Fig. 1b and Tables 3 and 4). These complexes form both in the presence and absence of ATP. Based on our data, we cannot completely rule out the possibility that substrate immobilization on the biosensor could have caused restricted conformational freedom for the substrate, steric hindrance, and/or a limited number of appropriate substrate orientations for JNK1β1 binding (47), therefore giving rise to the complex interaction kinetics observed in our studies. Because we expected the kon2 values shown in Tables 3 and 4 to more accurately represent the kinetics of the JNK-substrate interaction, a comparison of this parameter for the active enzyme in the presence of ATP with the kcat/Km values from Table 1 allows us to assess the validity of our kinetic analysis. For a given enzymatic reaction, kon cannot be greater than kcat/Km (48), because the reaction rate can only be as fast as the rate of substrate-enzyme association.
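This bound can be checked in two lines of arithmetic against the numbers in this paper: kcat/Km for B-F-ATF2 is taken from Table 1 as quoted in the text, and, since the BLI tables are not reproduced here, kon2 is back-calculated from the ~2.5-fold ratio reported in the next paragraph rather than read off directly:

```python
# Consistency check: for an enzymatic reaction, kon cannot exceed kcat/Km.
kcat_over_km = 3.2           # uM^-1 min^-1, B-F-ATF2 (Table 1, as quoted in the text)
kon2 = kcat_over_km / 2.5    # ~1.3 uM^-1 min^-1, inferred from the ~2.5-fold statement
assert kon2 <= kcat_over_km  # holds, so the BLI and enzymology data are compatible
print(f"kon2 ~ {kon2:.2f} <= kcat/Km = {kcat_over_km} (uM^-1 min^-1)")
```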
In the present analysis, kcat/Km for B-F-ATF2 was approximately 2.5-fold higher than the values of kon2 for the interaction of active JNK1β1 with B-F-ATF2 in the presence of ATP. This presents only a minor discrepancy based on what was stated before, and we believe the observed differences could be due to the inherent differences in the technologies used to obtain the kinetics for the reaction and for the protein-protein interaction. Additionally, kcat/Km is an apparent second-order rate constant defined by a complex equation that includes the microscopic rate constants in the reaction equation prior to an irreversible step (k1, k-1, k2, etc.). On the other hand, kon only represents the substrate-kinase binding step of the reaction. Therefore, the complexity of the studied interactions could contribute to the mentioned discrepancy. Ngoei et al. (49) recently published an analysis of the interaction between JNK1α1 and JIP1 pep using surface plasmon resonance technology. The authors presented binding curves for inactive and active JNK1α1 interacting with JIP1 pep. Their data showed a remarkable similarity to the binding curves shown here. However, they suggested that the phosphorylation of JNK1α1 significantly increased its affinity for JIP1 pep. The differences in KD shifts upon JNK activation between our data and Ngoei et al. (49) could reflect: 1) distinct mechanisms of interaction between JNK and its substrates as compared with the interaction between the kinase and the δ-domain peptide of the scaffold protein JIP1; or 2) an inappropriate representation of the JNK-JIP1 interaction due to the use of the small peptide rather than the full-length scaffold protein to study the kinetics of the interaction. Nonetheless, it is not unreasonable to expect a shift in the binding capabilities of an inactive versus an active enzyme. These changes could be caused by conformational alterations that allow JNK1β1 in the presence of ATP, or phosphorylated JNK1β1, to bind differently to ATF2 and c-Jun, altering its affinity for those ligands. Changes in affinity for interacting proteins and increases in enzymatic activity upon phosphorylation (activation) have been reported for other MAP kinases like ERK2 (reviewed by Rubinfeld and Seger (50)) and by Burkhard et al. (51), who demonstrated by surface plasmon resonance that phosphorylation of ERK2 affected interactions with Elk-1 and stathmin but not interactions with c-Fos and RSK-1. Interestingly, activation of JNK1β1 seemed to be coupled to the almost complete dissociation of the kinase from the ligand (Figs. 4, c(i) and d(i), and 5, c(i) and d(i)). This observation suggests that the stability of the active kinase-substrate complex was lower than the stability of the complex between the inactive kinase and the substrates. This could be a regulatory mechanism to control the activity of the active enzyme in vivo. Additionally, in the presence of ATP, active JNK1β1 should be able to catalyze the phosphorylation of the immobilized substrate, and once that reaction has occurred, rapid dissociation of the kinase from the substrate can be expected. It is possible that the changes in the ATF2 and c-Jun KD values observed by BLI in the absence and presence of ATP were not large enough to cause a shift from hyperbolic to sigmoidal enzyme kinetics for JNK1β1, implying that the kinase is not an allosteric enzyme. However, it is interesting to speculate that changes within JNK could occur upon phosphorylation or ATP binding, altering the dynamics of the protein.
These ideas are currently being investigated in our laboratory by nuclear magnetic resonance of JNK in the presence and absence of ligands. In summary, we were able to show that: 1) JNK1β1 followed a random and sequential enzymatic mechanism with noninteracting substrate binding sites; 2) JNK1β1 showed a slightly lower Km and kcat for ATF2 than other JNK splice variants characterized so far; 3) JNK1β1 showed a higher turnover number for c-Jun when compared with ATF2; 4) JNK1β1 showed no substrate preference between c-Jun and ATF2; and 5) the inhibition of its ATF2 phosphorylation activity by AMP-PCP and JIP1 pep was remarkably similar to that of JNK1α1 and JNK3α1. We were also able to show that B-F-ATF2 and H-N-cJun had affinities similar to each other when binding to either inactive or active JNK1β1. Additionally, we were able to demonstrate that both activation of JNK1β1 and the presence of ATP induced alterations in the kinetics of the interaction with the substrates ATF2 and c-Jun. Taken together, these findings suggest that the in vitro enzymatic activity of JNK1β1 toward ATF2 was distinguishable from that of the other splice variants studied so far, and that inhibition studies for ATF2 phosphorylation by a small-molecule nonhydrolyzable ATP analog (AMP-PCP) or by the substrate-mimetic peptide JIP1 pep did not uncover distinctive features of JNK1β1 when compared with the other variants studied. Differences in the kinetics of the interactions with binding partners like substrates, scaffold proteins, activators, inhibitors, and phosphatases could be a mechanism for subtle regulation of the binding and activity of JNK and therefore modulation of the function of this important stress-associated signaling pathway.
2018-04-03T02:53:18.185Z
2012-02-17T00:00:00.000
{ "year": 2012, "sha1": "b9401f749f3851d537ac3d8334fbc97d8b70d849", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/287/16/13291.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "c31217b2e3e406108c536ae4d6de24e42fc93094", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119136931
pes2o/s2orc
v3-fos-license
Feedback control of nonlinear PDEs using data-efficient reduced order models based on the Koopman operator

In the development of model predictive controllers for PDE-constrained problems, the use of reduced order models is essential to enable real-time applicability. Besides local linearization approaches, Proper Orthogonal Decomposition (POD) has been most widely used in the past in order to derive such models. Due to the huge advances concerning both theory as well as the numerical approximation, a very promising alternative based on the Koopman operator has recently emerged. In this chapter, we present two control strategies for model predictive control of nonlinear PDEs using data-efficient approximations of the Koopman operator. In the first one, the dynamic control system is replaced by a small number of autonomous systems with different yet constant inputs. The control problem is consequently transformed into a switching problem. In the second approach, a bilinear surrogate model is obtained via linear interpolation between two of these autonomous systems. Using a recent convergence result for Extended Dynamic Mode Decomposition (EDMD), convergence to the true optimum can be proved. We study the properties of these two strategies with respect to solution quality, data requirements, and complexity of the resulting optimization problem using the 1D Burgers Equation and the 2D Navier-Stokes Equations as examples. Finally, an extension for online adaptivity is presented.

1 Introduction

The control of systems governed by nonlinear partial differential equations (PDEs) is very challenging. Since classical control strategies are difficult to develop in this context, advanced techniques such as Model Predictive Control (MPC) [1] are very popular. In MPC, an open-loop optimal control problem is repeatedly solved online over a finite-time horizon using a model of the system dynamics, which then results in a closed-loop controller. The PDE-constrained optimal control problems we are interested in are of the form

  min_{u ∈ U} J(y) = ∫_{t_0}^{t_e} L(y(t)) dt
  s.t. ẏ(t) = G(y(t), u(t)),   (OCP)

where y ∈ Y is the system state (depending on the d-dimensional space coordinate x ∈ Ω ⊆ R^d and the time t ∈ R_{≥0}), u ∈ U is the control function, and the partial differential operator G : Y × U → Y describes the system dynamics. For ease of notation, the term in the objective function L : Y → R does not depend on u explicitly. The downside of MPC is that the open-loop control problem (OCP) has to be solved in a short amount of time, which is generally not possible for PDE-constrained problems when using a standard discretization approach such as finite elements or finite volumes. A remedy to this issue is reduced order modeling (ROM), where the high-fidelity model is replaced by a low-dimensional surrogate model, see [2,3] for overviews. In the nonlinear case, the method of Proper Orthogonal Decomposition (POD) [4] has been successfully applied in a large variety of problems concerning both simulation and control. In the latter case, there exist different approaches to ensure convergence towards the true optimum. The two most popular are to derive error bounds based on the singular values associated with the POD modes [5,6,7,8] and to adapt classical trust-region approaches to surrogate modeling [9,10,11]. A much more recent approach to develop a ROM is via the linear but infinite-dimensional Koopman operator [12], which describes the dynamics of observables.
In the past decade, significant advances were obtained concerning theoretical aspects of the Koopman operator [13,14,15,16] as well as its numerical approximation via Dynamic Mode Decomposition (DMD) [17,18,19] or Extended Dynamic Mode Decomposition (EDMD) [20,21,22]. An advantage over POD is that this approach can also be applied in situations where the underlying system dynamics is unknown. The above-mentioned advancements have led to several approaches for including the Koopman operator in control frameworks, see, e.g., [23,24,25,26,27,28]. In many of these approaches, the Koopman operator is approximated for an augmented state (consisting of the actual state and the control) in order to deal with the non-autonomous control system. For this reason, a large amount of data is necessary to cover a sufficient range of the dynamics. Alternative approaches have recently been presented by the authors in [29] and [30]. Since the Koopman operator is only applicable to autonomous systems in its original formulation, we take the following two steps: i) replace the control system G by a finite number of autonomous systems G_{u_j} with constant input u_j; ii) construct reduced order models for low-dimensional observations (instead of the entire state) of G_{u_j} using the corresponding Koopman operator U_{u_j}. In a third step, the PDE constraint in Problem (OCP) is replaced by the reduced model. We perform this step in two different ways (cf. [29] and [30], respectively, for details): iii-a) transform the optimization problem into a switching problem (which of the autonomous systems has to be applied in each time step?); iii-b) construct a bilinear surrogate model via linear interpolation between two Koopman operators U_{u_0} and U_{u_1}. In this way, convergence towards the true optimum can be shown by utilizing a recent convergence result for EDMD [31]. Since reduced order modeling approaches using the Koopman operator are relatively new, practitioners have much less experience with them compared to more established methods such as POD. The purpose of this chapter is therefore to study the two approaches described above regarding the numerical performance. We address both the quality of the solution compared to the PDE-constrained problem as well as the influence of the training data. Furthermore, the effect of introducing the switching problem transformation is studied. The remainder of this chapter is structured as follows. In Section 2, the notation for the Koopman operator is introduced and the reduced order modeling approach for low-dimensional observations is presented. In Section 3, we give a short introduction to model predictive control, which we use to realize feedback behavior. We then introduce the two reduced order modeling strategies in Section 4 before studying numerical properties and the control performance in Section 5. Finally, we use the concept from [32] to obtain online updates for the reduced models in Section 6 before drawing a conclusion in Section 7.

2 Reduced order modeling using the Koopman operator

Let Φ : Y → Y be a discrete deterministic dynamical system defined on the state space Y and let f : Y → R be a real-valued observable of the system.¹ Then the Koopman operator U : F → F with F = L^∞(Y), see [33,15,16,20], which describes the evolution of the observable f, is defined by (Uf)(y) = f(Φ(y)). The Koopman operator is linear but infinite-dimensional. Its adjoint, the Perron-Frobenius operator, describes the evolution of densities.
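As a toy illustration of this definition (ours, not the chapter's; the map and the observable below are arbitrary), in Matlab the operator acts on an observable simply by composition with Φ:

% Toy illustration of (Uf)(y) = f(Phi(y)); Phi and f are arbitrary choices.
Phi = @(y) 3.7 * y .* (1 - y);   % a discrete dynamical system on Y = [0, 1]
f   = @(y) cos(2*pi*y);          % a real-valued observable f : Y -> R
Uf  = @(y) f(Phi(y));            % Koopman operator: composition with Phi
y0  = 0.3;
fprintf('(Uf)(y0) = %.4f equals f(Phi(y0)) = %.4f\n', Uf(y0), f(Phi(y0)));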
The definition of the Koopman operator can be naturally extended to continuous-time dynamical systems as described in [33,15]. Given an autonomous system of the form

  ẏ(t) = G(y(t)),

the Koopman semigroup of operators {U^t} is defined as

  (U^t f)(y) = f(Φ^t(y)),

where Φ^t is the flow map associated with G. In what follows, we will mainly consider discrete dynamical systems, given by the discretization of ODEs or PDEs. That is, Φ = Φ^h for a fixed time step h.

One method to compute a numerical approximation of the Koopman operator from data is EDMD [20,22]. The following brief description is based on the review paper [34]. EDMD is a generalization of DMD [17,19] and can be used to compute a finite-dimensional approximation of the Koopman operator, its eigenvalues, eigenfunctions, and modes. In contrast to DMD, EDMD allows arbitrary basis functions (for instance, monomials, Hermite polynomials, or trigonometric functions) for the approximation of the dynamics. We do not observe the full (potentially infinite-dimensional) state of the system, but consider only a finite number of measurements, given by z = f(y) ∈ R^q. The special case f = Id is known as the full state observable. For a given set of basis functions {ψ_1, ψ_2, ..., ψ_k}, we then define a vector-valued function ψ : R^q → R^k by

  ψ(z) = (ψ_1(z), ψ_2(z), ..., ψ_k(z))^T.

If ψ(z) = z, we obtain DMD as a special case of EDMD. We assume that we have either measurement or simulation data, written in matrix form as

  Z = (z_1, ..., z_m) and Z̃ = (z̃_1, ..., z̃_m),

where z̃_i = f(Φ(y_i)). The data could either be obtained via many short simulations or experiments with different initial conditions or one long-term trajectory or measurement. If the data is extracted from one long trajectory, then z̃_i = z_{i+1}. The data matrices are embedded into the typically higher-dimensional feature space by

  Ψ_X = (ψ(z_1), ..., ψ(z_m)) and Ψ_Y = (ψ(z̃_1), ..., ψ(z̃_m)).

With these data matrices, we then compute the matrix U ∈ R^{k×k} defined by

  U = Ψ_Y Ψ_X^+,

where + denotes the pseudoinverse. The matrix U can be viewed as a finite-dimensional approximation of the Koopman operator. Convergence of EDMD in the infinite-data limit for a fixed set of basis functions was first analyzed in [20,22]. Convergence towards the Koopman operator for the case that also the number of basis functions goes to infinity has recently been proven in [31]. Under some assumptions, such as independent drawing of data points with respect to some given probability measure µ and boundedness of the Koopman operator, the EDMD approximation U converges to the Koopman operator for k → ∞ and m → ∞, provided that {ψ_1, ψ_2, ...} is an orthonormal basis of F. For the convergence results below, we assume that these conditions are satisfied.

Figure 1: Relation between the system dynamics Φ, the corresponding Koopman operator U and its finite-dimensional representation U computed via EDMD.

The decomposition of the Koopman operator into modes, eigenvalues and eigenfunctions is commonly used to analyze the system dynamics as well as predict the future state. In the situation we are presenting here, we can pursue an even simpler approach and obtain the update for the observable z directly using U, which yields the Koopman operator based reduced order model:

  ψ(z_{i+1}) = U ψ(z_i).   (K-ROM)

This approach is visualized in Figure 1, where we see that applying U in the lifted space corresponds (approximately) to observing the image of the full dynamics, i.e., U ψ(f(y)) ≈ ψ(f(Φ(y))). If we let the number of data points as well as the number of basis functions go to infinity, then the EDMD approximation converges to the Koopman operator as discussed above. Table 1 shows the efficiency of the K-ROM for the two examples which we will consider throughout this chapter, the 1D Burgers equation and the 2D Navier-Stokes equations.
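To make the procedure concrete, the following minimal Matlab sketch assembles an EDMD approximation from synthetic snapshot pairs. The synthetic data, the simplified dictionary (element-wise monomials without cross terms), and the convention of reading z back out of the lifted vector are illustrative assumptions, not the chapter's exact setup.

% Minimal EDMD sketch: lift snapshot pairs (z_i, z~_i) with a dictionary
% psi and solve the least-squares problem Psi_Y ~ U * Psi_X.
rng(0);
q = 3; m = 500;
Z  = randn(q, m);                       % observations z_i (synthetic)
Zt = 0.9 * Z + 0.05 * randn(q, m);      % shifted observations z~_i (synthetic)
psi  = @(z) [ones(1, size(z, 2)); z; z.^2];   % k = 1 + 2q basis functions
PsiX = psi(Z);                          % k x m lifted snapshot matrix
PsiY = psi(Zt);
U = PsiY * pinv(PsiX);                  % k x k Koopman approximation
% One step of (K-ROM): propagate the lifted state, then read z back out
% (here, entries 2..q+1 of the lifted vector are the observation itself).
z     = Z(:, 1);
lift  = U * psi(z);
znext = lift(2:q+1);

For a full monomial dictionary in q variables up to total order p, the dictionary size would instead be nchoosek(q + p, p), which is why the K-ROM dimensions reported in Table 1 grow quickly with the maximum monomial order.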
Here, we have chosen a monomial basis for the dictionary Ψ. The maximum order of the monomials and the number of observations yield the K-ROM dimension. We see that the dimension of the PDE-constrained problem (obtained by a finite-difference (FD) approximation) is reduced by a factor of 1.4 in the Burgers example. Although this does not seem like a large reduction, we obtain a speed-up of approximately 100, since (K-ROM) is linear on the one hand and we can choose larger step sizes in the reduced model on the other hand. For the Navier-Stokes example, we additionally have a major reduction of the dimension of the state compared to the finite volume (FV) discretization by a factor of almost 500. This way, a speed-up of 75,000 is achieved.

3 Model predictive control

In MPC, feedback behavior is obtained by repeatedly solving an open-loop optimal control problem over a finite prediction horizon of length p:

  min_{u ∈ R^p} Σ_{i=0}^{p-1} L(y_{i+1})
  s.t. y_{i+1} = Φ_h(y_i, u_i), y_0 = y_s,   (MPC)

where y_s is the initial condition obtained (or approximated) from sensor data. The first part of this solution is then applied to the real system while the optimization is repeated with the prediction horizon moving forward by one sample time. In Problem (MPC), the system dynamics are of discrete form since the control input is constant over each sample time interval. When dealing with continuous-time systems such as Problem (OCP), this formulation can be regarded as the flow map Φ_h (cf. Section 2) of the continuous dynamics with time step h. A consequence of the MPC method is that Problem (MPC) has to be solved online, i.e., within the time step h. Since this is in general impossible for PDE-constrained problems, we will present two approaches to replace the PDE constraint in (MPC) by a Koopman operator based reduced order model in the next section.

4 A data-efficient method to construct Koopman operator based surrogate models

As already outlined in the introduction, we will construct K-ROMs with significant speedup factors. In order to do this in a data-efficient way, we perform steps i) and ii) mentioned there. The first step ensures that our data requirements are moderate, since we only have to collect data for a finite set of inputs. The second step allows us to derive linear models with a low dimension. Since the approach is entirely data-based and hence equation-free, we can use any observation, in particular those which are relevant for the control task at hand. In a third step, we have to transform the optimal control problem accordingly, which we will do in two different ways. In Section 4.1, the optimization problem is transformed into a switching problem, and in Section 4.2, a bilinear model is constructed via linear interpolation.

4.1 Transformation to switched systems

A large variety of technical systems is controlled via switching between different inputs. Examples are valves in chemical reactors, which are either open or closed, or the switching of gears in electrical drives. These so-called switched systems can be regarded as a special case of hybrid systems, which possess both continuous and discrete-time control inputs (cf. [35] for a survey). We here make use of the concept of switched systems in order to reduce the data that is required for the training process of the K-ROM. To this end, we replace the right-hand side of the dynamical control system by a finite set of autonomous systems, which is achieved by fixing the input to n_c different constant values {u_0, ..., u_{n_c-1}} in (OCP). This yields n_c different differential operators, G_{u_0}, ..., G_{u_{n_c-1}}, and the respective flow maps Φ_{u_0}, ..., Φ_{u_{n_c-1}}.
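A brief sketch of this replacement step (the right-hand side G, the constant inputs, and the sample time below are placeholders, not one of the chapter's examples): each constant input u_j induces its own autonomous system, which we wrap in a flow map advancing the state by one sample time h.

% Freeze the input of ydot = G(y, u) at n_c constant values and wrap each
% resulting autonomous system in a one-step flow map.
us = [-0.075, 0, 0.075];                      % n_c = 3 constant inputs
h  = 0.5;                                     % sample time of the flow maps
G  = @(y, u) [y(2); -y(1) - y(1)^3 + u];      % placeholder dynamics
Phi = cell(size(us));
for j = 1:numel(us)
    Phi{j} = @(y0) flowmap(G, us(j), h, y0);  % Phi{j} advances y by h under u_j
end

function yh = flowmap(G, u, h, y0)
% Integrate the frozen-input system over [0, h] and return the end state.
[~, Y] = ode45(@(t, y) G(y, u), [0 h], y0);
yh = Y(end, :).';
end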
A consequence of this approach is that the optimization problem is transformed into a combinatorial problem, where we have to select the optimal right-hand side in each time step. Since combinatorial problems are often more challenging to solve, we can fix the sequence in which the different autonomous systems are used, i.e., we switch from G_{u_0} to G_{u_1}, from G_{u_1} to G_{u_2}, and so on. Having reached the final system, we go back from G_{u_{n_c-1}} to G_{u_0}. This way, the optimization variable again becomes real-valued, as we now have to compute the time instants for the switches τ ∈ R^{p+2} (with τ_0 = t_0 and τ_{p+1} = t_e):

  ẏ(t) = G_{u_{i mod n_c}}(y(t)) for t ∈ [τ_i, τ_{i+1}), i = 0, ..., p.   (1)

Using (1), we can reformulate the open-loop problem (OCP) in terms of the switching instants τ, which yields the switching time optimization problem (OCP_s). Different methods exist for efficiently computing solutions to (OCP_s), see, e.g., [36,37,38] for continuous-time or [39] for discrete-time problems. A second-order method has recently been proposed in [40]. The following example illustrates the consequences of introducing such a switching control.

Example 4.1. Assume we want to control the behavior of the Van der Pol oscillator, given by

  ẏ_1(t) = y_2(t),
  ẏ_2(t) = µ (1 - y_1(t)^2) y_2(t) - y_1(t) + u(t).

By restricting the input u to n_c values, we can transform the control system into n_c autonomous systems of the form

  ẏ_1(t) = y_2(t),
  ẏ_2(t) = µ (1 - y_1(t)^2) y_2(t) - y_1(t) + u_j.

The system dynamics for different input functions u are shown in Figure 2.

If we consider the discrete-time formulation (MPC) for the closed-loop controller, the transformation is achieved in a very similar fashion: the dynamics y_{i+1} = Φ_h(y_i, u_i) in (MPC) are replaced by y_{i+1} = Φ_{u_{τ_i}}(y_i), which yields Problem (MPC_s). Since the input is constant over the sample time interval, we here obtain a combinatorial problem without a workaround as in the continuous-time case. Each entry of τ now describes which flow map Φ_{τ_i} to apply in the i-th step. A popular approach to solve discrete-time optimal control problems of such form is via dynamic programming [41]. However, this is only advisable if we are interested in larger prediction horizons p. For low values of p (say, 3), the most efficient way is indeed to evaluate all n_c^p values of τ.

4.1.1 Example: The 1D Burgers equation

We now study the difference between a continuous control input and a switched system approximation using the 1D Burgers equation with periodic boundary conditions and a distributed control:

  ẏ(x, t) = -y(x, t) y_x(x, t) + ν y_xx(x, t) + u(t) χ(x).   (2)

The viscosity is set to ν = 0.01, and the distributed control is realized by a time dependent scalar input u and a shape function χ, which is shown in Figure 3 (a). In order to enable comparability to our K-ROM approach, the objective function only depends on a few points in space (the black dots in Figure 3 (a)), i.e., z(t) = (y(x_1, t), ..., y(x_q, t)), and we formulate the tracking type objective function in terms of these observations only. This yields the following MPC problem:

  min_{u ∈ R^p} Σ_{i=0}^{p-1} ||z_{i+1} - z_{i+1}^{ref}||_2^2
  s.t. y_{i+1} = Φ(y_i, u_i), y_0 = y_s,   (3)

with Φ(y, u) being the flow map of (2) and y_s the initial value for the current iteration. Similar to Example 4.1, we now limit the control input to three constant values u_0 = 0.075, u_1 = 0 and u_2 = -0.075. By this, the control term on the right-hand side of (2) is transformed to u_j χ(x), which yields Φ_{u_j}(y) as the flow maps corresponding to the constant inputs u_0, u_1 and u_2. The MPC problem is transformed accordingly:

  min_{τ ∈ {0,1,2}^p} Σ_{i=0}^{p-1} ||z_{i+1} - z_{i+1}^{ref}||_2^2
  s.t. y_{i+1} = Φ_{u_{τ_i}}(y_i), y_0 = y_s.   (4)

The results for the two MPC problems (3) and (4) are compared in Figure 3. We see in (b) that the switching approach results in inputs which are close to the continuous solution when averaging over small time frames. In (c) and (d) we see that this results in a solution with slightly reduced quality, i.e., the distance to the reference trajectory is slightly larger. Figures (e) and (f) show the observation z for the two problems, i.e., the dynamics which the controller acts on.
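For such small horizons, the exhaustive evaluation mentioned above is only a few lines; a sketch in Matlab (Phi is the cell array of flow maps from the previous sketch, while the current state ys and the stage cost L are placeholders):

% Evaluate all n_c^p switching sequences tau over a horizon of p = 3 and
% keep the best one; only its first entry is applied before the horizon shifts.
nc = 3; p = 3;
taus = dec2base(0:nc^p - 1, nc) - '0' + 1;    % all sequences, entries in 1..nc
ys = [1; 0];                                  % current measured state (placeholder)
L  = @(y) y(1)^2;                             % stage cost (placeholder)
bestJ = inf; tau = taus(1, :);
for s = 1:size(taus, 1)
    y = ys; J = 0;
    for i = 1:p
        y = Phi{taus(s, i)}(y);               % apply the selected flow map
        J = J + L(y);
    end
    if J < bestJ, bestJ = J; tau = taus(s, :); end
end
% tau(1) is the system applied to the plant in this MPC step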
Finally, the state y corresponding to (e) is shown in (g).

4.1.2 Switched system MPC based on K-ROMs

Assuming that we have computed the n_c K-ROMs (i.e., U_{u_0} to U_{u_{n_c-1}}) via EDMD from data, we can now replace the expensive PDE evaluation in (MPC_s) by the much cheaper K-ROM:

  min_{τ ∈ {0,...,n_c-1}^p} Σ_{i=0}^{p-1} L̃(ψ(z_{i+1}))
  s.t. ψ(z_{i+1}) = U_{u_{τ_i}} ψ(z_i), z_0 = f(y_s),   (K-MPC_s)

where L̃ is the reduced objective function formulated with respect to the observables. We now compare the two problem formulations (MPC_s) and (K-MPC_s), i.e., the switching time MPC problem and the corresponding approximation using the K-ROM. To this end, we first assume that the full objective function L can be expressed in terms of the observations z:

Assumption 4.2. L(y(t)) = L̃(ψ(z_i)) for all t ∈ {t_0, t_0 + h, t_0 + 2h, ..., t_e} and the corresponding i = (t - t_0)/h.

The assumption is not restrictive, since this has to be satisfied in every application where only observations are available (e.g., sensor data). Consequently, the objective L has to be defined accordingly. This assumption allows us to prove convergence for the K-ROM based MPC problem as the number of measurements m and the basis size k for the dictionary Ψ tend to infinity (i.e., we have convergence for EDMD, see [31] for details).

Theorem 4.3 (cf. [29]). Consider Problem (MPC_s) and the K-ROM based approximation (K-MPC_s), and assume that we have convergence of EDMD towards the Koopman operator according to [31], i.e., the EDMD approximations U_{u_j} converge to the corresponding Koopman operators for j = 0, ..., n_c - 1 as k, m → ∞. Then the solution of (K-MPC_s) converges to the solution of (MPC_s).

As we have seen in Table 1 for two example problems, this reduced order modeling approach allows us to reduce the numerical effort by several orders of magnitude. A more detailed discussion regarding the numerical benefits will follow in Section 5.

4.2 Bilinear models via linear interpolation

The switched systems approach allows us to replace the system dynamics by a K-ROM in a straightforward manner. However, this approach comes at a price, namely that we now have a combinatorial optimization problem, which is often more expensive to solve. Furthermore, we have limited the control input to a small number of predefined values. Although this does not necessarily have a major impact on the control performance, as we have seen in Section 4.1, we would nonetheless like to allow for arbitrary control inputs. One possibility to do so is to approximate a Koopman operator for an augmented observation, i.e., ẑ = (z, u). This approach is pursued in [23] for open-loop and in [26] for closed-loop control problems. In order to approximate the Koopman operator for ẑ, data for different combinations of states and inputs has to be collected. In order to reduce the amount of required data, we here pursue an alternative approach, where we still only approximate Koopman operators for a small number of autonomous systems. In order to allow for continuous controls, we now define the matrices A = U_{u_0} and B = U_{u_1} - U_{u_0} and introduce the bilinear control system

  ψ(z_{i+1}) = (A + (u_i - u_0)/(u_1 - u_0) B) ψ(z_i).   (5)

The term bilinear refers to the fact that (5) contains a term ψ(z) · u but is otherwise linear both in ψ(z) and in u [42]. We set the lower and upper bound for the input to u_0 and u_1, respectively. Consequently, for u_i ∈ [u_0, u_1], we simply interpolate linearly between the two autonomous dynamics corresponding to u_0 and u_1. Provided that we have convergence of EDMD, system (5) is equal to the observation of the exact dynamics for u_0 and u_1. In order to show convergence for intermediate values, we have to introduce another assumption, namely that the flow map Φ(y, u) depends linearly on u.
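Before introducing that assumption, here is a hedged sketch of how (5) can be used inside a continuous MPC step, solved with Matlab's fmincon and its SQP algorithm as done later in Section 5. The stand-in EDMD matrices, the normalization of the input, and the tracking cost are illustrative assumptions.

% Continuous K-MPC step with the bilinear K-ROM (5) and fmincon (SQP).
q = 3; k = 1 + 2*q;
psi = @(z) [1; z; z.^2];                 % dictionary for a single observation
U0 = 0.9 * eye(k);                       % stand-in EDMD matrix for input u0
U1 = U0 + 0.05 * eye(k);                 % stand-in EDMD matrix for input u1
u0 = -0.075; u1 = 0.075;
A = U0; B = U1 - U0;                     % bilinear model (5)
p  = 3;                                  % prediction horizon
zs = [0.2; -0.1; 0.05];                  % current observation (placeholder)
zref = zeros(q, 1);                      % reference (placeholder)
J = @(useq) rollout(useq, zs, A, B, psi, q, u0, u1, zref);
opts = optimoptions('fmincon', 'Algorithm', 'sqp', 'Display', 'off');
uopt = fmincon(J, zeros(p, 1), [], [], [], [], ...
               u0 * ones(p, 1), u1 * ones(p, 1), [], opts);
% apply uopt(1) to the plant, then shift the horizon by one sample time

function J = rollout(useq, z, A, B, psi, q, u0, u1, zref)
% Accumulate the tracking cost along the bilinear K-ROM prediction.
J = 0;
for i = 1:numel(useq)
    alpha = (useq(i) - u0) / (u1 - u0);  % normalized input in [0, 1]
    lift  = (A + alpha * B) * psi(z);    % bilinear update in the lifted space
    z     = lift(2:q+1);                 % read the observation back out
    J     = J + norm(z - zref)^2;
end
end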
Lemma 4.4. Assume that the observation map f is linear and that the system dynamics Φ are linear in u. Then a linear interpolation between the two operators is equal to the Koopman operator for the linear interpolation between the controls u_0 and u_1, i.e., for λ ∈ [0, 1],

  λ U_{u_1} + (1 - λ) U_{u_0} = U_{λ u_1 + (1-λ) u_0}.

Proof. The claim follows directly from the linearity assumptions:

  (λ U_{u_1} + (1 - λ) U_{u_0}) f(y) = λ f(Φ(y, u_1)) + (1 - λ) f(Φ(y, u_0))
    = f(λ Φ(y, u_1) + (1 - λ) Φ(y, u_0))
    = f(Φ(y, λ u_1 + (1 - λ) u_0))
    = U_{λ u_1 + (1-λ) u_0} f(y).

The above lemma hence yields convergence of the bilinear K-ROM:

Corollary 4.5. The observations of the state of the full dynamical system are equal to the solution of (5), provided that we have convergence of the EDMD algorithm.

In a similar fashion to the switched systems approach, we can now easily reduce the numerical effort of (MPC) (i.e., the MPC problem with continuous inputs) by replacing the system dynamics by the bilinear surrogate model:

  min_{u ∈ [u_0, u_1]^p} Σ_{i=0}^{p-1} L̃(ψ(z_{i+1}))
  s.t. ψ(z_{i+1}) = (A + (u_i - u_0)/(u_1 - u_0) B) ψ(z_i), z_0 = z_s = f(y_s).   (K-MPC)

Convergence of the reduced problem now follows immediately from Lemma 4.4.

Theorem 4.6. Consider Problem (MPC) with a control system Φ(y, u) which depends linearly on u and the K-ROM based approximation (K-MPC). Assume that we have convergence of EDMD towards the Koopman operator according to [31]. Then the solution of (K-MPC) converges to the solution of (MPC).

Proof. The claim follows directly from Lemma 4.4 and Corollary 4.5.

Remark 4.7. (K-MPC) is a bilinear control problem. Efficient algorithms specifically tailored to this problem class exist, see, e.g., [43,42]. An alternative to using MPC would be, for instance, to solve a state-dependent Riccati equation [44] in order to obtain a closed-loop controller.

Remark 4.8. Lemma 4.4 and Theorem 4.6 are only valid if the control system Φ(y, u) depends linearly on u, which is not always the case. (In [30], the influence of nonlinear control dependencies has been studied.) In these situations, a way to reduce the inaccuracy of the bilinear system (5) is to introduce multiple bilinear K-ROMs which are valid locally, which is inspired by similar concepts in the reduced-basis community, see, e.g., [45]. Consequently, the linear interpolation is performed between two operators which are less far apart in terms of the control input. This means that we approximate several Koopman operators U_{u_j} corresponding to u_0 < u_1 < ... < u_{n_c-1}. The K-ROM then consists of several locally valid bilinear models:

  ψ(z_{i+1}) = (U_{u_j} + (u_i - u_j)/(u_{j+1} - u_j) (U_{u_{j+1}} - U_{u_j})) ψ(z_i) for u_i ∈ [u_j, u_{j+1}].   (6)

Note that due to this, the control system is continuous and piece-wise smooth, with possible kinks at u_1, u_2, ..., u_{n_c-2}, which has to be taken into account by the optimization routine solving (K-MPC).

5 On the influence of the amount of data and the selection of basis functions

In this section, we study the two examples already mentioned in Table 1 in more detail. Since the convergence result for EDMD only holds for infinitely large dictionaries Ψ as well as infinitely many data points, the assumptions of the convergence theorems are obviously not satisfied in a practical setting. This means that we need to investigate the influence of the basis functions used in the construction of the dictionary Ψ as well as the impact of the amount of training data on the controller performance. Furthermore, we compare the two K-ROM approaches against each other and against the full solution. The latter is only possible for the Burgers equation, as the numerical effort for solving the Navier-Stokes based MPC problem is prohibitively large.

5.1 Test cases and reference setup

Here, we first introduce the two test cases and validate the K-ROM approach using one particular numerical setup. All algorithms except the Navier-Stokes simulations are implemented in Matlab.
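Returning briefly to Remark 4.8, the locally valid interpolation (6) amounts to locating the bracketing input pair and interpolating the corresponding operators; a sketch with stand-in matrices:

% Locally valid bilinear K-ROM, cf. (6): pick the interval [u_j, u_{j+1}]
% containing the current input and interpolate the two EDMD matrices.
us = [-2, 0, 2];                               % sorted inputs (placeholders)
k  = 7;
Uop = {0.90 * eye(k), 0.92 * eye(k), 0.94 * eye(k)};   % stand-in matrices
u  = 1.3;                                      % current continuous input
j  = find(us(1:end-1) <= u & u <= us(2:end), 1);       % active interval
alpha = (u - us(j)) / (us(j+1) - us(j));
Uloc  = (1 - alpha) * Uop{j} + alpha * Uop{j+1};       % local operator
% the K-ROM update is then psi(z_next) = Uloc * psi(z)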
For the switched-systems optimization, all possible n_c^p inputs τ are evaluated, as motivated in Section 4.1. For the bilinear K-ROM, the Matlab function fmincon is applied, which uses a sequential quadratic programming (SQP, see [46]) approach.

5.1.1 The 1D Burgers Equation

We now compare the solutions obtained by the full control problems and their K-ROM approximations, respectively, using the problem setup introduced in Section 4.1.1. For the switched system approach, we choose the inputs u_0 = -0.075, u_1 = 0.075, and u_2 = 0. For the bilinear surrogate model, we use u_0 and u_1 to construct A and B. For the data collection process, we use three different initial conditions, and for each of these, we perform one simulation with constant inputs u_0, u_1 and u_2, respectively. Finally, we perform an additional simulation with a constant switching sequence between the inputs. This yields 12 simulations in total, each of which is 60 seconds long, and we collect a snapshot every 0.005 seconds. The switched sequences are then split into three matrices according to which input is active during which time step. These data points are then attached to the respective snapshot matrices with constant inputs. The time step h for the flow map Φ in the control problem as well as for the approximation of the Koopman operator is set to 0.5 seconds.

The performance of the reduced approaches is visualized in Figure 4 (for a prediction horizon of length p = 3), where the switched approaches are compared in the left column and the continuous ones in the right column. We see that in the switched systems approach, the inputs (Figure 4 (a)) vary significantly. This is due to the MPC framework. As soon as the two z trajectories differ slightly, the corresponding optimization problems do not necessarily possess the same optimal solution any longer. Consequently, small inaccuracies may result in different control trajectories. However, we see in Figure 4 (c) and (e) that the value of the objective function is of comparable quality. In some parts (e.g., at t ≈ 10 s), the ripples around the reference trajectory (Figure 4 (g)) are slightly larger than in the PDE-constrained case (cf. Figure 3 (f)). Nevertheless, it can be concluded that the switched systems K-ROM approach is very well suited for real-time control of the Burgers equation. Looking at the continuous K-ROM, we see that we have an even better performance (cf. Figure 3 (d)), as can be expected due to the larger freedom in choosing the input to the system. We see that the difference between the PDE based and the K-ROM based solutions is similar to the switched systems case. However, a significant advantage is that we now require only data for two autonomous systems instead of three. This means that the data requirements can be further reduced by 33%. For the same reasons as in the switched systems case, we do not have a very good agreement between the optimal control trajectories. We will further study this effect in Section 5.2.

5.1.2 The 2D Navier-Stokes equations

The second example is the flow around a cylinder described by the 2D incompressible Navier-Stokes equations at a Reynolds number of Re = 100. The flow is controlled via rotation of the cylinder, i.e., u(t) is the angular velocity. Without control, the well-known von Kármán vortex street occurs. Similar to the Burgers example, we do not observe the full state. Since we want to control the vertical force on the cylinder (i.e., the lift), we directly observe the lift coefficient C_l.
In addition, we observe the drag coefficient C_d and the vertical velocity at six points (x_1, ..., x_6) in the cylinder wake (see Figure 5 (a)), which yields the following observation:

  z(t) = (C_l(t), C_d(t), y_2(x_1, t), ..., y_2(x_6, t)).

As already mentioned, we want to influence the lift by rotating the cylinder. Since the lift coefficient is one of the observables, we simply have to track the corresponding entry of z in the MPC problem. Here, we introduce three autonomous systems with the constant cylinder rotations u_0 = -2, u_1 = 0, u_2 = 2 for both K-ROM approaches. The data is collected from one long-time simulation over 3000 seconds with random switching. As the lag time, we choose h = 0.25. The fact that we have three autonomous systems means that we use the localized ROM concept (6) for the bilinear model.

The MPC solutions to both reduced problem formulations (both with a prediction horizon of length p = 5) are compared in Figure 5. We see in (b) a comparison between a PDE simulation and the bilinear K-ROM, and very good agreement is observed despite the significant dimension reduction. In Figure 5 (c) and (d) the two K-ROM based solutions are compared, and we observe almost equal quality of the solution. Interestingly, the switching approach is superior to the bilinear K-ROM. The reason for this is likely that, due to the localized K-ROM approach, the objective function possesses many non-smooth kinks for p = 5. Consequently, the true optimum is difficult to compute numerically without using algorithms specifically tailored to continuous, piece-wise smooth problems. Furthermore, inaccuracies introduced via linear interpolation become more significant for longer prediction horizons, i.e., for larger p.

5.2 Data sampling and basis selection

We have seen in the previous section that both approaches are capable of controlling complex PDE-constrained problems in real time. As already mentioned, the convergence result for EDMD only holds for infinitely large dictionaries Ψ and infinitely many data points. Consequently, we now study the influence of different numerical parameters on the solution quality, since these assumptions are not met in a practical setting. In Figure 6, the influence of the maximal order of the monomial basis and the size of the training data set is visualized for the Burgers example. In accordance with Section 5.1.1, we have taken three K-ROMs for the switched systems approach and two for the bilinear model. The order of the polynomials directly influences the dimension of the K-ROM, such that the speedup crucially depends on this choice, see Figure 6 (a), where we have speedup factors of approximately 200 for a maximum order of 1 (i.e., standard DMD). This factor then reduces to roughly 50 for polynomials up to order 5. However, we see that for both K-ROM approaches, it is sufficient to consider monomials of order 2. Another benefit of these lower-dimensional K-ROMs is that the amount of required data is smaller. The larger the surrogate model is, the more data we need to compute satisfactory approximations of the Koopman operator. In fact, it appears that the standard DMD approach is the most robust concerning the amount of training data. When comparing Figure 6 (b) and (d), where the distance between the K-ROM approaches and the continuous PDE-constrained MPC problem is compared, we see that, surprisingly, the switched systems approach yields a feedback behavior of similar quality compared to the bilinear K-ROM.
Note, however, that the amount of training data is smaller for the bilinear model. When studying the Navier-Stokes example, the picture is very similar, cf. Figure 7. Since the PDE-constrained solution is not available, we here plot the integrated objective function value of the PDE model: J̄ = ∫_{t_0}^{t_e} J(t) dt. Similar to the previous example, the DMD based version appears to be the most robust concerning the data requirements. Again, both approaches are very similar in performance, and considering monomials of order larger than two is significantly inferior.

Remark 5.1 (Influence of the number of K-ROMs). As a final experiment, we study how the number of localized K-ROMs influences the solution. To this end, we revisit the Burgers example with three different discretizations of u. We consider the cases with two, three, and five inputs at which data is available. We see in Figure 8 that no real trend can be identified. It is interesting to see, however, that no improvement can be observed when increasing the number of control inputs from three to five. On the contrary, increasing the number of ROMs may even be disadvantageous, in particular if the amount of data is insufficient for a good approximation of the Koopman operator. In conclusion, a small to moderate number of control inputs in combination with the classical DMD approximation appears to be the most efficient and robust choice for the problems studied here.

6 Online updates using sensor data

During the MPC algorithm, new sensor data is obtained at every sample time h. Furthermore, these data points are collected in regions of state space which are of particular interest, since they are close to the desired state. Therefore, it is a natural idea to use this data to further improve the quality of the K-ROM. To this end, we make use of the idea developed in [32], where incremental updates of the DMD approximation are performed in such a way that we do not have to store data points that have already been taken into account. We make use of the transformation U = (G^+ A)^T with G = Ψ_X Ψ_X^T and A = Ψ_X Ψ_Y^T, see [20,22] for details. We see that in order to update U, we merely have to store the (in our case low-dimensional) matrices A and G. Each time we obtain a new snapshot pair (z_{m+1}, z̃_{m+1}), we can update the EDMD approximation via weighted rank-one updates of A and G, where q ∈ N_{≥1} is a weight parameter. Note that we have to set q = 1 in order to obtain the standard EDMD procedure. Alternatively, we can determine q in such a way that the update has a higher impact, e.g., by some prescribed percentage ε. The most expensive part of this update is to compute the pseudo-inverse of G. However, since the K-ROM dimension is generally low, it can be computed efficiently.

We now apply this procedure to the Navier-Stokes example. As we have seen in Section 5.2, the standard DMD approximation is most stable in the low data limit. To illustrate the data efficiency of our approach, we consequently choose DMD for the K-ROM computation in this section and start with only 50 data points for each of the three autonomous systems u_0 = -2, u_1 = 0 and u_2 = 2. We set ε = 0.025 and store the matrices A and G and the sensor data that we measure during the MPC algorithm. We then update the three autonomous systems every 10 seconds. Note that this approach is only viable for the switching approach (K-MPC_s), since otherwise we collect data at intermediate control values for which we do not have a reduced order model. For this approach, the setup proposed in [26,28] would be more appropriate.
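The extracted text does not reproduce the update formulas from [32]; the Matlab sketch below shows one plausible weighted rank-one form consistent with the description above (qw plays the role of the weight q, with qw = 1 recovering the standard EDMD procedure).

% Rank-one refresh of the EDMD approximation from one new snapshot pair,
% storing only the small matrices Gm = Psi_X * Psi_X' and Am = Psi_X * Psi_Y'.
function [U, Am, Gm] = krom_update(Am, Gm, psi, znew, ztnew, qw)
px = psi(znew);                  % lifted new observation
py = psi(ztnew);                 % lifted shifted observation
Gm = Gm + qw * (px * px.');      % weighted rank-one updates
Am = Am + qw * (px * py.');
U  = (pinv(Gm) * Am).';          % refreshed K-ROM matrix, U = (G^+ A)^T
end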
Figure 9 shows the results for the lift tracking problem, where we compare the online adaptation with the pure offline training considered before. For the comparison, we consider a longer trajectory of 500 seconds with z_opt(t) = 2.5 sin(t/4) cos(t/20). In order to evaluate the online procedure, we compare the objective function value integrated over the past 10 seconds, i.e., over the period with the current K-ROM approximation: Ĵ(t) = ∫_{t-10}^{t} J(s) ds. We see that after the first four to five updates, we already have a very good agreement. Consequently, a K-ROM based controller can be set up with a very small amount of data and then be updated regularly. After a short period of time, the performance is equal to that with an extensive offline phase. It should be noted that this approach is only applicable if the system cannot stray arbitrarily far from the reference trajectory. Otherwise, it could happen that a strong deterioration occurs from the beginning. Consequently, the resulting training data would not be suited for improving the control performance.

7 Conclusion

We have presented two methods for solving PDE-constrained optimal control problems using the Koopman operator for significant speedup. In order to increase the data efficiency, we only collect data for a small number of constant inputs. The corresponding control problem then becomes a switching problem. Alternatively, we obtain a bilinear surrogate model via linear interpolation between two K-ROMs. Based on a recent convergence result for EDMD, convergence of the K-ROM based optimization problems can be shown. Extensive numerical studies show the applicability of both methods to nonlinear PDE-constrained problems. Interestingly, a simple DMD approximation of the Koopman operator is beneficial both for the solution quality and for robustness with respect to the amount of training data. The reason for the good performance is that predictions only have to be accurate for a small number of time steps. Finally, an extension to online adaptation using sensor data has been presented and validated. This way, we can set up real-time controllers with very limited data. For future work, it will be interesting to validate the methodology in experiments. First results for ODE-constrained problems (electrical drives) are promising [48]. Furthermore, the system dynamics of the examples considered here are fairly well-behaved. Consequently, it will be of great interest to study the presented approaches for more complex dynamical systems, e.g., for decreasing viscosity / increasing Reynolds numbers.
2018-06-26T10:59:52.000Z
2018-06-26T00:00:00.000
{ "year": 2018, "sha1": "2de95f9975378d1b575d283fef4404410358dec1", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1806.09898", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "2de95f9975378d1b575d283fef4404410358dec1", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
53039004
pes2o/s2orc
v3-fos-license
Effect of Two Transport Options on the Welfare of Two Genetic Lines of Organic Free Range Pullets in Switzerland

Simple Summary

Animal welfare has been of increasing interest to consumers and producers of animal products in Europe. Issues during transport affect both the wellbeing and the productivity of livestock. This study was conducted to analyze two practice-oriented transport variants of organically mixed-held white and brown pullets. No significant difference could be found between the transport variants. Instead, we discovered clear differences between the two genetic pullet lines.

Abstract

The welfare of two genetic lines of organic layer hen pullets, H&N Super Nick (HNS) and H&N Brown Nick (HNB), was compared during two commercial transport variants of 15 flocks of mixed-reared birds. Birds were either transported overnight (with a break in travel), or were transported direct to the layer farm (without a break in travel). Samples of feces were collected non-invasively from 25 birds of each genetic line per flock for each transport variant before transportation to evaluate baseline values of glucocorticoid metabolites, and at 0 h, 3 h, 6 h, 10 h, 24 h, 34 h, 48 h, 58 h, and 72 h after the end of transportation, to measure transportation and translocation stress. We assessed the fear toward humans with the touch test before transportation, and we checked the birds' body condition by scoring the plumage condition and the occurrence of injuries. Body weight before and weight loss after transportation were determined, and ambient temperature was measured before, during, and after transportation. Stress investigations showed no significant differences between the transport variants (effect: -0.208; 95% confidence interval (CI): (-0.567; 0.163)). Instead, we discovered differences between the pullet lines (effect: -0.286; 95% CI: (-0.334; 0.238)). Weight loss was different between the transport variants (2.1 percentage points; 95% CI: (-2.6; -1.5)) and between the genetic lines, as HNB lost significantly less weight than HNS (0.5 percentage points; 95% CI: (0.3; 0.7)).

Introduction

The World Organization for Animal Health provides a reference document [1] of international standards for animal health and zoonosis; it grants animals kept under human care the internationally recognized "five freedoms" of welfare, described as follows: freedom from hunger, thirst, and malnutrition; freedom from discomfort; freedom from pain, injury, and disease; freedom to express normal behavior; and freedom from fear and distress.

Table 1. Comparison between Switzerland, Germany and Austria regarding maximum stocking densities of pullets and laying hens per pen, as well as minimum space requirements and maximum duration in the transit of pullets according to transnational (European Union, EU), federal, and label-specific regulations. STS = Swiss Animal Protection ("Schweizer Tierschutz"). All numbers in square brackets are references.

Experimental Design

The study was conducted with organically mixed-held pullets of two genetic lines, H&N Super Nick (HNS) and H&N Brown Nick (HNB), of a commercial breeder and distributor (H&N International, Cuxhaven, Germany). Parent animals were imported to Switzerland and raised as organic laying hens. The experimental unit consisted of pullets and laying hens of 15 flocks, which were reared and kept according to the guidelines of Bio Suisse (Association of Swiss Organic Farming Organizations, Basel, Switzerland) [9] on free-range farms. Each rearing farm raised 4000 pullets, and farms of laying hens kept 2000 birds. The average ratio between HNS and HNB normally was 50:50 to 60:40.
The transport to the farm of laying hens took place at the age of 18 weeks. The study was based on two practically relevant commercial transport variants, with and without a break in transit, which were categorized according to distance and duration. Variant I, "transport overnight" (transportation performed with a break), was compared with Variant II, "direct transport" (transportation performed without a break). On average, 2014 birds were transferred on each transit. Each plastic crate (90.5 × 61.5 × 31.5 cm) was loaded with 16 pullets according to the Swiss Order on the Protection of Animals [4]. Because the start of loading also marks the starting point of stress, we defined the time from the beginning of loading until the end of unloading as "time in plastic crate" or transport duration. Thus, the average transport duration was 13.5 h for Variant I and 5.0 h for Variant II, whereas the mean journey time alone was 2.6 h for Variant I and 1.0 h for Variant II (time on the road). Loading regularly began at 7 p.m. The legally prescribed transport duration was never exceeded [4]. We timed our investigation to include winter, spring, and summer. Temperature was measured with HOBO U10 loggers (temperature data loggers, Onset Computer Corporation, Bourne, MA, USA) inside the stable at animal head height and, during transportation, inside the plastic crates at the upper edge. For both transport variants, temperature was recorded during the whole investigation period every 10 min per flock. Means of minimum and maximum temperature values during the testing period (January until July) for Variant I ranged from 11.1 to 32.3 °C (rearing farm), 1.9 to 34.7 °C (transportation vehicle) and 3.8 to 34.1 °C (farm of laying hens), and for Variant II from 7.6 to 26.3 °C (rearing farm), -8.9 to 28.8 °C (transportation vehicle), and 6.1 to 26.2 °C (farm of laying hens). Animal husbandry varied according to the individual farmer's management.

Corticosterone Monitoring

To examine the effects of transportation and translocation on stress in each flock, corticosterone levels were measured non-invasively by extracting metabolites from bird droppings. For each sampling time point, 25 pullets of each genetic line were randomly caught from different tiers of the dimmed barn. To enable the collection of individual, spontaneously voided droppings, the pullets were placed separately in cleaned and disinfected plastic crates, and marked on their legs with a pen (Edding-Egg-Color-Pen, Wunstorf, Germany). Samples were collected within 1 h of the experimenter entering the barn according to Rettenbacher et al. [21], who found a first major peak 1 h after a stress pulse in laying hens. One dropping per bird was collected immediately after defecation, put into frost-resistant plastic bags and frozen on dry ice at a usual temperature of -78.5 °C. Droppings were transferred to a freezer after sampling. To determine baseline concentrations, pullets were sampled at 9:00 a.m., two days before transportation. For measurements of transportation and translocation stress, further droppings in both variants were collected 0 h, 3 h, 6 h, 10 h, 24 h, 48 h and 72 h after transportation. Initially, the flocks had been sampled 9 h and 12 h (instead of 10 h) after transportation. However, at these time points, only a few (if any) birds defecated. To prevent an unworkable additional workload and a possible violation of the numerical limit of permitted experimental birds, we decided to collect samples 10 h after transportation.
Taking the circadian rhythm into account, flocks of Variant II were additionally sampled 34 h and 58 h after transportation. Altogether, 5751 droppings were collected and analyzed. In the laboratory, 0.5 g of each sample was suspended in 5 mL of 60% (v/v) methanol by shaking for 30 min on a multi-vortex (RapidVap, Labconco, Kansas City, MO, USA) [22]. When a smaller portion had to be used, an aliquot of methanol was added. After centrifugation (GS-6KR Centrifuge, Beckman, Krefeld, Germany) for 15 min, aliquots of the supernatant were diluted 1:10 with assay buffer, and concentrations of fecal corticosterone metabolites (CM) were determined with a cortisone enzyme immunoassay [21]. The applied method has been validated physiologically and biologically for chickens by Rettenbacher et al. [21,23].

Hen-Human Relationship: Touch Test

The level of fear of humans is an important determinant of the welfare of pullets and laying hens. Regular handling of pullets is fear-reducing [24], and positive additional human contact reduces the fear level of laying hens and positively influences their blood corticosterone levels [25]. In contrast, fear-inducing humans reduce the wellbeing of animals [26]. Accordingly, we tested each flock on avoidance and approach behavior by using the touch test of Raubek et al. [27] to assess the birds' reaction to an unfamiliar human. The test was performed with each flock by the same test person, who was unfamiliar to the flock before the test. The test person wore protective clothing such as a blue overall, plastic overshoes and a hair cloth. Tests were carried out in the roofed outdoor run area (winter garden) of the rearing farm. Entering the winter garden constituted the initial contact between flock and test person. The unfamiliar test person moved slowly, one step per second, through the winter garden, approached a group of at least three pullets, squatted for 10 s, and then counted all pullets within one arm's length around her. Thereafter, the test person tried to touch one bird after the other. The test was carried out until 33 groups had been examined. Any attempt to approach a group or squat down was counted, even if all pullets retreated from the test person [28].

Body Condition

The body condition of the birds was evaluated by scoring the plumage and integuments before and after transportation, following feces sampling and the touch test. The assessment basis was the LayWel grading scheme [29], modified according to Schwarzer et al. [30] for pullets. Plumage condition was divided into four degrees of severity (4 = no damage, 3 = 1-5 damaged feathers, 2 = >5 damaged feathers, 1 = plucked area >1 cm). A higher score equaled a better plumage condition. This was assessed on seven individually scored body areas, resulting in a maximum pooled score of 28. Damage to flight feathers and tail feathers and the presence of fault bars were evaluated separately with binary scores (0 = negative and 1 = positive). Out of a total plumage score of 28, scores of 11-14 or lower indicated poor feather cover, and scores of 18-20 or higher indicated good feather condition. Injuries were divided into three degrees of severity (0 = negative, 1 = Ø ≤0.5 cm, 2 = Ø >0.5 cm) on 10 individually scored body areas. Injuries of the comb, head and eyelid were evaluated separately with binary scores (0 = negative and 1 = positive).
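To make the pooling arithmetic concrete, a small sketch (the per-area scores are invented; the classification bands follow the scheme described above):

% Pool the seven per-area plumage scores (each 1..4, maximum 7 * 4 = 28)
% and classify the bird according to the bands given in the text.
areaScores = [4 4 3 4 2 4 3];    % invented scores for the seven body areas
total = sum(areaScores);
if total <= 14
    label = 'poor feather cover';
elseif total >= 18
    label = 'good feather condition';
else
    label = 'intermediate';
end
fprintf('total plumage score = %d (%s)\n', total, label);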
Live Weight

To check minimum body weight, which is 1300 g for HNS and 1479 g for HNB at the age of 18 weeks (according to the breeder and distributor H&N International), 50 numerically marked birds, 25 per line, were weighed during loading. To check a possible transport-related weight reduction due to water and feed withdrawal and "time in plastic crates," we compared body weights before and after transportation (following sampling) for each hen. Results were compared with a control flock that was not transported and was kept overnight in the winter garden without access to water and feed but free to move around; hens of the control flock were weighed in the evening (8:00 p.m.) and in the morning (7:00 a.m.). For evaluation, the weight of the same numerically marked bird was compared in each case. Weight determination was carried out with a BAT1 poultry scale (VEIT Electronics, Moravany, Czech Republic). The 50 hens of the transport study were divided into four transport crates. To ensure a regular number of 16 birds per crate according to the Swiss Order on the Protection of Animals [4], the transport crates were supplemented with non-weighed birds.

Statistical Analysis

For the statistical analysis, the relationships between the predictors (transport variant, layer line, flock) and the response variables (baseline CM concentration, CM concentration after transport, return to baseline value after 72 h, difference in plumage score before and after transport, and transport weight) were analyzed simultaneously using mixed-effects models. The flock was modeled as an unstructured random effect for the model constant (intercept), and the transport variant and the layer line were modeled as ordinary fixed effects. For the continuous responses (baseline CM concentration, CM concentration after transport, difference in plumage score before and after transport, and transport weight), normal distributions were chosen as observation models. For the binary outcome (return to baseline value after 72 h), a logistic regression model was used. Results from this analysis were expressed as odds ratios (OR). For baseline CM concentration, temporal progression was also considered by including time as an unstructured random effect (rather than a structured temporal effect, because of the very few, unequally distributed time points). Data were analyzed using the statistical programming language R [31]. All (generalized) mixed-effects models were estimated by the integrated nested Laplace approximation approach [32] within a fully Bayesian setup.

Results

Corticosterone Monitoring

Mean baseline concentrations of excreted CM in the examined flocks were 43 ng/g and 66 ng/g for Variant I and Variant II, respectively. Overall, birds of Variant II had higher baseline values than birds of Variant I (Table 2). However, the differences were not significant (effect: -18.6; 95% confidence interval (CI): (-45.3; 9.1)).
CM concentrations decreased rapidly during the 0-6 h interval after transportation. Variant I showed an increase during the 6-12 h interval, followed by a steady decline. Values for Variant II were slightly increased at 24 h, 48 h, and 72 h, and slightly decreased at 34 h and 58 h, the additional sampling times chosen to account for the circadian rhythm (Table 3).

Table 3. Minimum (Min), maximum (Max), and mean concentrations ± SEM of corticosterone metabolites (CM, ng/g) after transportation in two transport variants.

Variant  Hours after transport  Min  Max   Mean  SEM
I        0                      4    1337  173   22
I        3                      4    570   111   7
I        6                      3    440   61    7
I        9                      5    568   96    5
I        10                     4    459   69    6
I        12                     11   315   128   3
I        24                     4    786   92    4
I        48                     3    402   73    3
I        72                     2    245   64    3
II       0                      4    2215  323   11
II       3                      4    967   134   5
II       6                      3    992   112   3
II       10                     4    590   89    11
II       24                     5    632   95    4
II       34                     4    307   67    9
II       48                     5    462   86    5
II       58                     4    330   69    3
II       72                     4    456   74    2

Hen-Human Relationship: Touch Test

With the touch test, we evaluated the hen-human relationship based on the approach and avoidance behavior of the birds between flocks. To evaluate whether this behavior was reflected in the measured CM concentrations, we compared CM concentrations between hens that stayed an arm's length away from the examiner and those that could be touched. An increase in CM concentration by one unit (1.0 ng/g) resulted in a significantly greater number of hens that could be touched (effect: 0.004; 95% CI: (0.001; 0.006)). In addition, a few differences in approach and avoidance behavior between flocks were found: one flock (D5) had significantly fewer hens that could be touched compared with three other flocks (D2, D6, and N2).
Body Condition

The examined flocks of both transport variants showed an average plumage score of 24.62 ± 1.37 (mean ± SD) before and after transportation, indicating a good feather condition (maximum possible score = 28, for seven body areas with four degrees of severity). Flocks D4 and D5 of Variant II had a better plumage score after transportation than before (Figure 5). Altogether, we found no significant differences in the plumage condition of the body areas scored with four degrees of severity before and after transportation, regardless of layer line, transport variant, or flock, with one exception: HNB in comparison with HNS showed less plumage deterioration (−0.28, 95% CI: (−0.8; 0.25)) in Variant I, and greater deterioration (0.35, 95% CI: (−0.03; 0.73)) in Variant II. The total plumage score, including those body areas scored binarily (flight and tail feathers, fault bars), appeared to improve significantly after transportation compared with before transportation in Variant II (OR: 0.672; 95% CI: (0.53; 0.863)) but not in Variant I (OR: 1.454; 95% CI: (0.931; 2.218)). Integument injuries of body areas scored with three degrees of severity were not sufficiently variable in their distribution of characteristics, and only isolated injuries were found. The same applies to integument injuries of the eyelid (binary score). Comb and head (evaluated with a binary score) were scored positive in 13% and 4% of the cases, respectively. We could find no major differences in integument injuries before and after transportation.

Following the transit from the rearing farm to the farm of laying hens, the birds showed a weight loss of −2.9% ± 1.9% (mean ± SD). Comparing both transport variants, birds of Variant I lost significantly more weight (2.1 percentage points; 95% CI: (−2.6; −1.5)) than birds of Variant II. Regarding the layer lines, HNB lost significantly less weight than HNS (0.5 percentage points; 95% CI: (0.3; 0.7)). Considering the transport variants, differences in weight loss between layer lines were found solely for Variant II: HNS showed a higher weight loss (−2.38% ± 1.46%) compared with HNB (−1.3% ± 0.71%) (effect: −0.01603; 95% CI: (−0.02026; −0.01180)). Differences in relative weight losses between flocks hardly existed (Figure 6). None of the mean temperature variables on the rearing farm, the transport vehicle, and the farm of laying hens showed a significant effect on the change in body weight.
The target weight of the 18-week-old pullets, which is defined by the breeder and distributor H&N International, is set at 1300 g for HNS and 1479 g for HNB. It was not reached by all weighed hens: HNS hens weighed on average 1339 ± 102 g (mean ± SD), and HNB hens 1679 ± 156 g. With age included, no differences in reaching the target weight could be found within each layer line (Figure 7). The only significant effect was that of the layer line: the odds of a shortfall were on average 8.1 times higher for HNS (95% CI: (5.1; 12.7)).

Birds of the control flock, which were not transported and were kept in the winter garden overnight, free to move around without access to food and water, showed a mean weight loss of −5.9% (95% CI: (−6.3; −5.6)). HNB hens of the control flock lost significantly less weight (−5.4%; 95% CI: (−5.8; −5.0)) than HNS hens (−6.5%; 95% CI: (−6.9; −6.1)). Comparing the mean weights of all birds (study and control flocks), birds of the control flock showed on average a higher loss of −2.0% (95% CI: (−2.5; −1.6)) than birds of the study flocks.

Discussion

To the best of our knowledge, this is the first study comparing two practice-oriented transport variants for pullets over a period of 72 h via fecal CM. The aim of the study was to evaluate which of the two examined transport variants resulted in less pronounced stress responses of the birds. Measurements of CM, a reliable indicator of stress [17], showed no significant transport-specific difference between Variants I and II. Instead, we discovered significant differences between the layer lines in CM responses, touch test results, and weight loss.
Several studies have already shown differences between white and brown hens, for example, in plasma corticosterone responses after a treatment [33], in tonic immobility [34][35][36][37], or in results from other fear tests [38,39].

Corticosterone Monitoring

Baseline values of CM concentrations and values measured between 0 h and 72 h after transportation did not show significant differences between the two transport variants. Instead, we found significant differences between the two layer lines in both baseline values and values measured after transportation. However, other studies found similar baseline plasma corticosterone concentrations in brown and white layer lines [33,40]. One study analyzed translocation stress in ISA Brown (name of a hybrid) hens for 36 h after a 1-h transport and found the highest plasma corticosterone concentrations 4 h after transportation [23]. In contrast to this finding, the HNS and HNB hens of our study showed a rapid decrease in CM concentration during the first 6 h after transportation, but only a few had returned to baseline values by the end of the study, which might be due to the novel environment. HNS had higher CM levels than HNB at all times in almost every flock. Fraisse and Cockrem [33] reported similar results after 15 min of repeated handling: white hens of their study also showed higher corticosterone levels than brown hens, but only for plasma corticosterone, whereas fecal CM concentrations did not differ between layer lines. At 9 h and 12 h after transportation, CM concentrations in Variant I of our study showed an increase from 61 ng/g at 6 h after transportation to 96 ng/g and 127 ng/g, respectively, with an intermittent decrease at 10 h (69 ng/g), followed by a steady decline (Table 3). Samples at 9 h and 12 h were taken solely from Flocks N1 and N2; for Flocks N3 to N8, we reduced the sample collection to once, at 10 h after transportation. During the investigation period of 72 h, CM concentrations never fell below the value measured 6 h after transportation. In Variant II, we found slight fluctuations of CM concentrations at 34 h and 58 h (additionally taken samples), indicating natural variation due to the circadian rhythm over a 24 h interval. De Jong et al. [41] found, during a 24 h investigation on 5-week-old broilers fed ad libitum, a plasma corticosterone peak at 4 h of the 8 h light period and low plasma corticosterone levels during the 12 h dark period. This finding is contrary to the results from Variant II of our study, because samples at 24 h, 48 h, and 72 h were taken during the dark period (between 11:00 p.m. and midnight) and samples at 34 h and 58 h were taken during the light period (9-10:00 a.m.). Differences between individual flocks might be attributed to the so-called "passage effect": management and transportation processes, for example, differed. Further investigations are necessary to better understand these differences.

Hen-Human Relationship: Touch Test

The relationship between animals and humans is an important aspect of animal welfare. Additional contact with humans can positively influence the hen-human relationship [28]. Studies on laying hens showed that additional positive contact with a person resulted in reduced fear toward this person [25,28,42] and in a decrease of plasma corticosterone levels [25].
The pullets of our study behaved contrary to these findings: pullets with increased CM concentrations were more likely to allow touching by the test person than pullets with low CM concentrations. Four flocks deviated from the average test results; however, these flocks did not show deviations in any of the other study parts. We therefore cannot relate the tameness of these flocks to other test results of this study.

Body Condition

The examined flocks were in good condition before and after transportation, as measured by the sum of the body parts that were individually scored for plumage condition and integument injuries [43]. With regard to the temporal effect (before and after transportation), we noted a slight improvement of both the plumage and the integuments. The main reason is likely an insufficient sample size for representative estimates; observer deviations are also conceivable. Birds lose weight overnight even without transportation, with potential side effects such as increased corticosterone levels or heat stress, as the results from our control flock show. Birds of the control flock were able to move around in the winter garden without access to food and water, matching the lack of these resources for transported birds. "Time in winter garden" for the control flock was 11 h and was thus based on the "time in plastic crates" of Variant I. Several studies describe diurnal and seasonal weight fluctuations in wild birds (e.g., [44][45][46][47]) with amplitudes of 5-15% [44,45,47]. The weight loss of the hens of our control flock (−5.9%) falls within this range. However, a mean overnight loss of −3.9% ± 1.8%, as measured for the transported hens of Variant I, is lower than the loss measured in wild birds. Amplitudes in winter (long and colder nights) are higher than in summer [47]. The mean temperature during transport of the studied flocks was 16 °C, whereas the mean outside temperature for the control flock was 19.5 °C. Unfortunately, none of our other study experiments were performed on the control flock; explanations therefore remain speculative. Scholtyssek et al. [48] found a greater loss of weight in broilers with increasing durations of transportation (1.3%, 2.3%, and 3.1% after transit durations of 1.5 h, 3.0 h, and 4.5 h, respectively), whereas another study did not find weight differences between the control and 4 h transported treatment groups [40].

Conclusions

Our findings indicate that no significant differences exist between the two studied transport variants; this conclusion should be supported by further investigations. Considering the tested flocks, we can say that both transport variants exerted a similar level of stress on the birds. Significant differences between the two layer lines indicated that HNS hens would benefit from transportation in the short variant, whereas stress levels in HNB hens were similar in both variants. Nonetheless, we cannot say whether a longer time of transportation exerts more and longer-lasting negative impacts than a shorter period of transportation. Future studies comparing weight development or egg production and egg weight between both transport variants could help to answer the remaining questions.
Development of Oil Production Forecasting Method based on Deep Learning

Identification of quick declines in the desirable production fluids and rapid increases in the undesirable fluids are among the production problems of oil wells. The main purpose of this work is to develop a method that can forecast oil production with high accuracy, using Deep neural networks based on the flow-rate (debit) data of wells. In this paper, a hybrid model based on a combination of the CNN (Convolutional Neural Network) and LSTM (Long Short-Term Memory) networks, called CNN-LSTM, is proposed for the forecasting of oil production time series. The architecture of the proposed CNN-LSTM model is hierarchical: first, the CNN layer of the model is applied to the current time window; then, the relationship between the time windows is predicted by applying the LSTM. The challenges of time series prediction often come from the temporal continuity of every state. To overcome this problem, we try to predict temporal dependency within a certain time window, which is achieved by the application of the CNN algorithm. The efficiency of the proposed model is evaluated on the QRI dataset. The prediction accuracy of the method is tested with the RMSLE loss function, and the best results in the testing process are obtained by the proposed model.

Introduction

The existing oil production industry is built on outdated technologies. Such systems can lead to a decreased debit of oil production wells, increased costs of the oil production process, and so on. Under these conditions, an effective development of oil fields can increase the oil production and the oil transfer capability of reservoirs, extend the life cycle of oil fields, and is of great importance for economic efficiency. Identification of quick declines in the desirable production fluids and rapid increases in the undesirable fluids are among the production problems of oil wells. The application of Deep neural networks to such urgent issues can lead to successful results. Applying these methods to the field of oil production control and management can result in high efficiency in various tasks, such as the prevention of inefficient energy use, optimization of oil extraction time, control of equipment condition, and collection, storage, and processing of current and historical data. Recently, there has been a revival in the application of Deep neural networks in the oil industry by the world's leading oil companies. Chevron is one of several companies pointing out that Deep learning is the most appropriate technology for reservoir characterization: in 2016, researchers at Chevron led an effort to employ Deep neural networks and fuzzy kriging to analyse the viability of a reservoir in California's San Joaquin Valley [1]. From another point of view, researchers from the SAS Institute have found that merging traditional seismic analysis methods with Deep learning methods is a more effective tool for the efficient discovery of resources in the upstream. In [2], a supervised and unsupervised approach is proposed to characterize the reservoir on seismic profiles. Here, the seismic images are used as the input data of the proposed Deep Learning method. During the preprocessing phase, patches are created from larger image sets, which can reduce the number of features required to represent an image and can decrease the training time needed by algorithms to learn from the images.
In recent years, as Deep learning methods have gained great popularity in various research areas, researchers have started using CNNs to recognize high-level patterns in multidimensional time series data. CNNs have two key features: weight sharing and spatial pooling. These features have made the CNN a very useful tool for computer vision applications, in which the input of the CNN is 2-dimensional (2D) data. However, the CNN model is also applied to natural language processing and speech recognition problems, in which the input of the CNN is 1-dimensional (1D) data [3,4]. A CNN with one layer performs feature extraction from the input signals by applying the convolution operation. Usually, a 1D kernel function is used when the input contains one-dimensional time series data; in other words, the convolution operation is applied separately to each dimension of the data [5]. In addition, some hybrid models combining the CNN and LSTM models have been proposed for forecasting and classifying time series by learning complicated features [6]. Recently, researchers have conducted extensive investigations into the application of Deep learning methods to oil production forecasting. In [7], a method for forecasting oil production time series based on multilayer neural networks with Multi-Valued Neurons (MLMVN) is proposed, and the capability of the MLMVN model to predict the dynamics of the reservoir is demonstrated. A dataset consisting of monthly production data from 14 wells of oil fields located on the shores of the Gulf of Mexico is used to test the proposed forecasting model. In [8], the application of high-order neural networks (HONN) to water, oil, and gas production forecasting is considered. In [9], a natural gas prediction method based on neural networks is proposed. In [10], the application of multiple artificial neural networks (MNN) to the evaluation of the future production performance of oil wells based on time series of monthly production data is considered. An MNN is a group of single artificial neural networks (ANN) that cooperate with each other to solve a specific problem; each constituent network makes predictions for a different time period. The results obtained there show that the MNN model makes better long-term predictions than the single ANN model. In that work, the data for the experiments are taken from the Saskatchewan Energy and Field center. The activation function of the neural network is the sigmoid; since the value of the sigmoid function lies in the interval [0, 1], the data are normalized using the min-max algorithm. Deep learning methods such as CNN, LSTM, DBN, and others are also applied in the oil production forecasting field. Since the CNN is designed for short-term forecasting, it cannot model long-term time series sequences; this is related to the inability of CNN models, limited to convolution layers, to capture time series dynamics efficiently. To resolve this problem, LSTM networks with memory cells are used by researchers. The LSTM uses the concept of gates in the memory cells; this mechanism allows the network to learn which hidden states it needs to forget and which states it needs to refresh [11]. The LSTM can learn long-term temporal dynamics of data in the form of sequences and accepts unprocessed data as input.
Applying the LSTM to features extracted from unprocessed data by other Deep learning methods allows achieving higher forecasting efficiency [12]. In some approaches, the LSTM acts as the predictive block of the proposed models [13]. In [14], a combination of convolutional and recurrent layers is applied to the time series classification task, and high results are achieved. To ensure accurate forecasting of oil production in oil wells, it is expedient to use a combination [15] of the above-mentioned models. In this paper, a hybrid model based on a combination of the CNN and LSTM networks is proposed for the forecasting of oil production time series. Here, the CNN layer of the model is first applied to the current time window to extract features, and then the relationship between the time windows is predicted by applying the LSTM.

DEVELOPMENT OF OIL PRODUCTION FORECASTING METHOD

In the proposed model, the objective of the CNN layer is to extract features, and that of the LSTM layer is to perform prediction. The main contributions of this work are:
1. The effectiveness of Deep neural networks in oil production forecasting is investigated;
2. A new architecture containing improved Deep CNN and LSTM blocks is proposed for efficient forecasting of the oil production in wells;
3. The proposed model achieves high results in the forecasting process.

This paper consists of the following sections: Section 2 summarizes some of the methods used in oil production time series prediction. In Section 3, the architecture of the proposed CNN-LSTM model is provided. In Section 4, the dataset description is provided. In Section 5, the results of the comparative analysis of the proposed method with existing methods are described. In Section 6, the conclusion of this work is provided.

Each layer of a deep network can learn independently in a layer-wise pre-training procedure, which then provides a good initial approximation for running the backpropagation algorithm. Depending on the selected model, each layer may be an RBM or a CNN (Convolutional Neural Network) [20]. The Boltzmann Machine (BM) is a network of symmetrically connected stochastic binary units. The units are divided into two groups, describing visible and hidden states (an analogy with hidden Markov models). The states of the visible and the hidden neurons vary according to probabilistic activation functions. The Restricted BM (RBM) is a BM that has no connections between the hidden-layer neurons. Due to its special bipartite graph structure, the activation probabilities of the hidden-layer neurons can be computed exactly. If a sufficient number of neurons is used in the hidden layer, an RBM can generate any discrete distribution. The RBM is a key structural unit for constructing the Deep Belief Network (DBN). A DBN is a multilayer network [21] in which the lower layers form a sigmoid belief network and the upper layer is an RBM. The Deep BM (DBM) is sometimes used in the pre-training step instead of the autoencoder; its multilayer architecture is the main difference from the RBM. A CNN is a multilayered neural network with a special architecture to detect complex features in data; CNNs have been used in image recognition, powering vision in robots and self-driving vehicles. LSTM recurrent neural networks are capable of learning and remembering over long sequences of inputs. LSTMs work very well when the problem has one output for every input, as in time series forecasting.
However, LSTMs can be challenging to use when the problem has very long input sequences and only one output or a large number of outputs. Hybrid DL architectures integrate the generative and discriminative architectures; the Deep Neural Network (DNN) can be given as an example of a hybrid architecture. In [22], the DNN is a cascade of fully-connected hidden layers and often uses an RBM stack as a pre-training stage. The main purpose of this work is to develop a method that can predict oil production with high accuracy using Deep neural networks based on the debit data of wells. The oil and gas supply chain consists of three streams: upstream, which covers the exploration, development, and production of oil and gas; midstream, which includes the transportation of oil and gas by tanker; and downstream, which includes the refinement and sales processes. This paper researches the upstream level of the oil industry. The oil production processes are modeled on the basis of hydrodynamic numerical evaluation of the processes in the reservoir and a dataset containing historical data on the development of the oil fields, equipment characteristics, the time-varying geological characteristics of the reservoir, well operation modes, and well operation and break time. The data required to predict oil production are divided into the following groups:
1. Time and periodicity of information recording, determined by the recording time of the measurements.
2. Characteristics of injection wells. This group includes the load volumes, the acceleration, the well operation time, the coordinates and numbers of the wells, and so on.
3. Characteristics of the production wells. This group includes the volume of produced water and oil, liquid separation, debit of the wells by oil and gas, total debit, total production, operating time, coordinates of wells, number of production wells, and so on.

Depending on the input data, a number of Deep learning architectures have been proposed. One of the special research directions of the oil and gas industry is reservoir characterization. In [23], Deep neural networks are used to predict the properties of the oil reservoir, such as porosity, permeability, pressure-volume-temperature (PVT), depth, drive mechanism, structure and seal, diagenesis, well spacing, and well-bore stability. Some of these reservoir properties are used to detect drilling problems, determine reservoir quality, optimize reservoir architecture, identify lithofacies, and measure reservoir volume. Various methods that perform petroleum reservoir characterization based on the hybridization of different algorithms with neural networks are also reviewed there. In [24], the reservoir characterization issue based on neural networks is taken into consideration. In [25], by applying the kernel method to the Arps decline model, a new nonlinear multidimensional forecasting model, titled the nonlinear extension of the Arps decline model (NEA), is proposed. The base structure of the NEA model is the Arps exponential decline equation, and nonlinear combinations of the input time series are created by applying the kernel method to the model; it can effectively determine the nonlinear relation between the input time series and the oil production. To evaluate the effectiveness of the NEA model, the experiments in that work are conducted on data taken from oil fields in China and India. To improve the capability of the model, a combination of decline curve methods with intelligent methods is provided.
In [26], an oil well production model based on the MLP method and production data is proposed. In [27,13], LSTM-type recurrent neural networks are used in high-level pattern recognition and value forecasting tasks to study the temporal and sequential features of time series. While the above-mentioned methods provide good results in pattern recognition, they encounter great difficulties in recognizing temporal features as a sequence.

Proposed method

In recent years, the combination of CNN and LSTM layers has gained increasing attention [28,29,30]. Two types of combinations exist: the first group stacks separate convolutional and LSTM layers one after another; the second group integrates convolutions into the LSTMs or general RNNs. In this section, we introduce a CNN+LSTM Deep network for time series prediction. The architecture of the proposed CNN+LSTM Deep Learning model is described in Figure 1. Our model has two major components, the CNN layer and the LSTM layer, stacked from bottom to top; these layers respectively capture features from the sensor sequence within sliding windows and from the sequence of states. The algorithm of the proposed CNN-LSTM model for the prediction of oil production properties is as follows:

Step 1. Determination of the oil well characteristics. For a time series problem, the observation from the last time step (t−1) is used as the input (a sequence of historical values), and the observation at the current time step (t) is used as the output (the value at the next timestamp).
Step 2. Building training samples based on the CNN. In our case, 48 samples are used for the neural network training and 12 samples are used for forecasting. Note that there is no minimum or maximum for the training and testing sample size; in the proposed model, the number of samples can be of any size. Generally, a higher number of training samples can ensure better performance.
Step 3. Building the network of the proposed hybrid Deep Learning architecture. Input the oil production debit data into the constructed network and train the neural network on these data.
Step 4. After training the neural network, carry out the testing phase and find the required solution.
Step 5. Calculation of the loss caused by prediction.

Here, time series are used to perform the forecasting. A time series is a sequence of real-valued data points with timestamps generated by $D$ different sensor channels. The raw data $x_{t_i}$ at any timestamp $i$ is a multidimensional vector that can be described as a tuple of measurements. The challenges of time series prediction often come from the temporal continuity of every state. To overcome this problem, we try to predict temporal dependency within a certain time window; this is achieved by the application of the CNN algorithm.

Convolutional neural networks

Filtering time series data is an important tool to improve prediction performance, and a CNN allows the automatic creation of good filters. Although the success of CNNs emerged from the vision domain [31], they have also demonstrated potential in time series applications, for example, in activity classification [32]. One main difference from most classical filters is that convolutional filters are multivariate and thus combine inputs. We define the 1D convolution operator at some time step $t$ as

$$(k * x)_t = \sum_{i} k_i \, x_{t-i},$$

where $k$ is a kernel vector that depicts the filter.
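To make the operator concrete, the following is a minimal NumPy sketch of a "valid" 1D convolution with an optional stride; following the usual deep-learning convention, the kernel is applied without flipping, and the series and kernel values are illustrative assumptions rather than data from the paper.

```python
import numpy as np

def conv1d(x, k, stride=1):
    """'Valid' 1D convolution: the kernel k is slid over the series x,
    and an output is computed only where kernel and input fully overlap.
    (As in most deep-learning frameworks, the kernel is not flipped.)"""
    n_out = (len(x) - len(k)) // stride + 1
    return np.array([np.dot(k, x[i * stride : i * stride + len(k)])
                     for i in range(n_out)])

# Illustrative 3-tap smoothing kernel over a short, assumed series.
x = np.array([1.0, 2.0, 4.0, 3.0, 5.0, 6.0])
k = np.array([1.0, 1.0, 1.0]) / 3.0
print(conv1d(x, k))            # stride 1: four overlapping windows
print(conv1d(x, k, stride=2))  # stride 2 skips intermediate steps
```

The stride argument here plays the role of the parameter $s$ in the layer equation given next.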
Successively applied convolution operations lengthen the amount of time covered by a filter and lead to potentially more meaningful but abstract features. A CNN then consists of multiple convolutional layers and a dense layer at the end. The number of filters or neurons of a convolutional layer corresponds to the number of output channels. All filters are applied to the input series $X_t^{\delta} = (x_{t-(\delta+1)}, \ldots, x_t)$ with time frame length $\delta$ at time $t$:

$$(k * x)_t = \sum_{i} k_i \, x_{t - i \cdot s},$$

where the stride $s$ skips intermediate steps. This layer uses fewer trainable weights than a dense layer, which would learn on $X_t^{\delta}$ instead of just $x_t$, as the kernel applies to all time steps. In the proposed architecture, the structures of the individual convolutional subnets for the inputs $p_i$ at different times are the same. Assume that the input $p_i = \{x_{t_1}, x_{t_2}, \ldots, x_{t_l}\}$ is an $L_0 \times D_0$ tensor, where $L_0$ equals the sequence length $l$ of the sliding window and $D_0$ equals the number of sensor channels $D$. For each time interval $l$, the matrix $p_i$ is put into a CNN architecture. To learn temporal dynamics from the input $p_i$, 1D filters with shape $(k, 1)$ are applied. In this paper, the size of the filters in every convolutional layer is the same, and the convolution is only computed where the input and the filter fully overlap. For each convolutional layer, the model learns $f$ filters, through which the model gains more nonlinear functions and learns more global information about the current sequence, and uses ReLU as the activation function. The convolutional layer is not followed by a pooling operation, as the next LSTM layer requires a data sequence to process. The shape of the feature maps output by the $m$ convolutional layers is $(l - m(k-1)) \times f$. Here, we flatten the output matrix of the layers into a vector $V_i$, which is considered the feature representation of a high-level pattern.

LSTM networks

Unlike dense layers, the LSTM utilizes the temporal information of a series by sequentially processing the data. This also differs from convolutional layers, whose kernel weights are trained to find features throughout the time series. RNNs have a memory for previous inputs and outputs because they feed the previous output, through a hidden weight, back to themselves. In other words, the input series $X_t^{\delta}$ is sequentially processed vector by vector over an increasing number of time steps $\delta$. The LSTM is one of the types of RNN models. In an RNN, with an increasing number of time steps $\delta$ the gradients begin to vanish exponentially, which impedes the learning process; unlike the RNN, the gating mechanisms applied in the LSTM prevent the loss of far-away information. The LSTM is excellent at time series prediction tasks [33]. In an LSTM layer, the hidden weights $h$ get adjusted at every time step $\tau \in (t-(\delta+1), \ldots, t)$ by taking the elementwise product ($\odot$) of an output gate $o$ and the activation of a cell $c$. This cell determines how much of the previous cell is retained with a forget gate $f$ and adds it to the product of an input gate $i$ and an input modulation $j$. All gates consider the hidden weight from the preceding time step. We give a formal representation as follows:

$$i_\tau = \phi(W_{xi} x_\tau + W_{hi} h_{\tau-1} + b_i),$$
$$f_\tau = \phi(W_{xf} x_\tau + W_{hf} h_{\tau-1} + b_f),$$
$$o_\tau = \phi(W_{xo} x_\tau + W_{ho} h_{\tau-1} + b_o),$$
$$j_\tau = \phi(W_{xj} x_\tau + W_{hj} h_{\tau-1} + b_j),$$
$$c_\tau = f_\tau \odot c_{\tau-1} + i_\tau \odot j_\tau,$$
$$h_\tau = o_\tau \odot \phi(c_\tau),$$

with non-linear activation functions $\phi$ (tanh, sigmoid, relu), weight matrices $W$, and bias vectors $b$, whereas the relations to the gates are visible in the subscripts. As a result of the operation of the convolutional layers, the sensor data $p_i = \{x_{t_1}, x_{t_2}, \ldots, x_{t_l}\}$ in a sliding window have been processed into a feature vector $V_i$. All $n$ vectors $\{V_1, V_2, \ldots, V_n\}$ are concatenated into an $n$-row matrix $V$, which is the input of the LSTM layer.
Then, the output of the LSTM at every time step is passed into a ReLU output layer, which yields the prediction outcome $y_i$; thus, the input of this LSTM part is $V$ and the output is $Y$.

Dropout

Dropout has established itself as a new and reliable regularizer that helps to avoid overfitting [34]. Essentially, dropout deactivates a random part of the input for every neuron with probability $p$. Here, we define it formally as follows. Every weight matrix $W$ is now interpreted as a random matrix that contains the weight matrix $\hat{W}$ and a different dropout vector $z$ for every neuron:

$$W = \hat{W} \, \mathrm{diag}(z).$$

This dropout vector $z$ has its values randomly set to one instead of zero with a probability $p$; furthermore, the dropout vector changes at every training step. For LSTM layers, not only the input should be partially dropped but also the recurrent units; to that end, the formulation above is applied to the recurrent weight matrices as well.

Dataset description

The Deep Learning method proposed in this paper is tested on the dataset presented by QRI (Quantum Reservoir Impact). This dataset consists of data from seven drilling reservoirs; field name, formation type, well name, oil production, water production, and gas production are the features of this dataset. Each of the seven reservoirs consists of several wells, and each well has a name. Data on oil, water, and gas production from each well are recorded every month. In this paper, only oil production is considered. Each well is divided into smaller chunks (parts), and each of these chunks contains 60 points (the points represent months). Of these, 48 points are the input data, and the expected 12 months (the points to be predicted) are the outputs of the proposed system. To standardize the data of each chunk, they are normalized. In addition, the data are separated into training, validation, and test sets according to the oil well index, where the training data consist of 5041 chunks, the validation data of 1026 chunks, and the test data of 1144 chunks.

Efficiency assessment metric of the model (RMSLE)

In this paper, the Root Mean Squared Logarithmic Error (RMSLE) is used to evaluate the accuracy of the proposed method. RMSLE compares the predicted value with the true value and is calculated as the square root of the mean squared difference of the log-transformed values:

$$\mathrm{RMSLE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \big(\log(a_i + 1) - \log(b_i + 1)\big)^2},$$

where $n$ is the total number of observations in the testing dataset, $a_i$ is the predicted value, and $b_i$ is the actual value. RMSLE measures the error rate; according to this measure, a low RMSLE value indicates a better forecasting solution.

Experiments

The dataset is divided into three subsets: training, validation, and testing. The training and validation data consist of 5041 and 1026 chunks of monthly production taken from several wells, and the test data consist of 1144 chunks. The training data are used to train the neural network. The validation data are used to determine the effectiveness of the neural network on samples that are not used in the training process; training and validation are done simultaneously, and these two sets are used to configure the neural network parameters. The test data are used to evaluate the overall effectiveness of the model after the values of the neural network parameters are determined. A comparative analysis of the proposed CNN+LSTM method with the CNN and LSTM methods is conducted. The main purpose of the CNN+LSTM approach here is forecasting the monthly production over various periods, from 1 month to 48 months.
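As an illustration of the full pipeline, below is a minimal Keras sketch of the CNN+LSTM forecaster for this setting (48 input months, 12 forecast months), together with the sliding-window sample construction and an RMSLE metric. The settings stated in the paper (50 LSTM neurons, ReLU activation, SGD optimization, batch size 20) are used where given; the convolutional filter counts, kernel size, dropout rate, and the random placeholder series are illustrative assumptions, so this is a sketch of the approach rather than the authors' exact implementation.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def rmsle(y_true, y_pred):
    # Root mean squared logarithmic error, the evaluation metric of the paper.
    y_true = tf.cast(y_true, tf.float32)
    y_pred = tf.maximum(y_pred, 0.0)  # guard the log against negative outputs
    return tf.sqrt(tf.reduce_mean(
        tf.square(tf.math.log1p(y_pred) - tf.math.log1p(y_true))))

def build_model(in_steps=48, out_steps=12, channels=1):
    model = models.Sequential([
        # Convolutional feature extractor over the sliding window; "valid"
        # padding computes outputs only where input and filter fully overlap.
        layers.Conv1D(32, kernel_size=3, padding="valid", activation="relu",
                      input_shape=(in_steps, channels)),
        layers.Conv1D(32, kernel_size=3, padding="valid", activation="relu"),
        # LSTM predictor; recurrent_dropout also drops the recurrent units,
        # in the spirit of the Dropout section above.
        layers.LSTM(50, dropout=0.2, recurrent_dropout=0.2),
        # ReLU output layer yielding the 12-month forecast.
        layers.Dense(out_steps, activation="relu"),
    ])
    model.compile(optimizer="sgd", loss="mse", metrics=[rmsle])
    return model

def make_windows(series, in_steps=48, out_steps=12):
    # Sliding-window sample construction (Steps 1-2): 48 history points in,
    # the next 12 points out.
    X, y = [], []
    for i in range(len(series) - in_steps - out_steps + 1):
        X.append(series[i : i + in_steps])
        y.append(series[i + in_steps : i + in_steps + out_steps])
    return np.asarray(X)[..., None], np.asarray(y)

# Placeholder for one well's production series, assumed already normalized.
series = np.random.rand(600)
X, y = make_windows(series)
model = build_model()
model.fit(X, y, batch_size=20, epochs=5, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [mse, rmsle]
```

The final dense ReLU layer plays the role of the output layer described for the LSTM part, and the custom `rmsle` function mirrors the evaluation metric defined above.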
As a result of this analysis, the loss values of the LSTM, CNN, and proposed hybrid CNN+LSTM models are calculated based on the RMSLE metric and listed in Table 1. As seen from Table 1, the proposed hybrid CNN+LSTM model outperforms the other models: when predicting the 12 points in the test data, the RMSLE value of the CNN+LSTM model is 0.186891, whereas the CNN model reaches 0.187198 and the LSTM model 0.193466. RMSLE measures the degree of loss (error rate), so a smaller RMSLE value indicates a more accurate model. Hence, in the assessment of the oil productivity of the wells, the effectiveness of the CNN+LSTM method is better than that of the other algorithms. To better illustrate the results of Table 1, the loss dynamics of each model are depicted in Figure 2. As seen from Figure 2, when training the CNN+LSTM model, the loss value decreases smoothly from the first iteration to the last step and, at the end, reaches the lowest loss value among the compared methods. However, the loss values of the standalone CNN and LSTM models do not vary smoothly; in some cases, the loss value of the next iteration is bigger than that of the previous one. As the loss of the proposed hybrid CNN+LSTM model was lower than that of the other methods during forecasting, this model is considered to be more effective. To evaluate the significance of the methods, statistical significance tests are used; for this purpose, the methods are run 70 times on the QRI dataset (Table 2). As seen from Table 2, better results are achieved by the hybrid CNN+LSTM model. To verify the robustness of the prediction methods, a boxplot representation is created based on the values in Table 2 (Figure 3). As seen from Figure 3, the hybrid CNN+LSTM model has achieved better results than the other methods; because its values increase and decrease only with small leaps over the iterations, the boxplot of this method is tighter. The proposed hybrid CNN+LSTM model has predicted the future oil production data for the next 12 months very accurately. The forecasts of the hybrid CNN+LSTM model on data taken from several chunks are depicted in Figure 4, which shows the production dynamics of the oil well as a function of time. In general, oil production changes for a variety of reasons. For example, well-stimulation operations cause changes in the wellbore area, which leads to an increase in production. A decrease in production is usually caused by reduced pressure in the reservoir or by degradation of the mechanical condition of the production well. An effective way to slow down the degradation is to carry out additional recovery operations, such as water flooding. Another way to restore the pressure is to shut down the well for a certain period of time: production drops to zero during the closure of the well, usually begins to rise after reopening, but over time the decline is repeated. As seen from Figure 4, the constructed neural network predicts downward slopes, flat lines, and sudden upward jumps with high precision; most approaches cannot accurately predict sudden upward jumps in time series [35]. As also seen in Figure 4, the oil production rate increased considerably over a certain period of time and started to decline later.
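For the robustness check just described, the following is a hedged matplotlib sketch of the 70-run boxplot comparison; the per-run spreads are synthetic placeholders centered on the reported test RMSLE values, not the paper's actual run results.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Synthetic placeholder distributions centered on the reported RMSLE values;
# in the paper, each method is actually launched 70 times on the QRI dataset.
runs = {
    "LSTM":     rng.normal(0.193466, 0.0030, 70),
    "CNN":      rng.normal(0.187198, 0.0020, 70),
    "CNN+LSTM": rng.normal(0.186891, 0.0008, 70),  # tighter spread, as observed
}

plt.boxplot(list(runs.values()), labels=list(runs.keys()))
plt.ylabel("RMSLE over 70 runs")
plt.title("Robustness of the forecasting models (illustrative data)")
plt.show()
```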
To provide optimized training of the proposed multilayer neural network, numerous experiments at various values of the batch size parameter are conducted. The batch size is the number of training samples used in one iteration. At high values of the batch size parameter, the effectiveness of the model falls (Table 3 and Figure 5). In our work, the experiments are conducted with the following parameters: 50 neurons, a batch size of 20, the ReLU activation function, the SGD optimization function, and 1000 iterations.

Conclusion

In this paper, we proposed a novel prediction model for oil well production forecasting based on the hybridization of the CNN and LSTM models, called the CNN+LSTM model. The hybridization of these methods improved the forecasting capability of the above-mentioned Deep Learning methods. The case studies showed that the CNN outperforms the LSTM in accuracy, while both remain applicable to nonlinear forecasting problems in petroleum engineering. The impact of the batch size parameter and the regularization parameter was also tested in this study. The results show that a smaller batch size makes the CNN+LSTM model more accurate and produces smoother loss curves during training, whereas a larger batch size makes the CNN+LSTM unsuitable. These results indicate that with parameters such as 50 neurons, a batch size of 20, the ReLU activation function, the SGD optimization function, and 1000 iterations, the optimal solution is obtained. A limitation of this study is that we have not proposed an algorithm to compute the optimal parameters of the proposed CNN+LSTM method. In fact, the selection of these parameters is an open problem in Deep Learning research, and no algorithm for selecting the optimal parameters is available up to now. However, it will be interesting, and might be possible, to find an interval that contains the optimal parameters; once this interval is found, it will be more computationally efficient to select the optimal parameters for the CNN+LSTM model.
Postless Distraction Technique With No Additional Equipment in Hip Arthroscopy

Hip arthroscopy has become a predominant treatment for hip disorders such as femoroacetabular impingement syndrome and labral injury, and appropriate distraction of the hip joint is necessary for successful surgery. The traditional distraction method uses a perineal post but may cause complications such as perineal injury and nerve damage. For this reason, some surgeons have proposed postless distraction techniques, but these usually require the purchase of additional equipment at extra cost, which is not conducive to wide application. Therefore, we developed a post-free distraction technique without additional equipment. This method uses only surgical draw sheets, safety straps, a hip fracture table, and a hip distractor that are routinely provided in the operating room; postless hip distraction is achieved by using the Trendelenburg position, which is reliable, simple, and reproducible in hip arthroscopy.

Preoperative assessment is needed to judge the difficulty of distraction without a perineal post. Typically, patients who are female, have hip dysplasia, a greater body mass index, or multiple ligamentous laxity are easier to distract, whereas patients with osteoarthritis, joint stiffness, lower weight, or a large cam deformity are more difficult to distract.10 Preoperative images of the pelvis in the anteroposterior view, 60° Dunn view, and false-profile view are obtained to assess the extent of cam and pincer deformity and to recognize the presence of bony prominences that interfere with traction. Computed tomography 3-dimensional reconstruction and unilateral magnetic resonance imaging of the hip are also commonly performed to further assess the extent and degree of cam and pincer deformity and to evaluate injury of the labrum, cartilage, and other structures.

Preparation of Surgical Items

Before surgery, a few conventional safety straps, an abdominal belt, several cotton pads, a hip fracture table, and a hip distractor are prepared. One safety strap is wrapped in cotton pads to be used in the inguinal region (Fig 1). The table is covered with a surgical draw sheet to prevent slippage during the traction (Fig 2).
Anesthesia and Positioning

The patient is administered general anesthesia, and muscle relaxants are used to enhance the effect of traction. After that, the patient is placed supine on the surgical bed with the contralateral upper extremity fixed on an abductor frame and the ipsilateral upper extremity suspended and fixed on a brace at the front of the chest. The patient's trunk is in contact with the draw sheet on the surgical bed, and the buttocks are positioned at the end of the bed (Fig 3). The bony prominences of both feet and ankles are wrapped in cotton pads and placed in the traction boots, with the contralateral lower limb fixed at 45° of abduction and the affected lower limb straightened in 15° of internal rotation. The patient's inguinal region on the contralateral side is secured to the surgical bed with a thickened safety strap wrapped in cotton pads to act as a partial counterbalance to the distraction. An abdominal belt is placed around the abdomen to keep the patient's back close to the bed, which increases resistance and protects the patient from falling off the surgical bed (Fig 4). Be careful not to fix the belt too tightly, to avoid compression of the abdomen. When the aforementioned preparations are complete, the surgical bed is slowly adjusted into the Trendelenburg position so that the patient's head is low and the feet are high by approximately 10° (Fig 5), using the frictional force generated by gravity to counteract the traction force.

Traction Test

Gradually increase the traction force on the affected lower extremity distally, paying attention to the patient's body displacement to ensure that the patient is not pulled off the surgical bed by the continuously increasing traction. When the affected limb is tense and it is difficult to lift the knee upward, the tension is adequate and the effect of traction can be verified by fluoroscopy (Video 1). The traction is considered satisfactory if the joint gap can be sufficiently distracted to 8 to 10 mm under fluoroscopy. If the joint gap does not reach 8 mm, the traction is not sufficient and the force needs to be increased. At this time, we need to observe whether the patient's body shows obvious displacement; if this happens, check whether the safety straps are fixed firmly. The angle of the Trendelenburg position can be gradually increased until the effect is satisfactory.

Distraction Procedure

Disinfection and sheeting are routinely performed, and traction is gradually applied to the affected lower extremity. The degree of body displacement of the patient needs to be observed at this point. The traction is fixed when the lower extremity is tense and reaches the tension of the traction test. Fluoroscopy is used to verify whether the joint gap is wide enough (Fig 6), and if the width is insufficient, fine adjustment of the traction is performed until the gap is satisfactory.
Hip Arthroscopy

The anterolateral portal, midanterior portal, and modified distal anterolateral portal are established under fluoroscopy. Exploration and debridement of the central compartment, trimming of the acetabular rim, and repair of the labrum are performed under distraction. Subsequently, the traction is slowly released, the hip is flexed to 30°, the cam deformity is removed, and capsule suturing is performed in the femoral head-neck junction area. Because there is no blockage by a perineal post, the affected hip is able to perform adduction, abduction, internal rotation, and external rotation more freely at the time of cam removal and joint capsule suturing.

Postoperative Rehabilitation

Postoperative observation is required for numbness and pain in the perineal area, as well as for distraction complications such as muscle weakness and numbness in the lower extremity. The rest of the rehabilitation process is the same as that of conventional hip arthroscopy. After surgery, the affected limb wears a neutral positioning shoe to prevent external rotation of the hip joint, and functional exercises such as ankle pumps and core muscle strength training are started on the second day after surgery. The range of motion of the hip joint is controlled within 90° of flexion to prevent adhesions, and partial weight-bearing walking exercises are performed with crutches until 4 weeks after surgery.

Discussion

Hip arthroscopy techniques have been widely used in the treatment of hip disorders such as femoroacetabular impingement since the beginning of the 21st century.11,12 Currently, most hip arthroscopic procedures are performed with a perineal post against distraction in the supine position.3 However, the use of the perineal post can lead to prolonged compression of the perineum, which may cause nerve disorders, impaired circulation, and even persistent tissue ischemia, increasing the risk of pressure injury.13 Park et al.14 found that the incidence of pudendal nerve dysfunction was 2.0% in 200 patients undergoing hip arthroscopy, and although most of the injuries were mild and recovered within a few months, they still caused great distress to the patients.3 One study showed that the operation time and postoperative pain with the postless technique were similar to those of perineal post distraction, but the hospital stay was shorter in the postless group.15
Another study showed that there was no significant reduction in venous blood flow or neurologic changes in the affected limb, muscle tissue damage was subclinical and transient, and no cases of perineal injury were observed in the postless group during the study period.16 For these reasons, the use of perineal post-free distraction is increasingly supported in current opinion,17 and many surgeons have made improvements to this technique. Salas et al.18 reported the "Tutankhamun technique" without the use of the perineal post, but the maneuvers were complicated, excessive restraint of the upper extremities and chest tended to interfere with anesthesia, and rehydration through upper extremity veins was not available. Kollmorgen et al.19 and Perry et al.20 reported a technique for perineal post-free hip arthroscopy with a dedicated commercial pink pad positioning device placed on the surgical bed, but the cost of the equipment is relatively high. Salas et al.13 reported a distraction technique without a perineal post using a yoga mat to increase friction, which is less expensive but also requires additional equipment. All of the aforementioned techniques have limitations in application because of the need for additional equipment and materials.

Table 1. Pearls and Pitfalls

Pearl: The inguinal safety strap and abdominal belt should be placed in the proper position and secured firmly.
Pitfall: Poorly secured safety straps may cause the patient's body to slide and thus slip off the bed.

Pearl: The safety straps in the inguinal region need to be wrapped with thickened cotton pads to reduce local compression.
Pitfall: Overtightening of the abdominal belt may interfere with the patient's abdominal breathing.

Pearl: Attention should be paid to extra cotton padding protection for bony prominences such as the ankles and heels.
Pitfall: Inadequate protection may cause local pressure injuries.

Pearl: Adjust the head-low-foot-high angle according to the ease of distraction; generally 10-15° is needed.
Pitfall: Too small an angle may lead to difficulty in distraction; too large an angle may increase the cardiopulmonary burden and raise intracranial and intraocular pressure.

Pearl: Perform the traction test before disinfection to ensure adequate joint space distraction.
Pitfall: Avoid repeated adjustment of position due to inappropriate traction after disinfection and sheeting.

Pearl: Observe the sliding of the patient's body during distraction.
Pitfall: Risk of falling from the surgical bed due to body sliding.

Table 2. Advantages and Limitations

Advantages:
- Avoids complications of perineal injury.
- No additional instruments are required and no additional cost; the operation can be performed with ordinary surgical sheets and safety straps in the operating room.
- Simple, with high reproducibility.
- Suitable for patients of any weight and for most hip disorders; the central compartment is sufficiently exposed for regular operations; no perineal post blockage allows for free adduction, abduction, internal rotation, and external rotation during cam resection and joint capsule suturing.

Limitations:
- Patients with protrusio acetabuli and joint stiffness may have difficulty with traction.
- When the angle of the Trendelenburg position is too large, it may increase the cardiopulmonary burden; patients with significantly elevated intracranial pressure, intraocular pressure, or severe gastroesophageal reflux should be operated on with caution.
- Potential risk of the patient falling from the surgical bed.
To avoid the complications caused by the perineal post while not adding extra devices, we have improved the postless distraction technique (Table 1). This technique achieves effective traction of the hip joint through the friction between the patient's body and the surgical bed in the Trendelenburg position, together with the counterforce generated by the safety strap on the contralateral thigh. In this way, operations such as chondroplasty, labrum suturing, and trimming of the acetabular rim can be accomplished effectively. In our experience, a Trendelenburg position of 10° is sufficient to achieve satisfactory distraction; even for a lighter-weight patient who is difficult to distract, a position of no more than 25° is adequate for effective distraction.

This technique also provides good safety (Table 2). Although there is a theoretical risk of the patient falling off the surgical bed, in practice it is usually difficult to distract the patient to the point of slipping because of the presence of the abdominal belt, and this risk can be further avoided by having an assistant watch the position of the patient's body during distraction. It has been reported that the Trendelenburg position may increase the cardiopulmonary burden, intracranial and intraocular pressure, and the degree of gastroesophageal reflux when the angle is too large.21 This technique usually uses a small head-low-foot-high angle (10-15°). We observed no significant influence on respiratory or circulatory status under anesthesia and no complications of the kind described previously.

This technique also has some limitations. First, it may be difficult to obtain traction in patients with specific disorders. Meek et al.22 reported a patient with protrusio acetabuli in whom portals were finally established using perineal-post distraction after several unsuccessful attempts with the post-free technique. Second, the technique should be used with caution in patients with poor cardiopulmonary function, high intracranial or intraocular pressure, or severe gastroesophageal reflux, in whom the angle should not be increased too far.

In our experience of more than 200 operations, we have not encountered any failure of distraction using the postless technique, nor any adverse events due to this method. The technique is effective in avoiding complications due to compression of the perineum, does not require any additional equipment, is simple to perform and easy to learn, and is worthy of application in hip arthroscopy.

Fig 1. (A) A few safety straps, an abdominal belt, and several cotton pads are prepared before surgery. (B) One safety strap is wrapped in cotton pads to be used in the inguinal region.

Fig 2. (A) A hip fracture table and a hip distractor are prepared. (B) The table is covered with a surgical draw sheet.

Fig 3. The patient is placed supine on the surgical bed with the trunk in contact with the draw sheet and the buttocks attached to the end of the bed. The contralateral (left) lower limb is fixed at 45° of abduction, and the affected (right) lower limb is straightened in 15° of internal rotation. The contralateral upper extremity is fixed on an abductor frame, and the ipsilateral upper extremity is suspended and fixed on a brace at the front of the chest.

Fig 4.
(A) The patient's inguinal region on the contralateral side is secured to the surgical bed by the safety strap wrapped in cotton pads to act as a partial counterbalance to the distraction. (B) An abdominal belt is used around the abdomen to keep the patient's back close to the bed. (C) Bony prominences of both feet and ankles are wrapped in cotton pads and placed in the traction boots.

Fig 5. Patient in the supine position, with the right hip shown. (A) The surgical bed is slowly adjusted to the Trendelenburg position so that the patient's head is low and the feet are high, by approximately 10-15°. (B) The Trendelenburg position after disinfection and sheeting.
Relationship between Maximum Principle and Dynamic Programming in Stochastic Differential Games and Applications

This paper is concerned with the relationship between the maximum principle and dynamic programming in zero-sum stochastic differential games. Under the assumption that the value function is smooth enough, relations among the adjoint processes, the generalized Hamiltonian function, and the value function are given. A portfolio optimization problem under model uncertainty in the financial market is discussed to show the applications of our result.

Introduction

Game theory has been an active area of research and a useful tool in many applications, particularly in biology and economics. Among others, there are two main approaches to studying differential game problems. One approach is Bellman's dynamic programming, which relates the saddle points or Nash equilibrium points to some partial differential equations (PDEs) known as the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equations (see Elliott [1], Fleming and Souganidis [2], Buckdahn et al. [3], Mataramvura and Oksendal [4]). The other approach is Pontryagin's maximum principle, which finds solutions to the differential games via some Hamiltonian function and adjoint processes (see Tang and Li [5], An and Oksendal [6]).

Hence, a natural question arises: are there any relations between these two methods? For stochastic control problems, this topic has been discussed by many authors (see Bensoussan [7], Zhou [8], Yong and Zhou [9], Framstad et al. [10], Shi and Wu [11], Donnelly [12], etc.). However, to the best of our knowledge, the study of the relationship between the maximum principle and dynamic programming for stochastic differential games is quite lacking in the literature.

In this paper, we consider one kind of zero-sum stochastic differential game problem within the framework of Mataramvura and Oksendal [4] and An and Oksendal [6]. However, we do not consider jumps; this more general case will appear in our forthcoming paper. For our problem, [4] related the saddle point to an HJBI equation and obtained a stochastic verification theorem, while [6] proved both sufficient and necessary maximum principles, which state conditions of optimality via the Hamiltonian function and the adjoint equation.

The main contribution of this paper is that we connect the maximum principle of [6] with the dynamic programming of [4] and obtain relations among the adjoint processes, the generalized Hamiltonian function, and the value function, under the assumption that the value function is smooth enough. As applications, we discuss a portfolio optimization problem under model uncertainty in the financial market. In this problem, the optimal portfolio strategies for the trader (representative agent) and the "worst-case scenarios" (see Peskir and Shorish [13], Korn and Menkens [14]) for the market, derived from the maximum principle and dynamic programming approaches independently, coincide. The relation obtained in our main result is illustrated.
The rest of this paper is organized as follows. In Section 2, we state our zero-sum stochastic differential game problem. Under suitable assumptions, we reformulate the sufficient maximum principle of [6] in terms of the adjoint equation and the Hamiltonian function, and the stochastic verification theorem of [4] in terms of the HJBI equation. In Section 3, we prove the relationship between the maximum principle and dynamic programming for our zero-sum stochastic differential game problem, under the assumption that the value function is smooth enough. A portfolio optimization problem under model uncertainty in the financial market is discussed in Section 4 to show the applications of our result.

Notations: throughout this paper, we denote by $\mathbb{R}^n$ the $n$-dimensional Euclidean space, by $\mathbb{R}^{n\times d}$ the space of $n\times d$ matrices, and by $S^n$ the space of $n\times n$ symmetric matrices; $\langle\cdot,\cdot\rangle$ and $|\cdot|$ denote the scalar product and the norm in Euclidean space, respectively. A superscript $\top$ denotes the transpose of a matrix.

Problem Statement and Preliminaries

Let $T>0$ be given and let $(\Omega,\mathcal{F},\{\mathcal{F}_s\}_{s\ge0},P)$ be a complete filtered probability space carrying a standard Brownian motion $W(\cdot)$, where the filtration $\{\mathcal{F}_s\}$ is generated by $W$ and augmented by all $P$-null sets of $\mathcal{F}$. The dynamics of the stochastic system are described by the stochastic differential equation (SDE)
$$
dx(s)=b(s,x(s),\theta(s),\pi(s))\,ds+\sigma(s,x(s),\theta(s),\pi(s))\,dW(s),\quad s\in[t,T],\qquad x(t)=x,
$$
for an initial pair $(t,x)\in[0,T]\times\mathbb{R}^n$; we refer to the state-control pair $(x(\cdot),(\theta(\cdot),\pi(\cdot)))$ as an admissible pair.

We make the following assumption.

(H1) $b$, $\sigma$, $f$, $g$ are continuous functions. For any $(t,x)\in[0,T]\times\mathbb{R}^n$, we define the performance functional
$$
J(t,x;\theta,\pi)=E\Big[\int_t^T f(s,x(s),\theta(s),\pi(s))\,ds+g(x(T))\Big].
$$

The control process has the form $u(\cdot)=(\theta(\cdot),\pi(\cdot))$, where $\theta$ and $\pi$ take values in two sets $K_1$ and $K_2$, respectively; for a given $(t,x)$, such a control process (pair) satisfying the usual measurability and integrability requirements is called admissible. The intuitive idea is that there are two players, I and II: Player I controls $\theta$ and Player II controls $\pi$. The actions of the two players are antagonistic, in the sense that $J(t,x;\theta,\pi)$ is a cost for Player I and a reward for Player II.

We now define the Hamiltonian function
$$
H(s,x,\theta,\pi,p,q)=\langle b(s,x,\theta,\pi),p\rangle+\mathrm{tr}\big(\sigma^\top(s,x,\theta,\pi)q\big)+f(s,x,\theta,\pi).\tag{5}
$$

In addition, we need the following assumption.

(H2) $b$, $\sigma$, $f$ are continuously differentiable in $(x,\theta,\pi)$, and $g$ is continuously differentiable in $x$. Moreover, $b_x$, $\sigma_x$ are bounded, and there exists a constant $C>0$ bounding the growth of $f_x$ and $g_x$.

The adjoint equation in the unknown $\mathcal{F}_s$-adapted processes $(p(\cdot),q(\cdot))$ is the backward SDE
$$
\begin{cases}
dp(s)=-\big[b_x^\top p(s)+\sigma_x^\top q(s)+f_x\big](s,x(s),\theta(s),\pi(s))\,ds+q(s)\,dW(s),\\
p(T)=g_x(x(T)),
\end{cases}\tag{6}
$$
and, under (H2), BSDE (6) admits a unique $\mathcal{F}_s$-adapted solution $(p(\cdot),q(\cdot))$.

We can now state the following sufficient maximum principle, which is Corollary 2.1 in An and Oksendal [6].

Lemma 2.1 Let $(\hat\theta(\cdot),\hat\pi(\cdot))$ be an admissible control pair with corresponding state $\hat x(\cdot)$, and suppose that there exists a solution $(\hat p(\cdot),\hat q(\cdot))$ to the corresponding adjoint equation (6). Moreover, suppose that for all $s\in[t,T]$ the following minimum/maximum conditions hold:

1) $H(s,\hat x(s),\hat\theta(s),\hat\pi(s),\hat p(s),\hat q(s))=\min_{\theta\in K_1}H(s,\hat x(s),\theta,\hat\pi(s),\hat p(s),\hat q(s))$, with $(x,\theta)\mapsto H(s,x,\theta,\hat\pi(s),\hat p(s),\hat q(s))$ convex;

2) $H(s,\hat x(s),\hat\theta(s),\hat\pi(s),\hat p(s),\hat q(s))=\max_{\pi\in K_2}H(s,\hat x(s),\hat\theta(s),\pi,\hat p(s),\hat q(s))$, with $(x,\pi)\mapsto H(s,x,\hat\theta(s),\pi,\hat p(s),\hat q(s))$ concave;

3) If both cases (1) and (2) hold (which implies, in particular, that $g$ is an affine function), then $(\hat\theta(\cdot),\hat\pi(\cdot))$ is an optimal control (saddle point) and $V(t,x)=J(t,x;\hat\theta,\hat\pi)$.

Next, when the control process is Markovian, i.e., $(\theta(s),\pi(s))=(\theta(s,x(s)),\pi(s,x(s)))$ for measurable functions $\theta$ and $\pi$, we define, for $\phi\in C^{1,2}([0,T]\times\mathbb{R}^n)$, the generator
$$
A^{\theta,\pi}\phi(s,x)=\phi_s(s,x)+\langle b(s,x,\theta,\pi),\phi_x(s,x)\rangle+\tfrac12\mathrm{tr}\big(\sigma\sigma^\top(s,x,\theta,\pi)\phi_{xx}(s,x)\big).\tag{9}
$$
The following result is a stochastic verification theorem of optimality, which is an immediate corollary of Theorem 3.2 in Mataramvura and Oksendal [4].

Lemma 2.2 Let (H1), (H2) hold and let $V\in C^{1,2}([0,T]\times\mathbb{R}^n)$ satisfy the HJBI equation
$$
\inf_{\theta\in K_1}\sup_{\pi\in K_2}\big\{A^{\theta,\pi}V(s,x)+f(s,x,\theta,\pi)\big\}=0,\qquad V(T,x)=g(x).\tag{10}
$$
Then $V$ coincides with the value function of the game, and a Markovian control pair $(\hat\theta,\hat\pi)$ attaining the inf-sup in (10) is an optimal Markovian control.

Main Result

In this section, we investigate the relationship between the maximum principle and dynamic programming for our zero-sum stochastic differential game problem. The main contribution is that we find the connection between the value function $V$, the adjoint processes $(p(\cdot),q(\cdot))$, and the following generalized Hamiltonian function:
$$
G(s,x,\theta,\pi,p,P)=\tfrac12\mathrm{tr}\big(\sigma\sigma^\top(s,x,\theta,\pi)P\big)+\langle b(s,x,\theta,\pi),p\rangle+f(s,x,\theta,\pi),\qquad P\in S^n.\tag{12}
$$
Our main result is the following.
Theorem 3.1 Let (H1) and (H2) hold, let $(\hat\theta(\cdot),\hat\pi(\cdot))$ be an optimal Markovian control, and let $\hat x(\cdot)$ be the corresponding optimal state. Suppose that the value function $V\in C^{1,2}([0,T]\times\mathbb{R}^n)$. Then, for $s\in[t,T]$, $P$-a.s.,
$$
V_s(s,\hat x(s))=-G\big(s,\hat x(s),\hat\theta(s),\hat\pi(s),V_x(s,\hat x(s)),V_{xx}(s,\hat x(s))\big),\tag{13}
$$
$$
G\big(s,\hat x(s),\hat\theta(s),\hat\pi(s),V_x,V_{xx}\big)=\sup_{\pi\in K_2}G\big(s,\hat x(s),\hat\theta(s),\pi,V_x,V_{xx}\big),\tag{14}
$$
$$
G\big(s,\hat x(s),\hat\theta(s),\hat\pi(s),V_x,V_{xx}\big)=\inf_{\theta\in K_1}\sup_{\pi\in K_2}G\big(s,\hat x(s),\theta,\pi,V_x,V_{xx}\big),\tag{15}
$$
where $V_x$ and $V_{xx}$ are evaluated at $(s,\hat x(s))$. If, moreover, $V\in C^{1,3}([0,T]\times\mathbb{R}^n)$ and $V_{sx}$ is also continuous, then
$$
p(s)=V_x(s,\hat x(s)),\qquad q(s)=V_{xx}(s,\hat x(s))\,\sigma(s,\hat x(s),\hat\theta(s),\hat\pi(s)),\quad s\in[t,T],\ P\text{-a.s.},\tag{16}
$$
solves the adjoint equation (6).

Proof. (13)-(15) can be obtained from the HJBI equation (10), by the definitions of the generator $A^{\theta,\pi}$ in (9) and the generalized Hamiltonian function $G$ in (12). We proceed to prove the second part. If $V\in C^{1,3}$ and $V_{sx}$ is also continuous, then differentiating (15) with respect to $x$ along the optimal state and applying Itô's formula to $V_x(s,\hat x(s))$ shows that the pair in (16) satisfies the dynamics and terminal condition of (6). Hence, by the uniqueness of the solutions to (6), we obtain (16). The proof is complete. □

Applications

In this section, we discuss a portfolio optimization problem under model uncertainty in the financial market, where the problem is put into the framework of a zero-sum stochastic differential game. The optimal portfolio strategies for the investor and the "worst-case scenarios" for the market, derived from the maximum principle and dynamic programming approaches independently, coincide, and the relation obtained in our main result, Theorem 3.1, is illustrated.

Suppose that the investors have two kinds of securities in the market for possible investment choice: (1) a risk-free security (e.g., a bond), whose price $S_0(t)$ at time $t$ is given by $dS_0(t)=r(t)S_0(t)\,dt$; (2) a risky security (e.g., a stock), whose price $S_1(t)$ satisfies $dS_1(t)=S_1(t)\big[\mu(t)\,dt+\sigma(t)\,dW(t)\big]$.

Let $\pi(t)$ be a portfolio for the investors in the market, namely the proportion of wealth invested in the risky security at time $t$. Given the initial wealth $Y^\pi(0)=y_0>0$, we assume that $\pi(\cdot)$ is self-financing, which means that the corresponding wealth process satisfies
$$
dY^\pi(s)=Y^\pi(s)\Big[\big(r(s)+\pi(s)(\mu(s)-r(s))\big)\,ds+\pi(s)\sigma(s)\,dW(s)\Big].\tag{17}
$$
The family of admissible portfolios is denoted by $\Pi$. Now we introduce a family $\mathcal{Q}$ of measures $Q_\theta$ parameterized by processes $\theta(\cdot)$ via the Radon-Nikodym density process
$$
dZ^\theta(s)=Z^\theta(s)\theta(s)\,dW(s),\qquad Z^\theta(0)=z_0>0.
$$
If
$$
\mu(s)-r(s)+\theta(s)\sigma(s)=0,\tag{22}
$$
then $Q_\theta$ is an equivalent local martingale measure; but here we do not assume that (22) holds. All $\theta$ satisfying the admissibility conditions (20) and (21) are called admissible controls of the market, and their family is denoted by $\Theta$. The performance functional is
$$
J(\theta,\pi)=E\big[Z^\theta(T)\,U(Y^\pi(T))\big],
$$
where $U$ is a given utility function, which is increasing, concave, and twice continuously differentiable on $(0,\infty)$. We can consider this problem as a zero-sum stochastic differential game between the agent and the market: the agent wants to maximize his/her expected utility over all portfolios $\pi$, and the market wants to minimize the maximal expected utility of the agent over all "scenarios", represented by the probability measures $Q_\theta\in\mathcal{Q}$.

To put the problem in a Markovian framework so that we can apply dynamic programming, let $t\in[0,T]$ denote the initial time of the investment and define the 2-dimensional state process $X(s)=(Z^\theta(s),Y^\pi(s))$, which combines the Radon-Nikodym process with the wealth process.

Maximum Principle Approach

To solve our problem by the maximum principle approach, that is, by applying Lemma 2.1, we write down the Hamiltonian function (5) as
$$
H(s,z,y,\theta,\pi,p,q)=y\big(r(s)+\pi(\mu(s)-r(s))\big)p_2+z\theta q_1+y\pi\sigma(s)q_2,
$$
with $p=(p_1,p_2)^\top$ and $q=(q_1,q_2)^\top$. The adjoint equations (6) are
$$
\begin{cases}
dp_1(s)=-\theta(s)q_1(s)\,ds+q_1(s)\,dW(s), & p_1(T)=U(Y(T)),\\
dp_2(s)=-\big[\big(r(s)+\pi(s)(\mu(s)-r(s))\big)p_2(s)+\pi(s)\sigma(s)q_2(s)\big]\,ds+q_2(s)\,dW(s), & p_2(T)=Z(T)U'(Y(T)).
\end{cases}
$$
Let $(\hat\theta(\cdot),\hat\pi(\cdot))$ be a candidate optimal control, let $(\hat Z(\cdot),\hat Y(\cdot))$ be the corresponding state process, and let $(\hat p(\cdot),\hat q(\cdot))=((\hat p_1,\hat p_2),(\hat q_1,\hat q_2))$ denote the corresponding solution
of the adjoint equations. By the minimum/maximum conditions in Lemma 2.1, we first maximize the Hamiltonian function $H$ over all $\pi$; this gives the following condition for a maximum point $\hat\pi$:
$$
(\mu(s)-r(s))\hat p_2(s)+\sigma(s)\hat q_2(s)=0.
$$
Then we minimize $H$ over all $\theta$ and get the following condition for a minimum point $\hat\theta$:
$$
\hat q_1(s)=0.
$$
We try processes of the form $\hat p_1(s)=f(s)U(\hat Y(s))$ and $\hat p_2(s)=f(s)\hat Z(s)U'(\hat Y(s))$, with a deterministic differentiable function $f$ satisfying $f(T)=1$. Differentiating these by Itô's formula, using (17), and comparing the $dW(s)$ coefficients with the adjoint equations, we get
$$
\hat q_1(s)=f(s)U'(\hat Y(s))\hat Y(s)\hat\pi(s)\sigma(s),\qquad
\hat q_2(s)=f(s)\hat Z(s)\big[\hat\theta(s)U'(\hat Y(s))+\hat\pi(s)\sigma(s)\hat Y(s)U''(\hat Y(s))\big].
$$
The minimum condition $\hat q_1(s)=0$ then forces $\hat\pi(s)=0$, and substituting into the maximum condition gives
$$
\mu(s)-r(s)+\sigma(s)\hat\theta(s)=0.
$$
Letting $t=0$, we have proved the following theorem.

Theorem 4.1 The optimal portfolio strategy $\hat\pi$ for the agent is $\hat\pi(t)=0$ for all $t$. The optimal "scenario", that is, the optimal probability measure for the market, is to choose $\hat\theta$ such that $\mu(t)-r(t)+\sigma(t)\hat\theta(t)=0$. That is, the market minimizes the maximal expected utility of the agent by choosing a scenario (represented by a probability law $Q_{\hat\theta}$) which is an equivalent martingale measure for the market (see (22)). In this case, the optimal portfolio strategy for the agent is to place all the money in the risk-free security, i.e., to choose $\hat\pi(t)=0$ for all $t$. This result is the counterpart of Theorem 4.1 in An and Oksendal [6], without jumps and with complete information.

Dynamic Programming Approach

To solve our problem by the dynamic programming approach, that is, by applying Lemma 2.2, we write down the generator $A^{\theta,\pi}$ of the state process $(Z^\theta(\cdot),Y^\pi(\cdot))$ as
$$
A^{\theta,\pi}\phi(t,z,y)=\phi_t+\big(r+\pi(\mu-r)\big)y\,\phi_y+\tfrac12\pi^2\sigma^2y^2\phi_{yy}+\tfrac12\theta^2z^2\phi_{zz}+\theta\pi\sigma zy\,\phi_{zy}.
$$
Applied to our setting, the HJBI equation (10) takes the form $\inf_\theta\sup_\pi A^{\theta,\pi}V(t,z,y)=0$ with $V(T,z,y)=zU(y)$. We try a $V$ of the form $V(t,z,y)=zf(t)U(y)$ for some deterministic function $f$ with $f(T)=1$. Maximizing $A^{\theta,\pi}V(t,z,y)$ over all $\pi$ gives the following first-order condition for a maximum point $\hat\pi$:
$$
\big(\mu-r+\sigma\theta\big)U'(y)+\hat\pi\sigma^2y\,U''(y)=0,\tag{51}
$$
and minimizing over all $\theta$ gives the following first-order condition for a minimum point $\hat\theta$:
$$
\hat\pi\sigma zyf(t)U'(y)=0.\tag{52}
$$
From (52) we conclude that $\hat\pi=0$, which, substituted into (51), gives $\mu-r+\sigma\hat\theta=0$. The HJBI equation $A^{\hat\theta,\hat\pi}V(t,z,y)=0$ then states that, with these values of $\hat\pi$ and $\hat\theta$, the function $f$ satisfies an ordinary differential equation with terminal condition $f(T)=1$. Hence the optimal strategy for the agent is $\hat\pi(t)=0$ (i.e., to put all the wealth in the risk-free security), and the optimal "scenario" for the market is to choose $\hat\theta$ such that $\mu(t)-r(t)+\sigma(t)\hat\theta(t)=0$ (i.e., the market chooses an equivalent martingale measure, or risk-neutral measure, $Q_{\hat\theta}$). This result is the counterpart of Theorem 2.2 in Oksendal and Sulem [15] without jumps. Note that $V_z=f(t)U(y)$ and $V_y=zf(t)U'(y)$, so the ansatz used in the maximum principle approach agrees with relation (16) of Theorem 3.1.

Conclusions and Future Works

In this paper, we have discussed the relationship between the maximum principle and dynamic programming in zero-sum stochastic differential games. Under the assumption that the value function is smooth, relations among the adjoint processes, the generalized Hamiltonian function, and the value function are given. A portfolio optimization problem under model uncertainty in the financial market is discussed to show the applications of our result.
Many interesting and challenging problems remain open. For example, what is the relationship between the maximum principle and dynamic programming for stochastic differential games without the restrictive assumption that the value function is smooth? This problem may be solvable in the framework of viscosity solution theory (Yong and Zhou [9]). Another topic is to investigate the relationship between the maximum principle and dynamic programming for forward-backward stochastic differential games, and then to study its applications to stochastic recursive utility optimization problems under model uncertainty (Oksendal and Sulem [16]). Such topics will be studied in our future work.
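As a quick numerical illustration of Theorem 4.1, one can check in closed form that, for constant coefficients and logarithmic utility (illustrative assumptions of ours, not choices made above), $J(\theta,\pi)=E[Z^\theta(T)\log Y^\pi(T)]=\log y_0+T\big(r+\pi(\mu-r+\sigma\theta)-\tfrac12\pi^2\sigma^2\big)$, so at the worst-case scenario satisfying $\mu-r+\sigma\theta^*=0$ the agent's utility is maximized at $\pi=0$. The following minimal Monte Carlo sketch confirms this numerically; all parameter values are hypothetical:

```python
import numpy as np

# Monte Carlo sketch of the saddle point in Theorem 4.1 under
# illustrative constant coefficients and log utility (our assumptions).
rng = np.random.default_rng(0)
r, mu, sigma, T, y0, n = 0.03, 0.07, 0.2, 1.0, 1.0, 500_000
theta_star = (r - mu) / sigma          # worst case: mu - r + sigma*theta = 0

W_T = np.sqrt(T) * rng.standard_normal(n)
Z_T = np.exp(theta_star * W_T - 0.5 * theta_star**2 * T)  # dZ = Z*theta*dW

for pi in (-0.5, 0.0, 0.5):
    # closed-form solution of the wealth SDE for a constant proportion pi
    Y_T = y0 * np.exp((r + pi * (mu - r) - 0.5 * (pi * sigma) ** 2) * T
                      + pi * sigma * W_T)
    J = np.mean(Z_T * np.log(Y_T))     # expected log utility under Q_theta
    print(f"pi = {pi:+.1f}:  J = {J:.4f}")
# J peaks at pi = 0 (J ~ r*T), matching the optimal strategy pi_hat = 0.
```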
Development Challenges and Experience Enlightenment of Vocational Education in China and India under the Influence of Traditional Culture

Traditional culture, as a distinctive spiritual manifestation and value system within a nation or society, exerts notable inertia and permeability in shaping perceptions of vocational education, constituting a regulatory and constraining force in its development. China and India, both renowned ancient civilizations, presently occupy pivotal roles as the largest developing countries and fastest-growing economies worldwide. From the vantage point of traditional culture and its influence, this paper analyzes and compares the developmental challenges and reform experiences of vocational education in China and India. It is observed that in China, the influence of Confucian culture and the imperial examination system, shaped by the concept of "valuing morality over skills", has engendered low societal acceptance of vocational education, a misalignment of educational objectives and content, and unsuitable instructional and evaluative approaches, among other difficulties. Meanwhile, in India, the influence of the religious belief in "the unity of Brahman and Atman" and various cultural traits has produced large, disparate gaps in vocational education levels, a detachment from social demands, and low sociocultural acceptance, among other obstacles. To transcend the restraints imposed by traditional culture, China and India have both dedicated considerable effort to exploring vocational education reforms, obtaining tangible accomplishments along the way. Reciprocal exchange of experience, together with a joint quest for a common path toward breakthroughs, bears significant implications for the development of these two nations.

Introduction

Cultural research forms an important aspect of the comparative education domain. Gu Mingyuan explored the historical development of cultural research within comparative education, elucidating its manifestations and explaining the relationship between cultural research, the overcoming of Western cultural centrism, and the interaction between culture and education [1]. Vocational education originated in the context of industrialized societies, and its evolution correlates closely with broad socio-economic activities. Consequently, scholars have primarily studied vocational education from an economic perspective. As research deepened, some scholars began exploring vocational education from a cultural perspective, expanding their efforts to the international level and comparing the varied characteristics of vocational education arising from different cultural backgrounds in different nations. Zhu Xiaobin explicated how German ethnic thinking, value orientations, and cultural heritage deeply shaped the "dual system" vocational education model adopted in Germany. Zhu further examined the influence of traditional Chinese national culture on vocational education, revealing the fundamental reasons underlying the disparities between the Chinese and German vocational education systems [2]. This comparative research approach still exhibits characteristics rooted in Western cultural centrism, focusing on the differences and similarities between China and developed countries.
Both China and India are ancient civilizations with long-standing histories and cultural legacies, as well as a shared experience of invasion and colonization by the West in modern times. At present, both nations rank among the largest developing countries and fastest-growing economies worldwide. In light of these similarities, this paper aims to explore and compare the developmental challenges of vocational education in both countries and the traditional cultural factors behind them. By briefly analyzing the reform experiences of both nations, mutual exchange and learning in the search for solutions will bear significant implications for both the vocational education and the economic development of China and India.

The Connotation of Traditional Culture

Traditional culture refers to the sum of the unique value systems, ways of thinking, customs, lifestyles, aesthetic preferences, and religious beliefs developed by a nation or ethnic group during its long history of social development. As a form of social spirituality, traditional culture embodies the essence of the national spirit and features fundamental and stable characteristics. In turn, it exerts significant influence on social and historical development owing to its strong vitality and power of influence. Samuel P. Huntington, an American political theorist and philosopher, defined culture as the commonly held values, attitudes, beliefs, orientations, and perceptions of a society [3]. This paper defines traditional culture from its spiritual core as the spirit, character, and values of a nation or ethnic group.

The Relationship between Traditional Culture and Vocational Education

Culture and education are deeply interlinked, having grown and evolved together over the course of human history. Traditional culture changes and develops over time as it is passed down from generation to generation, reflecting distinctiveness and embodying a process of accumulation and inheritance; it forms a corresponding system of values and regulates and adjusts education's developmental trajectory and methods in various ways. Vocational education, as a type of education, aims to cultivate students' vocational abilities through systematic instruction, enabling them to master the knowledge, skills, and attitudes needed for a particular profession. Compared with general education, vocational education has a more explicit career orientation and is closely linked to political, economic, cultural, and other issues in social development. On the one hand, it cannot break through the barriers of cultural tradition; on the other hand, it can inject new vitality into that tradition. Traditional culture, as the unique set of values and beliefs of a nation, possesses strong historical inertia and permeability. It influences people's perceptions of vocational education, determines vocational education's social status, and guides the establishment of cultivation objectives and talent standards that meet the public's psychological expectations, subsequently shaping the teaching contents and methods of vocational education.
The Restriction of Traditional Culture on Vocational Education in Different Countries

Culture is a set of shared psychological programs among people in a particular environment that distinguishes them from others. Owing to differences in historical process, geographical location, and life experience, each ethnic group and nation has accumulated and formed a unique traditional culture during its development. Different forms of traditional culture, like soil, nurture the vocational education thoughts, systems, behaviors, and material forms that are closely related to them. For example, Germany has always emphasized a practical orientation toward craftsmanship and skills, prioritizing training and practical learning, which has given vocational education in Germany a broad popular cultural foundation and a strong sense of social identity. Meanwhile, influenced by the "gentleman culture", the UK values traditions, prioritizes the humanities over technology, and focuses on training elite individuals with "gentlemanly manners", which, to some extent, has limited the development of vocational education in the United Kingdom [4].

The Challenges of Vocational Education under the Influence of Chinese Traditional Culture

In China, traditional culture can be generally viewed as a system consisting of Confucianism as the main body, complemented by the doctrines of Taoism and Buddhism [5]. It has always placed emphasis on individual moral cultivation while undervaluing production and technological knowledge, a stance known as "valuing morality over skills". The concept of "Dao" denotes the original principle of all things and human nature, and the governing principle of state affairs; the concept of "Qi" denotes the material end of things, useful for practical life and material invention. This cultural tradition produced a value system that looked down upon technical skills. Although ancient China had skilled artisans and a traditional master-disciple system, these craftsmen were unable to transcend feudal hierarchies. Technological practice was often demeaned as "crude work" or "insignificant skill", in contrast to lofty moral pursuits such as "self-cultivation", "family harmony", "governing the country", and "bringing peace to the world". In addition, the imperial examination system and the promotion of Confucian officials, as a national political instrument of measurement, reinforced and consolidated this value system, creating the belief in "valuing learned officials above everything else". As a product of industrial production and technological revolution, vocational education is responsible for carrying forward skills and cultivating professional talent, which contradicts traditional Chinese culture's emphasis on learning over technology. Within such a cultural context, China's vocational and technical education faces problems of low social recognition, misplaced education goals and content, and inappropriate teaching and evaluation methods.
Low Social Recognition

The cultural tradition of "valuing morality over skills" has not only constrained the development of science and technology in China but also affected the value and recognition accorded to people working in different professions. The saying "those who work with their minds govern others, while those who work with their physical labor are governed by others" not only reflects a distinction between physical and mental labor but also affirms the strict social hierarchies of feudal Chinese society. The imperial examination system further reinforced the traditional value orientations of "learning for the purpose of officialdom" and "valuing learned officials above everything else", as well as the "officials-first" conception of talent. These traditional views have become deeply rooted in China's cultural veins, leading to a heavy dependence on the "examination-oriented" education system and resistance to change. Even today, many companies remain biased toward an individual's academic background as the primary criterion for selecting and hiring talent. Under this bias toward academic qualifications over practical skills, especially after the expansion of higher education, vocational education is often viewed by parents and students as a second-rate "last resort" option.

Misplaced Education Goals and Contents

The goal of vocational education is to cultivate practical talents, scientific and technological talents, and workers who are skilled in production. However, implementing this goal is challenging. The Confucian culture and imperial examination system that have influenced China's education for thousands of years traditionally focused on educating gentlemen who could govern the country rather than developing people with specific skills and knowledge for production. For individuals, education was seen as a means to gain official positions and social status rather than a way to acquire practical skills and individual development [6]. These traditional educational concepts and beliefs about talent have long been embedded, limiting the selection of teaching content and methods mainly to historical culture and moral ethics while downplaying the importance of the natural sciences and production skills. In ancient times, vocational education was limited in scope, and vocational skills were taught only through informal apprenticeships. Modern vocational and technical education, taught in a typical school format and drawing on Western education models, has existed in China for only a little more than 100 years. Owing to the inertia of traditional educational concepts, China's vocational education goals and teaching content have deviated from their essential characteristics of being "occupational" and "practical", emphasizing theoretical knowledge over practical skills.
Inappropriate Teaching and Evaluation Methods

The traditional collective teaching system of being "deaf to what happens outside while focusing only on studying saintly books" has led to a narrow and closed "classroom-centric" view, which has affected teaching quality and the development of students' personalities. The imperial examination system, which lasted more than 1,300 years, forced school education to emphasize literature rather than practical studies. The prevailing study atmosphere emphasized dogmatism and formalism, valuing authority over innovation and inheritance over development, and failed to adequately nurture scientific spirit, independent thinking, innovation, and critical thinking. Vocational education, as specialized technical education, should focus on practical skills training and adapt to the needs of industrial development [7]. However, China's vocational education has been stamped with the imprint of traditional education in its teaching organization, teaching methods, and approach to personal development. It focuses on teaching theory in the classroom and relying on experience in practice, without giving sufficient attention to developing practical skills or adjusting content to changing real-world conditions. Moreover, the imperial examination tradition has severely constrained educational evaluation and assessment. Even today, exams remain the most "effective" method for evaluating educational achievement in most vocational colleges. By merely allowing students to pass exams and earn certificates, vocational education has blurred its characteristic features and become a "subsidiary" and "supplement" of general education, and students trained in this way inevitably lack differentiation and competitiveness.

The Challenges of Vocational Education under the Influence of Indian Traditional Culture

India is known as a "museum of religions", and religiosity is an important characteristic of Indian traditional culture. Hinduism is one of the world's oldest religions, with 83% of the Indian population among its followers, and its dominant idea is "the unity of Brahman and Atman". "Brahman" is a supernatural force that can be regarded as the creator and master of the universe and as the universe as a whole; "Atman" is the subjective world, a manifestation of "Brahman" in the human world [8]. This concept requires people to overcome their material desires and maintain the original harmonious state of nature, reflecting an emphasis on hierarchical order in social life that is most prominently manifested in the caste system. The caste system is the most important social system and norm in Indian tradition. Driven by the needs of political power, it underwent multiple adjustments and was solidified under British rule, becoming a strictly class-based hierarchical system. Additionally, prolonged division and conflict have made Indian culture complex and diverse, and, through the harmony and inclusiveness of Hinduism, it has absorbed and fused reasonable elements of many foreign cultures, forming India's characteristic diversity in unity. Against this background, vocational education in India faces challenges such as large gaps in development, disconnection from social needs, and low social acceptance.
A Big Gap in the Level of Educational Development

India is a country of complex cultural diversity whose social structure presents a bipolar character. There are huge gaps in regional development, varying degrees of modernization, and significant income inequality, with severe disparities between rich and poor. Because of the elitist education system introduced during colonial rule, India's cultural education long lagged behind. When India became independent in 1947, the national illiteracy rate was over 80%, and vocational education was almost nonexistent. After independence, the Indian government actively explored mass education and gradually promoted vocational education in regular schools. However, since India operates under a decentralized power structure, the implementation of vocational education is the responsibility of each state, and the lack of a clear division of responsibility between the central and state governments leads to inefficient management. As a result, it is difficult to implement nationwide reforms effectively. At the same time, the level of compulsory education varies between regions. High dropout rates in primary education impede the progress of vocational education, resulting in low enrollment in vocational education and difficulty in achieving set goals. Furthermore, vocational education in India displays hierarchical differentiation and lacks a nationally standardized system. Among its three components (vocational education, vocational training, and technical education), technical education, especially software technology training, is the most outstanding. Technical education institutions are more flexible in their approach, providing updated curricula and stronger vocational training and establishing national vocational qualifications that meet both industrial development and personal needs. However, these institutions still fail to resolve the employment problem or to establish a nationally standardized system.
Disconnection from Social Needs

Unlike Western modern civilization's emphasis on individual needs and values, and unlike the Chinese pursuit of "learning for the purpose of officialdom", traditional Indian culture has a religious character that values spirituality over materialism. It does not emphasize material or political success but regards spiritual liberation as the highest goal of life, emphasizing personal spiritual enlightenment. Projected onto education, this forms the internal logic of Indian education's development from the divine, to inspiration, to self-knowledge. It pays less attention to the application of knowledge and to serving society, and it does not focus on the social value of individuals. This results in a disconnect between vocational education and market demands: curricula cannot target the skills needed by economic and industrial development or be adjusted to changes in market needs. As a result, graduates have poor employment prospects and low wages. Furthermore, Indian vocational education is designed to exist in parallel with academic education, with vocational education departments almost entirely isolated from general higher education. Vocational education is therefore viewed as a tool for academic diversion rather than a high-quality option for nurturing students' talents, improving employment prospects, or achieving life goals. From another perspective, however, the highly spiritualized tendency of traditional Indian culture, characterized by "subjective introspection", is well suited to the thought processes of modern Indian technological development. It allows Indian intellectuals to be patient and methodical in scientific research, leading to more creative research results.

Low Social Identity

In ancient India, the emphasis on social hierarchy was reflected most prominently in the maintenance of the caste system. This system made the majority of people unconditionally follow the behavioral norms of their caste in emotions, thoughts, and actions, effectively maintaining social hierarchy, stability, and harmony. However, it also increased social inequality and created barriers between groups, making people passive, conservative, and resistant to change. Although the caste system was abolished after independence, its influence remains widespread and persistent in daily life and has become a fundamental obstacle to India's modernization. This, in turn, has substantially shaped individual and societal attitudes toward vocational education. Societal conceptions of occupational status also limit the attractiveness of vocational education in fields such as mechanics, electronics, installation, and cosmetics. These occupations, and the education leading to them, are often regarded as a means of making a living for the lower social classes, which further lowers the social status of vocational education. Furthermore, the academically elitist orientation of the education system views vocational education as secondary, making it a reluctant option for students who perform relatively poorly academically. Vocational education graduates also face poorer employment prospects and lower wages, contributing to the poor social perception of vocational education. Lastly, Indian women participate in vocational education at lower rates, as traditional views of gender roles favor women staying at home for domestic work.
Experience of Vocational Education Reform in China and India

China and India are both countries with long-standing, deeply rooted, and rich cultural traditions, and both experienced the intrusion of Western colonialism in recent history. Modern vocational education in these countries was established on the basis of experience borrowed from developed nations. Although such practices were initially effective, as vocational education advanced, "obstacles of cultural adaptation" gradually emerged. Despite differences in specific cultural expressions, national conditions, and economic situations, the two countries confront similar challenges in vocational education. To break the shackles of traditional culture, China and India have undergone significant reforms and explorations in the field of vocational education.

Following its independence, India overturned its culture-centered education model, investing vigorously in engineering and technological education, which has cultivated a large pool of skilled professionals. In the past decade, it has established new institutions, including vocational colleges, undergraduate vocational degree programs, and community colleges. It has also formulated and implemented a national vocational qualification framework, which together construct a modern vocational education system with vertical connections within vocational education and horizontal connections to general education at the tertiary level [9]. With development goals such as "Skill India", "Make in India", and "Digital India", India places greater emphasis on the role of vocational skills in economic construction. Through measures such as promoting vocational education in secondary and tertiary institutions, increasing investment, improving standards, and effectively integrating resources, India is taking effective steps to ensure the availability of a skilled workforce.

After the establishment of the People's Republic of China, vocational education was long positioned at the elementary and secondary levels; tertiary vocational education emerged only after the reform and opening-up. Previously, the education system had prioritized humanistic values and pursued political and ethical aims; it has since shifted toward utilitarianism as the theoretical basis for social and economic values, pursuing market and contractual benefits and highlighting the applicability, vocational orientation, and effectiveness of education [10]. In 1996, the Vocational Education Law established the legal status of higher vocational education. Over the last twenty years, the scale of vocational education has expanded rapidly, and new concepts and institutional mechanisms have been continuously innovated. Today, vocational education is moving toward combining vocational and academic education, integrating theory and practice, and building a modern vocational education system that emphasizes the development of vocational undergraduate education.
Conclusions

It is evident that blind borrowing and dogmatic learning that ignore cultural background have minimal effect, whereas reform tailored to local conditions can weaken the negative impact of traditional culture and promote the development of vocational education. Traditional culture can be a double-edged sword for any country. We must therefore guide development according to the situation, exploring the distinctive vocational and technical genes of Chinese culture. By combining the reasonable elements of imported theories with actual conditions in China, we can find effective models that conform both to the universal laws of vocational education and to the country's cultural traditions. In this way, we can foster a large number of high-quality technical and skilled personnel who meet the demands of national development and the needs of the times.

Today, both China and India are in a period of rapid economic development, possess vast labor forces and market potential, and are committed to transforming their manufacturing industries and promoting technological innovation. Both face an urgent need for skilled personnel and for converting their demographic dividends, and improving vocational education is a key element in doing so. Given their similar historical and cultural backgrounds and educational development challenges, exchange and mutual learning between China's and India's vocational education systems are crucial. Balancing the relationship between traditional culture and economic development, transforming the negative factors of traditional culture into positive ones, and exploring vocational education paths suited to each country's traditional culture can open broad prospects for vocational education and national development in China and India.
Impact of prosthetic rehabilitation on oral health-related quality of life of Saudi adults: A prospective observational study with pre-post design

This study aimed to assess the impact of prosthetic treatment on the quality of life of partially or completely edentulous patients using the Oral Health Impact Profile-14 (OHIP-14) scale. This pre-post observational study was conducted at the College of Dentistry, Imam Abdulrahman Bin Faisal University, Saudi Arabia, between November 2022 and September 2023. Eligible participants were aged between 26 and 80 years, needed prosthetic treatment, and were able to complete the questionnaire voluntarily. The questionnaire presented to the patients had two sections: the first included demographic and denture-related variables, and the second included the OHIP-14 questionnaire. Differences in overall OHIP-14 scores after treatment in relation to demographic and prosthesis-related factors were assessed using the Mann-Whitney U test or Kruskal-Wallis test, with a significance level of 0.05. Of the 108 participants, 65 were males and 43 were females, with an average age of 52 years and different prosthetic treatments (13.9 % fixed prostheses, 43.5 % removable partial dentures, and 42.6 % complete dentures). 59.3 % brushed their teeth twice or more daily, and only 36.1 % attended regular dental check-ups. Comparison of the OHIP-14 items before and after treatment revealed improvement in all domains. OHIP-14 scores did not differ significantly by age, gender, or education after treatment. The OHIP-14 score was considerably higher for patients with medical conditions (P = 0.007) and among complete denture wearers compared with patients with fixed prostheses (P = 0.025). Prosthetic treatment positively impacts oral health-related quality of life (OHRQoL), which improved after treatment, particularly in the social domain. There was an association between patients' medical condition, prosthesis type, and OHIP-14 score.

Introduction

Edentulism affects individuals' speech, mastication, esthetics, and psychological well-being (Ozdemir et al., 2006). Accordingly, dental professionals are required to design a proper treatment plan that addresses patients' chief complaints and meets their expectations with an acceptable dental prosthesis (Tabassum et al., 2017).

OHRQoL is a multidisciplinary concept that assesses biological and psychological conditions linked to oral health. Information derived from the OHRQoL concept is helpful for developing patient-focused treatment plans in clinical practice. In education, it teaches health personnel to consider the patient's specific needs and problems rather than only a treatment problem or outcome (Campos et al., 2021). Moreover, the dental field can then aim at improving the population's health rather than focusing only on developing innovative dental techniques or treatments (John, 2021).

Several indicators have been recognized by the World Health Organization to assess OHRQoL, but the most comprehensive is the Oral Health Impact Profile (OHIP) (Slade and Spencer, 1994). The OHIP-14 questionnaire is a concise index derived from the extensive 49-item OHIP measure (Locker, 1988). Previous studies have assessed the correlation between OHIP-14 parameters and tooth loss (Rodakowska et al., 2022; Rocha et al., 2016; Bortoluzzi et al., 2012). Anbarserri et al. (2020) and Imam (2021) reported an adverse effect of tooth loss on OHRQoL.
Multiple factors affect treatment selection and OHRQoL, including demographic variables, patients' experience of wearing dentures, the technique of denture fabrication, the dentist's clinical expertise, and the patient-dentist relationship (Oweis et al., 2022). The lack of literature discussing this topic in Saudi Arabia motivated the authors to investigate the contributing factors affecting OHRQoL among the Saudi population.

The current study aims to assess the impact of prosthetic treatment on the quality of life of partially and completely edentulous patients using the OHIP-14 scale. The null hypothesis states that sociodemographic variables and prosthetic treatment will not significantly affect the oral health-related quality of participants' lives.

Study sample and design

This pre-post observational comparative study was conducted at the College of Dentistry, Imam Abdulrahman Bin Faisal University, Saudi Arabia, between November 2022 and September 2023. The study was granted ethical approval by the Institutional Research Ethics Board before initiation (IRB-2022-02-468). Eligible patients signed an informed consent form before undergoing the examination procedure and answering the survey questions. The consent form stated that participation was voluntary and that patients could withdraw their consent at any time; it also included the names of the investigators, the study title, and the aim of the study.

A sample size of 108 patients was calculated based on a 5 % alpha error, 80 % power, and the change in OHIP-14 score after 1 month of prosthetic rehabilitation, corresponding to an effect size of 0.284 according to a previous study (Fueki et al., 2015). Adults were eligible to participate if they were above 25 years old, needed prosthetic replacement in partially or completely edentulous jaws, and could complete the questionnaire without assistance. Patients were excluded if they had dementia or a systemic disease that could affect the treatment outcome, such as a neuromuscular disorder, temporomandibular joint disorder, or severe bone resorption. Additionally, patients who did not complete their treatment or did not attend the follow-up appointments were excluded, as were patients whose dental prostheses were rated unacceptable by the senior staff. Patients whose treatment plan included only a single crown were also excluded.
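For orientation, the reported sample size is consistent with a standard power calculation for a paired pre-post comparison. The sketch below is our own illustration (the study does not state which tool was used for this step) and assumes the pre-post change was powered like a paired t-type test on the change scores:

```python
# Minimal power-analysis sketch, assuming the pre-post change was
# powered like a paired (one-sample) t-test with the stated inputs.
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.284, alpha=0.05,
                             power=0.80, alternative='two-sided')
print(round(n))  # ~100 completers; allowing for dropouts yields ~108
```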
Prostheses fabrication and evaluation

The dental prostheses were fabricated at the prosthodontic clinics of the College of Dentistry. The quality of the dental prostheses was evaluated under the supervision of senior prosthodontic specialists, following standardized methods and the rubrics of the corresponding prosthesis. One investigator was responsible for evaluating the removable partial or complete dentures, and another investigator evaluated the fixed partial dentures. The rubrics covered all fabrication steps for complete, partial, and fixed dentures. The complete and partial removable dentures were evaluated for adequate retention, stability, occlusion, esthetics, phonetics, vertical dimension of occlusion, and freeway space. Fixed partial dentures (FPDs) were considered adequate according to the criteria in Ryge's guidelines (Crisp et al., 2008; Sulaya and Guttal, 2020). The requirements in the assessment of FPDs included anatomical contour, color stability (freedom from staining), marginal adaptation, shade matching, surface smoothness, pontic ridge design, periodontal health, speech, masticatory efficiency, absence of pain, and absence of porcelain chipping or fracture. Any dental prosthesis the supervisors judged unacceptable according to these criteria was remade for the patient.

Questionnaire

The questionnaire included two sections and was distributed to the patients by a dentist (MG) who did not participate in the treatment of any patient at any stage. The first section included demographic and denture-related variables: patients' age, gender, income, education, medical history, smoking habits, and past prosthetic history (number of previous complete or partial dentures and time since the current dentures were placed).

The second section included the OHIP-14; patients were asked to answer the Arabic version of the OHIP-14 questionnaire (Osman et al., 2018; Al Habashneh et al., 2012) before dental treatment at the first admission and at the follow-up session after one month of using the dental prosthesis. The OHIP-14 questionnaire comprises 14 items sorted into seven domains (functional limitation, physical pain, psychological discomfort, physical disability, psychological disability, social disability, and handicap). For each OHIP-14 item, patients were asked how frequently they had experienced the impact of that item. A five-point Likert scale was used to record the responses: 0, never; 1, hardly ever; 2, occasionally; 3, fairly often; and 4, very often. The total OHIP-14 score was calculated by summing all responses and thus ranged from 0 to 56 points. Participants with high OHIP-14 scores had poor OHRQoL and decreased satisfaction with the dental prosthesis.
Statistical analysis

At baseline, the internal consistency of the OHIP-14 was measured using Cronbach's alpha to capture the extent of agreement among all domains and items. Alpha values > 0.80 indicate a reliable scale, while values > 0.70 indicate an acceptable scale. The normality of the OHIP-14 items was checked using the Kolmogorov-Smirnov test and quantile-quantile (Q-Q) plots, and a non-normal distribution was confirmed. OHIP-14 scores were presented using the mean, standard deviation, median, minimum, maximum, and interquartile range, while frequency and percentage were used for the qualitative variables. Differences in patients' responses before and after treatment were assessed using the Wilcoxon signed-rank test. After treatment, differences in overall OHIP-14 scores in relation to demographic and prosthesis-related factors were assessed using the Mann-Whitney U test or the Kruskal-Wallis test. The significance level was set at a P-value of 0.05, and all tests were two-tailed. Data were analyzed using IBM SPSS Statistics for Windows, version 23 (IBM Corp., Armonk, NY, USA).

Results

Of 137 participants, 108 adults completed the study, a response rate of 78.8 %. Reliability analysis showed a Cronbach's alpha of 0.922, indicating strong internal reliability of the scale. Alpha values for almost all items ranged from 0.722 to 0.820, indicating acceptable items (Table 1). The participants' mean age was 51.82 ± 13.12 years; 60.2 % were males, 95 % were married, 40.7 % were school graduates, 38 % had a monthly income of more than 1000 SAR, and 56.5 % had a family size of 5 to less than 10 members. Most participants were non-smokers (74.1 %) and had no medical conditions (62 %) (Table 2). Fig. 1 presents the prosthesis-related factors and oral health behaviors. Among the participants, 13.9 %, 43.5 %, and 42.6 % had fixed prostheses, removable partial dentures, or complete dentures, respectively. 28 % had worn dentures for longer than one year, and 31.5 % had no previous dentures. 59.3 % brushed their teeth twice or more daily, and only 36.1 % attended regular dental check-ups.

Although the comparison of OHIP-14 items before and after treatment revealed improvement in all domains, only the social impact domain showed a significant reduction, in the social disability and handicap scores (P = 0.012 and P = 0.012, respectively) (Table 3).

There were no significant differences in OHIP-14 scores after treatment with regard to age, gender, or education. However, male patients under 50 with an education level below high school reported a greater impact on their oral health. Similarly, patients who had had two or more previous dentures had higher OHIP-14 scores (4.29 ± 4.44), followed by those with one previous denture (3.69 ± 4.57), compared with new denture wearers (3.33 ± 6.51), but the difference was not significant.

The OHIP-14 score was significantly higher for patients with medical conditions (P = 0.007) and for those wearing complete dentures compared with those with fixed prostheses (P = 0.025). Patients who brushed infrequently and attended dental check-ups irregularly showed higher, though not significantly different, OHIP-14 scores than those who brushed their teeth regularly and underwent regular dental examinations (Table 4).
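To make the scoring and testing pipeline described above concrete, the following sketch reproduces its main steps (total-score computation, Cronbach's alpha, and the pre-post Wilcoxon signed-rank test) in Python with hypothetical data; the study itself used SPSS, and the variable names here are our own:

```python
import numpy as np
from scipy.stats import wilcoxon

def ohip14_total(items):
    """items: (n_subjects, 14) array of item scores (0-4); total is 0-56."""
    return items.sum(axis=1)

def cronbach_alpha(items):
    """Classical Cronbach's alpha across the 14 items."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

# Hypothetical pre/post item scores for 108 subjects, for illustration only.
rng = np.random.default_rng(1)
pre = rng.integers(0, 5, size=(108, 14))
post = np.clip(pre - rng.integers(0, 3, size=(108, 14)), 0, 4)

print("Cronbach's alpha at baseline:", round(cronbach_alpha(pre), 3))
stat, p = wilcoxon(ohip14_total(pre), ohip14_total(post))  # paired test
print("Wilcoxon signed-rank: W =", stat, ", p =", p)
```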
Discussion

The present study evaluated the influence of various dental prostheses on the oral health impact profile among adult patients. The null hypothesis was partially accepted: the sociodemographic factors did not significantly affect the OHIP-14 scores, while the type of dental prosthesis and the medical condition significantly affected the oral health-related quality of patients' lives.

The results showed a decrease in the OHIP-14 total score and in all domains after treatment, with a significant reduction in the social disability and handicap scores. Similar findings were reported by Nunez et al. (2015) and Regis et al. (2013), who evaluated quality of life (QoL) related to oral conditions using the Brazilian version of the OHIP-Edentulous scale after treatment with conventional and simplified complete dentures (CDs) during short-term follow-up of up to 6 months. In line with the present findings, Martins et al. (2022) showed improved OHRQoL among patients who had worn CDs for at least three months in one or both arches, with the positive impact maintained over one year of use.

The patients' OHIP-14 scores were highest for complete dentures, followed by removable partial dentures, and lowest for fixed partial dentures, with significant differences. Previous studies have reported high patient satisfaction with fixed dental prostheses (Albaqawi et al., 2023; Kashbur and Bugaighis, 2019). This may be attributed to fixed prostheses feeling like natural teeth, unlike removable dentures, in addition to their superior esthetics and function. Nevertheless, the present results showed improvement in patients' OHIP-14 scores after treatment with all tested dental prostheses, in agreement with previous studies (Shrestha et al., 2020; Montero et al., 2012; Preciado et al., 2013a).

The 'physical pain' subscale showed the highest scores before and after treatment among the OHIP-14 domains, in line with the literature. The reason could be that it is the most integral component of the decline in self-perceived OHRQoL with complete removable prostheses. The 'psychological disability' and 'social disability' subscales are essential causes of general patient concern (Meijer et al., 2003). The results showed a significant reduction in the 'social disability' and 'handicap' subscales after treatment; the total score was likewise reduced after treatment, although without statistical significance.

The sociodemographic variables did not significantly influence the OHIP score after prosthetic treatment. Similar findings were reported previously, with patients' age and gender showing no significant influence on their quality of life following the use of dental prostheses (Shrestha et al., 2020; Niakan et al., 2024; Perea et al., 2015a; Preciado et al., 2013a). This may be explained by the worldwide increase in health awareness targeting a vast population. Another reason could be the similar circumstances of the patients in this study, who were all treated at the same university hospital. In agreement, Poljak-Guberina et al. (2005) found that sociodemographic factors did not significantly affect patients' satisfaction with dental prostheses. Furthermore, the educational level did not substantially impact OHIP scores after treatment. This finding aligns with prior studies demonstrating a lack of association between education and the outcome measure (Preciado et al., 2012; Preciado et al., 2013b). However, Deeb et al.
(2020) stated that the OHIP in individuals receiving removable dental prostheses is significantly affected by smoking, socioeconomic position, educational attainment, and health.

Nevertheless, the elderly participants in this study exhibited comparatively lower OHIP scores than their younger counterparts, although the differences were not statistically significant. This observation may be attributed to the fact that older individuals often encounter numerous medical conditions, leading them to tolerate dental problems (Perea et al., 2013; Perea et al., 2015b; Preciado et al., 2012; Preciado et al., 2013a).

In the present study, patients' history of denture wear did not significantly affect overall satisfaction. The results also showed lower scores for patients who had used dentures for more than one year compared with those with no experience or less than one year of experience, but the difference was not significant, which is in agreement with previous studies (Erić et al., 2017; Marin et al., 2014; Oweis et al., 2022). Patients' previous experience with denture wear might have improved their adaptation to, and satisfaction with, the new denture (Asli et al., 2021).

Patients who suffered from medical illnesses reported higher OHIP-14 scores. Limited data are available on the correlation between complete denture treatment and OHRQoL in patients with systemic conditions. However, previous studies investigated the association between diabetes mellitus and periodontal diseases and their impact on OHRQoL. Diabetes mellitus is accompanied by various oral mucosal problems, including dry mouth, denture stomatitis, and increased candida adhesion (Verhulst et al., 2019; Rohani, 2019; Reddy et al., 2017). In light of these findings, the authors propose that the high OHIP-14 scores in diabetic patients might result from impaired denture stability, erythema, and discomfort caused by the deteriorated oral conditions generated by diabetes mellitus. Previous studies found better OHRQoL (i.e., lower OHIP scores) among healthy patients after complete denture treatment, confirming the present study's findings (Ganapathy et al., 2013; Nikbin et al., 2014; Radovic et al., 2014).

Examining the impact of various prosthetic types on patients' OHRQoL is a notable aspect of the present study. It aids in predicting the most clinically suitable prosthesis type, taking into account individuals' socio-demographic profiles and clinical characteristics. Consequently, it can contribute substantially to the decision-making process during discussions of prosthesis type and patient education (Perea et al., 2015a; Perea et al., 2015b).

However, one of the limitations of this study is that the participants were enrolled from a single center; consequently, caution is warranted when interpreting the results. Another limitation is the short follow-up period. Furthermore, given the relatively small number of participants who received each type of prosthetic treatment, a multi-center study is strongly advised, and further studies with a larger sample size and an extended follow-up period are needed. Moreover, future studies evaluating the effect of oral rehabilitation in completely edentulous patients only, using another questionnaire, would be essential to verify the present results.
Conclusions

The replacement of missing teeth has a positive effect on the social impact domain of OHRQoL. The patient's medical condition and the type of dental prosthesis significantly affect their OHIP score.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig. 1. Prosthesis-related factors and oral health behaviors of the study participants.

Table 1. Internal consistency of the OHIP-14.

Table 2. Demographic characteristics of the study participants.

Table 3. Comparison between OHIP-14 subscales and overall score before and after treatment.

Table 4. Comparison of overall OHIP-14 score after treatment in relation to demographics, denture-related factors, and oral health behaviors. γ Post hoc analysis between fixed prosthesis vs. complete denture (P = 0.020).